
Front cover


Implementing the IBM System Storage SAN Volume Controller with IBM Spectrum Virtualize V7.6

Jon Tate
Maximilian Hart
Hartmut Lonzer
Tarik Jose Maluf
Libor Miklas
Jon Parkes
Anthony Saine
Lev Sturmer
Marcin Tabinowski

Redbooks

International Technical Support Organization

Implementing the IBM System Storage SAN Volume Controller with IBM Spectrum Virtualize V7.6

December 2015

SG24-7933-04

Note: Before using this information and the product it supports, read the information in “Notices” on
page xvii.

Fifth Edition (December 2015)

This edition applies to IBM SAN Volume Controller and IBM Spectrum Virtualize software Version 7.6 and the
IBM SAN Volume Controller 2145-DH8.

This document was created or updated on February 4, 2016.

Note: This book is based on a pre-GA version of a product and may not apply when the product becomes
generally available. We recommend that you consult the product documentation or follow-on versions of
this IBM Redbooks publication for more current information.

© Copyright International Business Machines Corporation 2015. All rights reserved.


Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule
Contract with IBM Corp.

Contents

Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xviii

IBM Redbooks promotions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix

Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiv
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiv
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxv

Summary of changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxvii


December 2015, Fifth Edition. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxvii

Chapter 1. Introduction to storage virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1


1.1 Storage virtualization terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Requirements driving storage virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.1 Benefits of using IBM Spectrum Virtualize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3 Latest changes and enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

Chapter 2. IBM SAN Volume Controller. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11


2.1 Brief history of the SAN Volume Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2 SAN Volume Controller architectural overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.2.1 SAN Volume Controller topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.3 SAN Volume Controller terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.4 SAN Volume Controller components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.4.1 Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.4.2 I/O Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.4.3 System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.4.4 Stretched system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.4.5 MDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.4.6 Quorum disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.4.7 Disk tier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.4.8 Storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.4.9 Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.4.10 Easy Tier performance function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.4.11 Evaluation mode for Easy Tier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.4.12 Hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.4.13 Maximum supported configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.5 Volume overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.5.1 Image mode volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.5.2 Managed mode volumes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.5.3 Cache mode and cache-disabled volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.5.4 Mirrored volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.5.5 Thin-provisioned volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.5.6 Volume I/O governing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.6 HyperSwap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.7 Distributed RAID . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38


2.7.1 Non-distributed array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39


2.7.2 Distributed array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.7.3 Example of a distributed array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2.8 Encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.8.1 General encryption concepts and terms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.8.2 Accessing an encrypted system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.8.3 Accessing key information on USB flash drives . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.8.4 Encryption technology. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.9 iSCSI overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.9.1 Use of IP addresses and Ethernet ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.9.2 iSCSI volume discovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
2.9.3 iSCSI authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
2.9.4 iSCSI multipathing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
2.10 Advanced Copy Services overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
2.10.1 Synchronous and asynchronous remote copy . . . . . . . . . . . . . . . . . . . . . . . . . . 50
2.10.2 FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
2.10.3 Image mode migration and volume mirroring migration . . . . . . . . . . . . . . . . . . . 53
2.11 SAN Volume Controller clustered system overview . . . . . . . . . . . . . . . . . . . . . . . . . . 54
2.11.1 Quorum disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
2.11.2 Stretched cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
2.11.3 Cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
2.11.4 Clustered system management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
2.12 User authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
2.12.1 Remote authentication through LDAP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
2.12.2 SAN Volume Controller modified login message . . . . . . . . . . . . . . . . . . . . . . . . 68
2.12.3 SAN Volume Controller user names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
2.12.4 SAN Volume Controller superuser . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
2.12.5 SAN Volume Controller Service Assistant Tool . . . . . . . . . . . . . . . . . . . . . . . . . 69
2.12.6 SAN Volume Controller roles and user groups . . . . . . . . . . . . . . . . . . . . . . . . . . 69
2.12.7 SAN Volume Controller local authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
2.12.8 SAN Volume Controller remote authentication and single sign-on . . . . . . . . . . . 71
2.13 SAN Volume Controller hardware overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
2.13.1 Fibre Channel interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
2.13.2 LAN interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
2.13.3 FCoE interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
2.14 Flash drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
2.14.1 Storage bottleneck problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
2.14.2 Flash Drive solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
2.14.3 Flash Drive market . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
2.14.4 Flash Drives and SAN Volume Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
2.15 What is new with the SAN Volume Controller 7.6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
2.15.1 Withdrawal of the SAN Volume Controller 2145-8xx . . . . . . . . . . . . . . . . . . . . . 79
2.15.2 SAN Volume Controller 7.6 supported hardware list, device driver, and firmware
levels. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
2.15.3 SAN Volume Controller 7.6 new features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
2.16 Useful SAN Volume Controller web links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81

Chapter 3. Planning and configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83


3.1 General planning rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
3.2 Physical planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
3.2.1 Preparing your uninterruptible power supply unit environment . . . . . . . . . . . . . . . 86
3.2.2 Physical rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
3.2.3 Cable connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90


3.3 Logical planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90


3.3.1 Management IP addressing plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
3.3.2 SAN zoning and SAN connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
3.3.3 iSCSI IP addressing plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
3.3.4 IP Mirroring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
3.3.5 Back-end storage subsystem configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
3.3.6 SAN Volume Controller clustered system configuration . . . . . . . . . . . . . . . . . . . 110
3.3.7 Stretched cluster configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
3.3.8 Storage pool configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
3.3.9 Volume configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
3.3.10 Host mapping (LUN masking) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
3.3.11 Advanced Copy Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
3.3.12 SAN boot support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
3.3.13 Data migration from a non-virtualized storage subsystem . . . . . . . . . . . . . . . . 127
3.3.14 SVC configuration backup procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
3.4 Performance considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
3.4.1 SAN. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
3.4.2 Disk subsystems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
3.4.3 SAN Volume Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
3.4.4 Real-Time Compression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
3.4.5 Performance monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131

Chapter 4. Initial configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133


4.1 Managing the cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
4.1.1 Network requirements for SAN Volume Controller . . . . . . . . . . . . . . . . . . . . . . . 135
4.2 Setting up the SAN Volume Controller cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
4.2.1 Initiating Cluster on 2145-DH8 SVC models . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
4.2.2 SVC 2145-CF8 and 2145-CG8 service panels . . . . . . . . . . . . . . . . . . . . . . . . . . 152
4.2.3 Initiating the cluster from the front panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
4.3 Configuring the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
4.3.1 Completing the Create Cluster wizard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
4.3.2 Post-requisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
4.4 Secure Shell overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
4.4.1 Generating public and private SSH key pairs by using PuTTY. . . . . . . . . . . . . . 158
4.4.2 Uploading the SSH public key to the SAN Volume Controller cluster. . . . . . . . . 161
4.4.3 Configuring the PuTTY session for the CLI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
4.4.4 Starting the PuTTY CLI session . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
4.4.5 Configuring SSH for IBM AIX clients. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
4.5 Using IPv6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
4.5.1 Migrating a cluster from IPv4 to IPv6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
4.5.2 Migrating a cluster from IPv6 to IPv4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172

Chapter 5. Host configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173


5.1 Host attachment overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
5.2 IBM SAN Volume Controller setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
5.2.1 Fibre Channel and SAN setup overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
5.3 iSCSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
5.3.1 Initiators and targets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
5.3.2 iSCSI nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
5.3.3 iSCSI qualified name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
5.3.4 iSCSI setup of the SAN Volume Controller and host server . . . . . . . . . . . . . . . . 183
5.3.5 Volume discovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
5.3.6 Authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184


5.3.7 Target failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184


5.3.8 Host failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
5.4 Microsoft Windows information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
5.4.1 Configuring Windows Server 2008 and 2012 hosts . . . . . . . . . . . . . . . . . . . . . . 186
5.4.2 Configuring Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
5.4.3 Hardware lists, device driver, HBAs, and firmware levels. . . . . . . . . . . . . . . . . . 187
5.4.4 Installing and configuring the host adapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
5.4.5 Changing the disk timeout on Windows Server . . . . . . . . . . . . . . . . . . . . . . . . . 187
5.4.6 Installing the SDDDSM multipath driver on Windows . . . . . . . . . . . . . . . . . . . . . 188
5.4.7 Attaching SVC volumes to Microsoft Windows Server 2008 R2 and to Windows
Server 2012 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
5.4.8 Extending a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
5.4.9 Removing a disk on Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
5.5 Using SAN Volume Controller CLI from a Windows host . . . . . . . . . . . . . . . . . . . . . . 205
5.6 Microsoft Volume Shadow Copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
5.6.1 Installation overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
5.6.2 System requirements for the IBM System Storage hardware provider . . . . . . . . 207
5.6.3 Installing the IBM System Storage hardware provider . . . . . . . . . . . . . . . . . . . . 207
5.6.4 Verifying the installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
5.6.5 Creating free and reserved pools of volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
5.6.6 Changing the configuration parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
5.7 Specific Linux (on x86/x86_64) information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
5.7.1 Configuring the Linux host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
5.7.2 Configuration information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
5.7.3 Disabling automatic Linux system updates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
5.7.4 Setting queue depth with QLogic HBAs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
5.7.5 Multipathing in Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
5.8 VMware configuration information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
5.8.1 Configuring VMware hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
5.8.2 Operating system versions and maintenance levels. . . . . . . . . . . . . . . . . . . . . . 223
5.8.3 HBAs for hosts that are running VMware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
5.8.4 VMware storage and zoning guidance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
5.8.5 Setting the HBA timeout for failover in VMware . . . . . . . . . . . . . . . . . . . . . . . . . 224
5.8.6 Multipathing in ESX. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
5.8.7 Attaching VMware to volumes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
5.8.8 Volume naming in VMware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
5.8.9 Setting the Microsoft guest operating system timeout . . . . . . . . . . . . . . . . . . . . 229
5.8.10 Extending a VMFS volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
5.8.11 Removing a datastore from an ESX host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
5.9 Using the SDDDSM, SDDPCM, and SDD web interface . . . . . . . . . . . . . . . . . . . . . . 233
5.10 More information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
5.10.1 SAN Volume Controller storage subsystem attachment guidelines . . . . . . . . . 234

Chapter 6. Data migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237


6.1 Migration overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
6.2 Migration operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
6.2.1 Migrating multiple extents within a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . 238
6.2.2 Migrating extents off an MDisk that is being deleted. . . . . . . . . . . . . . . . . . . . . . 239
6.2.3 Migrating a volume between storage pools. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
6.2.4 Migrating the volume to image mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
6.2.5 Non-disruptive Volume Move . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
6.2.6 Monitoring the migration progress. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
6.3 Functional overview of MDisk migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249


6.3.1 Parallelism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249


6.3.2 Error handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
6.3.3 Migration algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
6.4 Migrating data from image mode volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
6.4.1 Image mode volume migration concept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
6.4.2 Migration tips. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
6.5 Non-disruptive Volume Move . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
6.6 Data migration for Windows using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
6.6.1 Windows Server 2008 host connected directly to DS3400 . . . . . . . . . . . . . . . . . 258
6.6.2 Adding SVC between the host system and DS3400. . . . . . . . . . . . . . . . . . . . . . 261
6.6.3 Importing the migrated disks into a Windows Server 2008 host . . . . . . . . . . . . . 271
6.6.4 Adding SVC between host and DS3400 using CLI . . . . . . . . . . . . . . . . . . . . . . . 273
6.6.5 Migrating volume from managed mode to image mode . . . . . . . . . . . . . . . . . . . 275
6.6.6 Migrating volume from image mode to image mode . . . . . . . . . . . . . . . . . . . . . . 279
6.6.7 Removing image mode data from SVC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
6.6.8 Mapping free disks on Windows Server 2008. . . . . . . . . . . . . . . . . . . . . . . . . . . 286
6.7 Migrating Linux SAN disks to SVC disks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
6.7.1 Preparing SVC to virtualize Linux disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
6.7.2 Moving LUNs to SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
6.7.3 Migrating image mode volumes to managed MDisks . . . . . . . . . . . . . . . . . . . . . 296
6.7.4 Preparing to migrate from SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
6.7.5 Migrating volumes to image mode volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
6.7.6 Removing LUNs from SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
6.8 Migrating ESX SAN disks to SVC disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
6.8.1 Preparing SVC to virtualize ESX disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
6.8.2 Moving LUNs to SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
6.8.3 Migrating image mode volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
6.8.4 Preparing to migrate SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
6.8.5 Migrating the managed volumes to image mode volumes . . . . . . . . . . . . . . . . . 317
6.8.6 Removing LUNs from SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
6.9 Migrating AIX SAN disks to SVC volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
6.9.1 Preparing SVC to virtualize AIX disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
6.9.2 Moving LUNs to SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
6.9.3 Migrating image mode volumes to volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
6.9.4 Preparing to migrate from SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
6.9.5 Migrating the managed volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
6.9.6 Removing LUNs from SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
6.10 Using SVC for storage migration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
6.11 Migrating volumes between pools using CLI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
6.11.1 Migrating data using migratevdisk. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
6.11.2 Migrating data using volume mirroring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
6.11.3 Migrating extents using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
6.12 Migrating volumes to an encrypted pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
6.12.1 Using Add Volume Copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
6.12.2 Using the GUI’s Migrate to Another Pool (migratevdisk) . . . . . . . . . 346
6.13 Using SVC for storage migration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
6.14 Using volume mirroring and thin-provisioned volumes together . . . . . . . . . . . . . . . . 350
6.14.1 Zero detect feature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
6.14.2 Volume mirroring with thin-provisioned volumes. . . . . . . . . . . . . . . . . . . . . . . . 352

Chapter 7. Volume creation and provisioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359


7.1 An Introduction to Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
7.2 Create Volumes menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363


7.3 Creating volumes using the Quick Volume Creation. . . . . . . . . . . . . . . . . . . . . . . . . . 366


7.3.1 Creating Basic volumes using Quick Volume Creation. . . . . . . . . . . . . . . . . . . . 366
7.3.2 Creating Mirrored volumes using Quick Volume Creation . . . . . . . . . . . . . . . . . 369
7.4 Mapping a volume to the host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
7.5 Creating Custom volumes using Advanced . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
7.5.1 Creating a Custom Thin-provisioned volume . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
7.5.2 Creating Custom Compressed volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378
7.5.3 Custom Mirrored Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
7.6 Stretched Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
7.7 HyperSwap and the mkvolume command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
7.7.1 Volume manipulation commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
7.8 Mapping Volumes to Host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
7.8.1 Mapping newly created volumes to the host using the wizard . . . . . . . . . . . . . . 394
7.9 Discovering volumes on hosts and multipathing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
7.9.1 Windows 2008 Fibre Channel volume attachment . . . . . . . . . . . . . . . . . . . . . . . 397
7.9.2 Windows 2008 iSCSI volume attachment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
7.9.3 VMware ESX Fibre Channel attachment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
7.9.4 VMware ESX iSCSI attachment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415

Chapter 8. Advanced features for storage efficiency . . . . . . . . . . . . . . . . . . . . . . . . . 425


8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
8.2 Easy Tier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
8.2.1 Easy Tier concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
8.2.2 SSD arrays and flash MDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
8.2.3 Disk tiers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
8.2.4 Easy Tier process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
8.2.5 Easy Tier operating modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
8.2.6 Implementation considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
8.2.7 Modifying the Easy Tier setting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
8.2.8 Monitoring tools. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451
8.2.9 More information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454
8.3 Thin provisioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454
8.3.1 Configuring a thin-provisioned volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
8.3.2 Performance considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458
8.3.3 Limitations of virtual capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459
8.4 Real-time Compression Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459
8.4.1 Common use cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460
8.4.2 Real-time Compression concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461
8.4.3 Random Access Compression Engine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462
8.4.4 Dual RACE component . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
8.4.5 Random Access Compression Engine in the IBM Spectrum Virtualize software stack 467
8.4.6 Data write flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
8.4.7 Data read flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
8.4.8 Compression of existing data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 469
8.4.9 Creating compressed volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 470
8.4.10 Comprestimator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 473

Chapter 9. Advanced Copy Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475


9.1 FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
9.1.1 Business requirements for FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
9.1.2 Backup improvements with FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
9.1.3 Restore with FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477


9.1.4 Moving and migrating data with FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477


9.1.5 Application testing with FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 478
9.1.6 Host and application considerations to ensure FlashCopy integrity . . . . . . . . . . 478
9.1.7 FlashCopy attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 478
9.2 Reverse FlashCopy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 479
9.2.1 FlashCopy and Tivoli Storage FlashCopy Manager . . . . . . . . . . . . . . . . . . . . . . 480
9.3 FlashCopy functional overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
9.4 Implementing FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484
9.4.1 FlashCopy mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484
9.4.2 Multiple Target FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
9.4.3 Consistency Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486
9.4.4 FlashCopy indirection layer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 488
9.4.5 Grains and the FlashCopy bitmap. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489
9.4.6 Interaction and dependency between multiple target FlashCopy mappings. . . . 490
9.4.7 Summary of the FlashCopy indirection layer algorithm. . . . . . . . . . . . . . . . . . . . 491
9.4.8 Interaction with the cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
9.4.9 FlashCopy and image mode volumes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
9.4.10 FlashCopy mapping events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495
9.4.11 FlashCopy mapping states . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498
9.4.12 Thin provisioned FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 500
9.4.13 Background copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 500
9.4.14 Serialization of I/O by FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501
9.4.15 Event handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 502
9.4.16 Asynchronous notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 503
9.4.17 Interoperation with Metro Mirror and Global Mirror . . . . . . . . . . . . . . . . . . . . . . 503
9.4.18 FlashCopy presets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 504
9.5 Volume mirroring and migration options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 505
9.6 Native IP replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507
9.6.1 Native IP replication technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507
9.6.2 IBM SVC and Storwize System Layers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 508
9.6.3 IP partnership limitations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 509
9.6.4 VLAN support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
9.6.5 IP partnership and SVC terminology. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
9.6.6 States of IP partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
9.6.7 Remote copy groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 514
9.6.8 Supported configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 514
9.6.9 Setting up the SVC system IP partnership by using the GUI . . . . . . . . . . . . . . . 527
9.7 Remote Copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527
9.7.1 Multiple SVC system mirroring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527
9.7.2 Importance of write ordering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530
9.7.3 Remote copy intercluster communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532
9.7.4 Metro Mirror overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 533
9.7.5 Synchronous remote copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 534
9.7.6 Metro Mirror features. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
9.7.7 Metro Mirror attributes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
9.7.8 Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
9.7.9 Asynchronous remote copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537
9.7.10 IBM SVC Global Mirror features. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
9.7.11 Using Change Volumes with Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . 540
9.7.12 Distribution of work among nodes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543
9.7.13 Background copy performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543
9.7.14 Thin-provisioned background copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 544
9.7.15 Methods of synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 544


9.7.16 Practical use of Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 545


9.7.17 Practical use of Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 546
9.7.18 Valid combinations of FlashCopy, Metro Mirror, and Global Mirror . . . . . . . . . 546
9.7.19 Remote Copy configuration limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 546
9.7.20 Remote Copy states and events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 546
9.8 Remote Copy commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 554
9.8.1 Remote Copy process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 554
9.8.2 Listing available IBM SVC system partners . . . . . . . . . . . . . . . . . . . . . . . . . . . . 555
9.8.3 Changing the system parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 555
9.8.4 System partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 557
9.8.5 Creating a Metro Mirror/Global Mirror consistency group . . . . . . . . . . . . . . . . . . 558
9.8.6 Creating a Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . 558
9.8.7 Changing Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . 559
9.8.8 Changing Metro Mirror/Global Mirror consistency group . . . . . . . . . . . . . . . . . . 559
9.8.9 Starting Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . 559
9.8.10 Stopping Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . 559
9.8.11 Starting Metro Mirror/Global Mirror consistency group . . . . . . . . . . . . . . . . . . . 560
9.8.12 Stopping Metro Mirror/Global Mirror consistency group . . . . . . . . . . . . . . . . . . 560
9.8.13 Deleting Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . 560
9.8.14 Deleting Metro Mirror/Global Mirror consistency group. . . . . . . . . . . . . . . . . . . 561
9.8.15 Reversing Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . 561
9.8.16 Reversing Metro Mirror/Global Mirror consistency group . . . . . . . . . . . . . . . . . 561
9.9 Troubleshooting remote copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 561
9.9.1 1920 error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 562
9.9.2 1720 error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563

Chapter 10. Operations using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 565


10.1 Normal operations using the CLI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 566
10.1.1 Command syntax and online help. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 566
10.1.2 Organizing on window content . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 570
10.1.3 UNIX commands available in interactive SSH sessions . . . . . . . . . . . . . . . . . . 572
10.2 New commands and functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 573
10.3 Working with managed disks and disk controller systems . . . . . . . . . . . . . . . . . . . . 577
10.3.1 Viewing disk controller details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 578
10.3.2 Renaming a controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 578
10.3.3 Discovery status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 579
10.3.4 Discovering MDisks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 579
10.3.5 Viewing MDisk information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 581
10.3.6 Renaming an MDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 582
10.3.7 Including an MDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 582
10.3.8 Adding MDisks to a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 584
10.3.9 Showing MDisks in a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 584
10.3.10 Working with a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 584
10.3.11 Creating a storage pool. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 584
10.3.12 Viewing storage pool information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 586
10.3.13 Renaming a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 587
10.3.14 Deleting a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 587
10.3.15 Removing MDisks from a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 588
10.4 Working with hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 588
10.4.1 Creating an FC-attached host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 588
10.4.2 Creating an iSCSI-attached host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 590
10.4.3 Modifying a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 591
10.4.4 Deleting a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 592


10.4.5 Adding ports to a defined host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 592


10.4.6 Deleting ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 593
10.5 Working with the Ethernet port for iSCSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 594
10.6 Working with volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 595
10.6.1 Creating a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 595
10.6.2 Volume information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 597
10.6.3 Creating a thin-provisioned volume. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 598
10.6.4 Creating a volume in image mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 598
10.6.5 Adding a mirrored volume copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 599
10.6.6 Adding a compressed volume copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 603
10.6.7 Splitting a mirrored volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 604
10.6.8 Modifying a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 606
10.6.9 I/O governing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 606
10.6.10 Deleting a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 608
10.6.11 Using volume protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 608
10.6.12 Expanding a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 609
10.6.13 Assigning a volume to a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 610
10.6.14 Showing volumes to host mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 611
10.6.15 Deleting a volume to host mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 612
10.6.16 Migrating a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 612
10.6.17 Migrating a fully managed volume to an image mode volume . . . . . . . . . . . . 613
10.6.18 Shrinking a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 614
10.6.19 Showing a volume on an MDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 615
10.6.20 Showing which volumes are using a storage pool . . . . . . . . . . . . . . . . . . . . . 615
10.6.21 Showing which MDisks are used by a specific volume . . . . . . . . . . . . . . . . . . 616
10.6.22 Showing from which storage pool a volume has its extents . . . . . . . . . . . . . . 616
10.6.23 Showing the host to which the volume is mapped . . . . . . . . . . . . . . . . . . . . . 617
10.6.24 Showing the volume to which the host is mapped . . . . . . . . . . . . . . . . . . . . . 617
10.6.25 Tracing a volume from a host back to its physical disk . . . . . . . . . . . . . . . . . . 618
10.7 Scripting under the CLI for SAN Volume Controller task automation . . . . . . . . . . . . 620
10.7.1 Scripting structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 620
10.8 Managing the clustered system by using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . 623
10.8.1 Viewing clustered system properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 623
10.8.2 Changing system settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 625
10.8.3 iSCSI configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 626
10.8.4 Modifying IP addresses. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 627
10.8.5 Supported IP address formats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 627
10.8.6 Using the ping command to diagnose IP configuration problems . . . . . . . . . . . 628
10.8.7 Setting the clustered system time zone and time . . . . . . . . . . . . . . . . . . . . . . . 628
10.8.8 Starting statistics collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 630
10.8.9 Determining the status of a copy operation. . . . . . . . . . . . . . . . . . . . . . . . . . . . 630
10.8.10 Shutting down a clustered system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 630
10.9 Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 631
10.9.1 Viewing node details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 632
10.9.2 Adding a node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 633
10.9.3 Renaming a node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 634
10.9.4 Deleting a node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 634
10.9.5 Shutting down a node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 635
10.10 I/O Groups. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 636
10.10.1 Viewing I/O Group details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 636
10.10.2 Renaming an I/O Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 636
10.10.3 Adding and removing hostiogrp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 637
10.10.4 Listing I/O Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 637


10.11 Managing authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 638


10.11.1 Managing users by using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 638
10.11.2 Managing user roles and groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640
10.11.3 Changing a user . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 641
10.11.4 Audit log command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 641
10.12 Managing Copy Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 642
10.12.1 FlashCopy operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 642
10.12.2 Setting up FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 643
10.12.3 Creating a FlashCopy Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . 644
10.12.4 Creating a FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 644
10.12.5 Preparing (pre-triggering) the FlashCopy mapping. . . . . . . . . . . . . . . . . . . . . 646
10.12.6 Preparing (pre-triggering) the FlashCopy Consistency Group . . . . . . . . . . . . 647
10.12.7 Starting (triggering) FlashCopy mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . 647
10.12.8 Starting (triggering) the FlashCopy Consistency Group . . . . . . . . . . . . . . . . . 649
10.12.9 Monitoring the FlashCopy progress . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 649
10.12.10 Stopping the FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 650
10.12.11 Stopping the FlashCopy Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . 651
10.12.12 Deleting the FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651
10.12.13 Deleting the FlashCopy Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . 652
10.12.14 Migrating a volume to a thin-provisioned volume . . . . . . . . . . . . . . . . . . . . . 652
10.12.15 Reverse FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 657
10.12.16 Split-stopping of FlashCopy maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 658
10.13 Metro Mirror operation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 659
10.13.1 Setting up Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 660
10.13.2 Creating a SAN Volume Controller partnership between ITSO_SVC_DH8 and
ITSO_SVC4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 661
10.13.3 Creating a Metro Mirror Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . 664
10.13.4 Creating the Metro Mirror relationships. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 664
10.13.5 Creating a stand-alone Metro Mirror relationship for MM_App_Pri. . . . . . . . . 665
10.13.6 Starting Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 666
10.13.7 Starting a Metro Mirror Consistency Group. . . . . . . . . . . . . . . . . . . . . . . . . . . 667
10.13.8 Monitoring the background copy progress . . . . . . . . . . . . . . . . . . . . . . . . . . . 668
10.13.9 Stopping and restarting Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 669
10.13.10 Stopping a stand-alone Metro Mirror relationship . . . . . . . . . . . . . . . . . . . . . 669
10.13.11 Stopping a Metro Mirror Consistency Group. . . . . . . . . . . . . . . . . . . . . . . . . 670
10.13.12 Restarting a Metro Mirror relationship in the Idling state. . . . . . . . . . . . . . . . 671
10.13.13 Restarting a Metro Mirror Consistency Group in the Idling state . . . . . . . . . 672
10.13.14 Changing the copy direction for Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . 672
10.13.15 Switching the copy direction for a Metro Mirror relationship . . . . . . . . . . . . . 672
10.13.16 Switching the copy direction for a Metro Mirror Consistency Group . . . . . . . 674
10.13.17 Creating a SAN Volume Controller partnership among clustered systems. . 675
10.13.18 Star configuration partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 676
10.14 Global Mirror operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 680
10.14.1 Setting up Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 681
10.14.2 Creating a SAN Volume Controller partnership between ITSO_SVC_DH8 and
ITSO_SVC4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 682
10.14.3 Changing link tolerance and system delay simulation . . . . . . . . . . . . . . . . . . 683
10.14.4 Creating a Global Mirror Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . 685
10.14.5 Creating Global Mirror relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 685
10.14.6 Creating the stand-alone Global Mirror relationship for GM_App_Pri. . . . . . . 686
10.14.7 Starting Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 687
10.14.8 Starting a stand-alone Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . 687
10.14.9 Starting a Global Mirror Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . 688


10.14.10 Monitoring the background copy progress . . . . . . . . . . . . . . . . . . . . . . . . . . 688


10.14.11 Stopping and restarting Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 690
10.14.12 Stopping a stand-alone Global Mirror relationship . . . . . . . . . . . . . . . . . . . . 690
10.14.13 Stopping a Global Mirror Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . 691
10.14.14 Restarting a Global Mirror relationship in the Idling state . . . . . . . . . . . . . . . 692
10.14.15 Restarting a Global Mirror Consistency Group in the Idling state . . . . . . . . . 692
10.14.16 Changing the direction for Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . 693
10.14.17 Switching the copy direction for a Global Mirror relationship . . . . . . . . . . . . 693
10.14.18 Switching the copy direction for a Global Mirror Consistency Group . . . . . . 694
10.14.19 Changing a Global Mirror relationship to the cycling mode. . . . . . . . . . . . . . 695
10.14.20 Creating the thin-provisioned Change Volumes . . . . . . . . . . . . . . . . . . . . . . 697
10.14.21 Stopping the stand-alone remote copy relationship . . . . . . . . . . . . . . . . . . . 698
10.14.22 Setting the cycling mode on the stand-alone remote copy relationship . . . . 698
10.14.23 Setting the Change Volume on the master volume. . . . . . . . . . . . . . . . . . . . 698
10.14.24 Setting the Change Volume on the auxiliary volume . . . . . . . . . . . . . . . . . . 699
10.14.25 Starting the stand-alone relationship in the cycling mode. . . . . . . . . . . . . . . 700
10.14.26 Stopping the Consistency Group to change the cycling mode . . . . . . . . . . . 701
10.14.27 Setting the cycling mode on the Consistency Group . . . . . . . . . . . . . . . . . . 701
10.14.28 Setting the Change Volume on the master volume relationships of the
Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 702
10.14.29 Setting the Change Volumes on the auxiliary volumes. . . . . . . . . . . . . . . . . 703
10.14.30 Starting the Consistency Group CG_W2K3_GM in the cycling mode . . . . . 704
10.15 Service and maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 705
10.15.1 Upgrading software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 705
10.15.2 Running the maintenance procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 711
10.16 SAN troubleshooting and data collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 712
10.17 Recover system procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 712

Chapter 11. Operations using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 715


11.1 Normal SVC operations using GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 716
11.1.1 Introduction to the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 716
11.1.2 Content view organization. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 723
11.1.3 Help. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 726
11.2 Monitoring menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 727
11.2.1 System overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 727
11.2.2 System details. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 729
11.2.3 Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 731
11.2.4 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 732
11.3 Working with external disk controllers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 733
11.3.1 Viewing the disk controller details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 733
11.3.2 Renaming a disk controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 734
11.3.3 Discovering Storage from the external panel . . . . . . . . . . . . . . . . . . . . . . . . . . 735
11.4 Working with storage pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 735
11.4.1 Viewing storage pool information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 735
11.4.2 Creating storage pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 736
11.4.3 Renaming a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 738
11.4.4 Deleting a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 739
11.5 Working with managed disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 740
11.5.1 MDisk information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 740
11.5.2 Renaming an MDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 742
11.5.3 Discovering MDisks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 742
11.5.4 Assigning MDisks to storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 743
11.5.5 Unassigning MDisks from a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 745


11.6 Migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 746


11.7 Working with hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 747
11.7.1 Host information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 748
11.7.2 Adding a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 750
11.7.3 Renaming a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 754
11.7.4 Deleting a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 754
11.7.5 Creating or modifying volume mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 755
11.7.6 Deleting a host mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 758
11.7.7 Deleting all host mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 758
11.8 Working with volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 759
11.8.1 Volume information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 760
11.8.2 Creating a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 762
11.8.3 Renaming a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 768
11.8.4 Deleting a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 769
11.8.5 Deleting a host mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 770
11.8.6 Deleting all host mappings for a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 772
11.8.7 Shrinking a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 773
11.8.8 Expanding a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 775
11.8.9 Migrating a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 776
11.8.10 Adding mirrored copy of existing volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . 778
11.8.11 Deleting volume copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 780
11.8.12 Splitting a volume copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 781
11.8.13 Validating volume copies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 782
11.8.14 Creating a volume in image mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 784
11.8.15 Migrating a volume to an image mode volume . . . . . . . . . . . . . . . . . . . . . . . . 784
11.8.16 Creating an image mode mirrored volume . . . . . . . . . . . . . . . . . . . . . . . . . . . 784
11.9 Copy Services and managing FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 784
11.9.1 Creating a FlashCopy mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 785
11.9.2 Single-click snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 793
11.9.3 Single-click clone . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 794
11.9.4 Single-click backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 795
11.9.5 Creating a FlashCopy Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 796
11.9.6 Creating FlashCopy mappings in a Consistency Group . . . . . . . . . . . . . . . . . . 797
11.9.7 Showing related volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 802
11.9.8 Moving a FlashCopy mapping to a Consistency Group . . . . . . . . . . . . . . . . . . 803
11.9.9 Removing a FlashCopy mapping from a Consistency Group . . . . . . . . . . . . . . 804
11.9.10 Modifying a FlashCopy mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 804
11.9.11 Renaming FlashCopy mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 805
11.9.12 Renaming Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 806
11.9.13 Deleting FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 807
11.9.14 Deleting FlashCopy Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 808
11.9.15 Starting FlashCopy process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 809
11.9.16 Stopping FlashCopy process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 810
11.10 Copy Services: Managing Remote copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 811
11.10.1 System partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 812
11.10.2 Creating Fibre Channel partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 813
11.10.3 Creating IP-based partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 815
11.10.4 Creating stand-alone remote copy relationships. . . . . . . . . . . . . . . . . . . . . . . 817
11.10.5 Creating Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 822
11.10.6 Renaming Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 827
11.10.7 Renaming remote copy relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 828
11.10.8 Moving stand-alone remote copy relationship to Consistency Group . . . . . . . 829
11.10.9 Removing remote copy relationship from Consistency Group . . . . . . . . . . . . 830


11.10.10 Starting remote copy relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 831


11.10.11 Starting remote copy Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . 832
11.10.12 Switching copy direction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 833
11.10.13 Switching the copy direction for a Consistency Group . . . . . . . . . . . . . . . . . 835
11.10.14 Stopping remote copy relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 836
11.10.15 Stopping Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 838
11.10.16 Deleting stand-alone remote copy relationships . . . . . . . . . . . . . . . . . . . . . . 839
11.10.17 Deleting Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 840
11.11 Managing SVC clustered system from GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 841
11.11.1 System status information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 841
11.11.2 View I/O Groups and their associated nodes . . . . . . . . . . . . . . . . . . . . . . . . . 843
11.11.3 View SVC clustered system properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 845
11.11.4 Renaming SVC clustered system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 845
11.11.5 Renaming site information of the nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 847
11.11.6 Rename a node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 849
11.11.7 Shutting down SVC clustered system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 850
11.12 Upgrading software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 852
11.12.1 Updating system software. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 852
11.12.2 Update drive software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 852
11.13 Managing I/O Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 854
11.14 Managing nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 855
11.14.1 Viewing node properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 855
11.14.2 Renaming a node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 857
11.14.3 Adding node to SVC system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 857
11.14.4 Removing node from SVC clustered system . . . . . . . . . . . . . . . . . . . . . . . . . 860
11.15 Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 862
11.15.1 Events panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 862
11.15.2 Event log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 864
11.15.3 Support panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 869
11.16 User management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 873
11.16.1 Creating a user . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 875
11.16.2 Modifying the user properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 876
11.16.3 Removing a user password. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 878
11.16.4 Removing a user SSH public key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 879
11.16.5 Deleting a user . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 879
11.16.6 Creating a user group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 880
11.16.7 Modifying the user group properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 881
11.16.8 Deleting a user group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 882
11.16.9 Audit log information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 884
11.17 Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 886
11.17.1 Configuring the network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 886
11.17.2 iSCSI configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 886
11.17.3 Fibre Channel information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 887
11.17.4 Event notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 888
11.17.5 Email notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 888
11.17.6 SNMP notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 891
11.17.7 System options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 893
11.17.8 Upgrading IBM Spectrum Virtualize software . . . . . . . . . . . . . . . . . . . . . . . . . 896
11.18 VMware Virtual Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 902
11.18.1 Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 903
11.18.2 Setting GUI preferences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 904

Appendix A. Performance data and statistics gathering. . . . . . . . . . . . . . . . . . . . . . . 907


SAN Volume Controller performance overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 908


Performance considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 908
IBM Spectrum Virtualize performance perspectives . . . . . . . . . . . . . . . . . . . . . . . . . . . 909
Performance monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 910
Collecting performance statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 910
Real-time performance monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 912
Performance data collection and Tivoli Storage Productivity Center for Disk . . . . . . . . 918
11.18.3 Port Masking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 919

Appendix B. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 923


Commonly encountered terms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 924

Appendix C. Stretched Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 939


Stretched cluster overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 940
Common terms used in Stretched Cluster Configurations . . . . . . . . . . . . . . . . . . . . . . . . . 942
Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 942
Sites and Failure Domains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 944
Disaster Recovery (DR) feature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 945
Volume Mirroring. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 946
Public and Private SAN. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 946
Stretched cluster deployment planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 947
Enhanced Stretched Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 949
Standard and Enhanced Stretched Cluster comparison . . . . . . . . . . . . . . . . . . . . . . . . . . 951
Enhanced Stretched Cluster comparison with HyperSwap . . . . . . . . . . . . . . . . . . . . . . . . 952
Recommendations and best practices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 954
Non-ISL stretched cluster configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 955
Inter-switch link configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 959
FCIP configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 962
Bandwidth reduction and planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 963
Technical guidelines and Publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 964

Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 965


IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 965
Other resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 966
Referenced websites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 966
Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 967


Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.

Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.

Any performance data contained herein was determined in a controlled environment. Therefore, the results
obtained in other operating environments may vary significantly. Some measurements may have been made
on development-level systems and there is no guarantee that these measurements will be the same on
generally available systems. Furthermore, some measurements may have been estimated through
extrapolation. Actual results may vary. Users of this document should verify the applicable data for their
specific environment.

Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.

COPYRIGHT LICENSE:

This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.


Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines
Corporation in the United States, other countries, or both. These and other IBM trademarked terms are
marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US
registered or common law trademarks owned by IBM at the time this information was published. Such
trademarks may also be registered or common law trademarks in other countries. A current list of IBM
trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml

The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
AIX®, AIX 5L™, DB2®, developerWorks®, DS4000®, DS5000™, DS8000®, Easy Tier®, FlashCopy®, FlashSystem™, GPFS™, HACMP™, HyperSwap®, IBM®, IBM FlashSystem®, IBM Spectrum™, IBM Spectrum Accelerate™, IBM Spectrum Control™, IBM Spectrum Protect™, IBM Spectrum Scale™, IBM Spectrum Virtualize™, POWER®, pureScale®, Real-time Compression™, Redbooks®, Redbooks (logo)®, Storwize®, System Storage®, Tivoli®, WebSphere®, XIV®

The following terms are trademarks of other companies:

Intel, Intel Xeon, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks
of Intel Corporation or its subsidiaries in the United States and other countries.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.

Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its
affiliates.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Other company, product, or service names may be trademarks or service marks of others.


Preface

This IBM® Redbooks® publication is a detailed technical guide to the IBM System Storage®
SAN Volume Controller, powered by IBM Spectrum™ Virtualize Version 7.6.

The SAN Volume Controller (SVC) is a virtualization appliance solution, which maps
virtualized volumes that are visible to hosts and applications to physical volumes on storage
devices. Each server within the storage area network (SAN) has its own set of virtual storage
addresses that are mapped to physical addresses. If the physical addresses change, the
server continues running by using the same virtual addresses that it had before. Therefore,
volumes or storage can be added or moved while the server is still running.

The IBM virtualization technology improves the management of information at the “block”
level in a network, which enables applications and servers to share storage devices on a
network.

Authors
This book was produced by a team of specialists from around the world working at the
International Technical Support Organization, San Jose Center.

Jon Tate is a Project Manager for IBM System Storage SAN


Solutions at the International Technical Support Organization
(ITSO), San Jose Center. Before joining the ITSO in 1999, he
worked in the IBM Technical Support Center, providing Level
2/3 support for IBM mainframe storage products. Jon has 30
years of experience in storage software and management,
services, and support, and he is an IBM Certified IT Specialist,
an IBM SAN Certified Specialist, and Project Management
Professional (PMP) certified. He is also the UK Chairman of
the Storage Networking Industry Association.

Maximilian Hart works for IBM Germany as a Product Field


Engineer in the EMEA Level 2 IBM Spectrum Virtualize™
support team covering IBM SAN Volume Controller and
Storwize® V7000 products since 2011. His main
responsibilities are to define and coordinate/communicate
action plans to resolve technical problems in customers'
business critical situations, to identify and resolve product
deficiencies with development and to provide regular technical
skill transfer and storage expertise towards field resources. He
joined IBM in 2005 for an Integrated Degree Program and
worked with the IBM SAN Volume Controller during his diploma
thesis in 2008 on earlier code levels.


Hartmut Lonzer is an OEM Alliance Manager for IBM Storage.


Before that he was a Client Technical Specialist for IBM
Germany. He works in the IBM Germany headquarters in
Ehningen. His main focus is on IBM SAN Volume Controller
and the IBM Storwize Family. His experience with the IBM SAN
Volume Controller and Storwize products goes back to the
beginning of these products. Hartmut has been with IBM in
various technical roles for 38 years.

Tarik Jose Maluf is a Storage Consultant for IBM Systems


Hardware in Brazil. He has more than 15 years of experience in
the IT industry including Server and Storage management,
support and implementation. Before working as a Storage
Consultant, he worked as a Storage administrator for IBM
Global Services providing support for IBM storage products. He
has been working as a Storage administrator and Consultant
since 2010 and his areas of expertise include enterprise disk
(IBM DS8000® family, XIV®), midrange disk (IBM Storwize
V3700, V5000, and V7000), and virtualization (IBM SAN Volume
Controller). He is an IBM Certified IT Specialist and a member
of the Technology Leadership Council Brazil (TLC-BR).

Libor Miklas is an IT Architect working for IBM Czech


Republic. He has twelve years of extensive experience in the IT industry. During the last ten years, his
main focus has been on the data protection solutions and on
storage management and provisioning. Libor and his team
design, implement, and support midrange and enterprise
storage environments for various global and local clients,
worldwide. He is an IBM Certified Deployment Professional for
the IBM Spectrum Protect™ family of products and holds a
Master’s degree in Electrical Engineering and
Telecommunications.

Jon Parkes works for IBM Hursley.


Anthony Saine provides Level 2 support for IBM Storage products (SVC/Storwize) at RTP (Research Triangle Park), Raleigh, North Carolina. Anthony has 8 years of support experience for storage-related products, having worked previously for EMC/Data Domain. Anthony joined IBM in 2011, has a degree in computer programming, and is A+ and Network+ certified.

Lev Sturmer has been a Level 3 Support Engineer in the Real-time Compression™ division of the IBM Storage and Technology Group for the past four years, and works for IBM Israel. Before joining IBM, Lev worked as a Security and Networking Engineer, where he gained 10 years of experience across various environments.

Marcin Tabinowski works as an IT Specialist in IBM Systems


& Technology Group Lab Services in Poland. He has over nine
years of experience in designing and implementing IT solutions
based on storage and IBM POWER® systems. His main
responsibilities are architecting, consulting, implementing, and
documenting projects including storage systems, SAN
networks, POWER systems, disaster recovery, virtualization,
and data migration. Pre-sales, post-sales, and training are also
part of his everyday duties. Marcin holds many certifications
that span different IBM storage products and POWER systems.
He also holds an MSc in Computer Science from Wroclaw
University of Technology, Poland.

Thanks to the authors of the previous version:

Frank Enders
Torben Jensen
Hartmut Lonzer
Libor Miklas
Marcin Tabinowski

Thanks to the following people for their contributions to this project:

Ian Boden
Paul Cashman
John Fairhurst
Carlos Fuente
Katja Gebuhr
Paul Merrison
Nicholas Sunderland
Dominic Tomkins


Barry Whyte
Stephen Wright
IBM Hursley, UK

Nick Clayton
IBM Systems & Technology Group, UK

Navin Manohar
IBM Systems, TX, US

Special thanks to the Brocade Communications Systems staff in San Jose, California for their
unparalleled support of this residency in terms of equipment and support in many areas:

Jim Baldyga
Silviano Gaona
Sangam Racherla
Brian Steffler
Marcus Thordal
Brocade Communications Systems

Now you can become a published author, too!


Here’s an opportunity to spotlight your skills, grow your career, and become a published
author—all at the same time! Join an ITSO residency project and help write a book in your
area of expertise, while honing your experience using leading-edge technologies. Your efforts
will help to increase product acceptance and customer satisfaction, as you expand your
network of technical contacts and relationships. Residencies run from two to six weeks in
length, and you can participate either in person or as a remote resident working from your
home base.

Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html

Comments welcome
Your comments are important to us!

We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
• Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
• Send your comments in an email to:
[email protected]
• Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400


Stay connected to IBM Redbooks


• Find us on Facebook:
http://www.facebook.com/IBMRedbooks
• Follow us on Twitter:
http://twitter.com/ibmredbooks
• Look for us on LinkedIn:
http://www.linkedin.com/groups?home=&gid=2130806
• Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks weekly newsletter:
https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm
• Stay current on recent Redbooks publications with RSS Feeds:
http://www.redbooks.ibm.com/rss.html


Summary of changes

This section describes the technical changes made in this edition of the book and in previous
editions. This edition might also include minor corrections and editorial changes that are not
identified.

Summary of Changes
for SG24-7933-04
for Implementing the IBM System Storage SAN Volume Controller with IBM Spectrum
Virtualize V7.6
as created or updated on February 4, 2016.

December 2015, Fifth Edition


This revision includes the following new and changed information.

New information
• Encryption
• Distributed RAID
• New software and hardware description

Changed information
• IBM Spectrum Virtualize rebranding
• GUI and CLI
• Compression


Chapter 1. Introduction to storage virtualization
In this chapter, we define the concept of storage virtualization. Then, we present an overview
that describes how you can apply virtualization to help address challenging storage
requirements.

This chapter discusses the following topics:


• Storage virtualization terminology
• Requirements driving storage virtualization
• Latest changes and enhancements
• Summary


1.1 Storage virtualization terminology


Although storage virtualization is a term that is used extensively throughout the storage
industry, it can be applied to various technologies and underlying capabilities. In reality, most
storage devices technically can claim to be virtualized in one form or another. Therefore, we
must start by defining the concept of storage virtualization as it is used in this book.

IBM defines storage virtualization in the following manner:


• Storage virtualization is a technology that makes one set of resources resemble another set of resources, preferably with more desirable characteristics.
• It is a logical representation of resources that is not constrained by physical limitations and hides part of the complexity. It also adds or integrates new function with existing services and can be nested or applied to multiple layers of a system.

When we mention storage virtualization, it is important to understand that virtualization can
be implemented at various layers within the I/O stack. We must clearly distinguish between
virtualization at the disk layer (block-based) and virtualization at the file system layer
(file-based).

The focus of this publication is virtualization at the disk layer, which is referred to as
block-level virtualization, or the block aggregation layer. A description of file system
virtualization is beyond the scope of this book.

For more information about file system virtualization, see the following resource:
• IBM Spectrum Scale™ (based on IBM General Parallel File System, GPFS™):
http://www.ibm.com/systems/storage/spectrum/scale

The Storage Networking Industry Association’s (SNIA) block aggregation model provides a
useful overview of the storage domain and its layers, as shown in Figure 1-1 on page 3. It
illustrates three layers of a storage domain: the file, block aggregation, and block subsystem
layers.

The model splits the block aggregation layer into three sublayers. Block aggregation can be
realized within hosts (servers), in the storage network (storage routers and storage
controllers), or in storage devices (intelligent disk arrays).

The IBM implementation of a block aggregation solution is IBM Spectrum Virtualize software,
running on IBM SAN Volume Controller (SVC) and IBM Storwize family. The SVC is
implemented as a clustered appliance in the storage network layer. For more information
about the reasons why IBM chose to develop its SVC with IBM Spectrum Virtualize in the
storage network layer, see Chapter 2, “IBM SAN Volume Controller” on page 11.


Figure 1-1 SNIA block aggregation model (Source: Storage Networking Industry Association)

The key concept of virtualization is to decouple the storage from the storage functions that
are required in the storage area network (SAN) environment.

Decoupling means abstracting the physical location of data from the logical representation of
the data. The virtualization engine presents logical entities to the user and internally manages
the process of mapping these entities to the actual location of the physical storage.

The actual mapping that is performed depends on the specific implementation, such as the
granularity of the mapping, which can range from a small fraction of a physical disk up to the
full capacity of a physical disk. A single block of information in this environment is identified by
its logical unit number (LUN), which is the physical disk, and an offset within that LUN, which
is known as a logical block address (LBA).

The term physical disk is used in this context to describe a piece of storage that might be
carved out of a RAID array in the underlying disk subsystem.

Specific to the IBM Spectrum Virtualize implementation, the logical entity whose address space is mapped in this way is referred to as a volume. The arrays of physical disks are referred to as managed disks (MDisks).
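As a simple illustration of this mapping, the IBM Spectrum Virtualize CLI can show which MDisks currently provide the extents of a given volume. The following minimal sketch assumes an existing volume that we have named VOL001 for illustration only; the name is not taken from a configuration in this book:

lsvdiskmember VOL001
lsmdisk

The first command lists the MDisks that contribute extents to VOL001, and the second lists all MDisks that the system manages, together with the storage pool to which each belongs. The host that uses VOL001 never sees this mapping; it sees only the volume that the virtualization layer presents.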

Figure 1-2 on page 4 shows an overview of block-level virtualization.



Figure 1-2 Block-level virtualization overview

The server and application are aware of the logical entities only, and they access these
entities by using a consistent interface that is provided by the virtualization layer.

The functionality of a volume that is presented to a server, such as expanding or reducing the
size of a volume, mirroring a volume, creating an IBM FlashCopy®, and thin provisioning, is
implemented in the virtualization layer. It does not rely in any way on the functionality that is
provided by the underlying disk subsystem. Data that is stored in a virtualized environment is
stored in a location-independent way, which allows a user to move or migrate data between
physical locations, which are referred to as storage pools.
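Because these functions live in the virtualization layer, they are driven from the SVC itself rather than from the underlying disk subsystem. As a hedged sketch (the volume name VOL001 and pool name Pool1 are placeholders that we chose, and the exact parameters can vary by code level), expanding a volume and adding a mirrored copy of it in another storage pool might look like the following commands:

expandvdisksize -size 10 -unit gb VOL001
addvdiskcopy -mdiskgrp Pool1 VOL001

In both cases, the underlying disk subsystems are unaware that the volume was expanded or mirrored; the virtualization layer performs the work.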

We refer to these block-level storage virtualization capabilities as the cornerstones of virtualization. These cornerstones are the core benefits that a product, such as IBM Spectrum Virtualize, can provide over traditional directly attached or SAN storage.

IBM Spectrum Virtualize provides the following benefits:


• Online volume migration while applications are running, which is possibly the greatest single benefit of storage virtualization. This capability allows data to be migrated on and between the underlying storage subsystems without any impact to the servers and applications. In fact, this migration is performed without the knowledge of the servers and applications that it even occurred (see the migration example at the end of this section).
• Simplified storage management by providing a single image for multiple controllers and a consistent user interface for provisioning heterogeneous storage.
• Enterprise-level Copy Services functions. Performing Copy Services functions within the SVC removes dependencies on the storage subsystems; therefore, it enables the source and target copies to be on different storage subsystem types.
• Storage usage can be increased by pooling storage across the SAN.
• System performance is often improved with IBM Spectrum Virtualize and SVC as a result of volume striping across multiple arrays or controllers and the additional cache that it provides.

The SVC delivers these functions in a homogeneous way on a scalable and highly available
platform over any attached storage and to any attached server.
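As a hedged example of such an online data movement, the following command migrates a volume to another storage pool while host I/O continues; the volume name VOL001 and pool name Pool1 are placeholders of our own choosing:

migratevdisk -vdisk VOL001 -mdiskgrp Pool1

The migration runs in the background, and its progress can be checked with the lsmigrate command.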


1.2 Requirements driving storage virtualization


Today, an emphasis is put on the IBM Cognitive era of clients’ businesses and their dynamic infrastructure. Therefore, the storage environment needs to be as flexible as the application and server mobility requirements demand. Business demands change quickly.

The following key client concerns drive storage virtualization:


• Growth in data center costs
• Inability of IT organizations to respond quickly to business demands
• Poor asset usage
• Poor availability or service levels
• Lack of skilled staff for storage administration

You can see the importance of addressing the complexity of managing storage networks by
applying the total cost of ownership (TCO) metric to storage networks. Industry analyses
show that storage acquisition costs are only about 20% of the TCO. Most of the remaining
costs relate to managing the storage system.

But how much of the management of multiple systems, with separate interfaces, can be
handled as a single entity? In a non-virtualized storage environment, every system is an
“island” that must be managed separately.

1.2.1 Benefits of using IBM Spectrum Virtualize


IBM SAN Volume Controller running IBM Spectrum Virtualize reduces the number of
separate environments that must be managed down to a single environment. It also provides
a single interface for storage management. After the initial configuration of the storage
subsystems, all of the day-to-day storage management operations are performed from SVC.

Because IBM Spectrum Virtualize provides advanced functions, such as mirroring and
FlashCopy, there is no need to purchase them again for each attached disk subsystem that
stands behind SVC.

Today, it is typical that open systems run at less than 50% of the usable capacity that is
provided by the RAID disk subsystems. The use of the installed raw capacity in the disk
subsystems shows usage numbers of less than 35%, depending on the RAID level that is
used. A block-level virtualization solution, such as IBM Spectrum Virtualize, can allow
capacity usage to increase to approximately 75 - 80%.

With IBM Spectrum Virtualize, free space does not need to be maintained and managed
within each storage subsystem, which further increases capacity utilization.

1.3 Latest changes and enhancements


Software version 7.3 and its related hardware upgrade represented an important milestone in
the product line development with further enhancements in the most recent software version
7.6. The product’s internal architecture was significantly rebuilt, enabling the system to break the previous limitations in terms of scalability and flexibility.

We will cover the major hardware changes of the last two years as we still consider them
worth mentioning, and provide a brief summary of them.


The 2145-DH8 and SVC Small Form Factor (SFF) Expansion Enclosure Model 24F deliver
increased performance, expanded connectivity, compression acceleration, and additional
internal flash storage capacity.

A 2145-DH8, which is based on IBM System x server technology, consists of one Xeon E5 v2
eight-core 2.6 GHz processor and 32 GB of memory. It includes three 1 Gb Ethernet ports as
standard for 1 Gbps iSCSI connectivity and supports up to four 16 Gbps Fibre Channel (FC)
I/O adapter cards for 16 Gbps FC or 10 Gbps iSCSI/Fibre Channel over Ethernet (FCoE)
connectivity (Converged Network Adapters). Up to three
quad-port 16 Gbps native FC cards are supported. It also includes two integrated AC power
supplies and battery units, replacing the uninterruptible power supply feature that was
required on the previous generation storage engine models.

The front view of the two-node cluster based on the 2145-DH8 is shown in Figure 1-3.

Figure 1-3 Front view of 2145-DH8

With the software version 7.3 announcement, we included the following changes:

• Significant upgrade of the 2145-DH8 hardware
The 2145-DH8 introduces a 2U server based on the IBM x3650 M4 series and integrates
the following features:
– Minimum eight-core processors with 32 GB memory for the SVC. Optional secondary
processor with additional 32 GB memory when third I/O card is needed. Secondary
processor is compulsory when IBM Real-time Compression is enabled.
– Integrated dual-battery pack as an uninterruptible power supply in a power outage.
External UPS device is no longer needed, avoiding miscabling issues.
– Dual, redundant power supplies, therefore no external power switch is required.
– Removed front panel. Most of its actions were moved to the functionality of the rear
Technician Port (Ethernet port) with enabled Dynamic Host Configuration Protocol
(DHCP) for instant access.
– Two boot drives with data mirrored across drives. The SVC node will still boot in a drive
failure. Dump data is striped for performance reasons.
– Enhanced scalability with up to three PCI Express slot capabilities, which allow users
to install up to three four-port 8 Gbps FC host bus adapters (HBA) (12 ports). It
supports one four-port 10GbE card (iSCSI or FCoE) and one dual-port 12 Gbps
serial-attached SCSI (SAS) card for flash drive expansion unit attachment (model
2145-24F).
– Improved Real-time Compression engine (RACE) with the processing offloaded to the
secondary dedicated processor and using 36 GB of dedicated memory cache. At a
minimum, one Compression Accelerator card must be installed (supporting up to 200
compressed volumes); two Compression Accelerator cards allow up to 512 compressed
volumes.
– Optional 2U expansion enclosure 2145-24F with up to 24 flash drives (200, 400, 800,
or 1600 GB).
򐂰 V7.3 includes the following software enhancements:
– Extended functionality of Easy Tier by storage pool balancing mode within the same
tier. It moves or exchanges extents between highly utilized and low-utilized managed
disks (MDisks) within a storage pool, therefore increasing the read and write
performance of the volumes. This function is enabled automatically in the SVC and
does not need any license. It cannot be disabled by the administrator.
– The SVC cache rearchitecture splits the original single cache into upper and lower
caches of different sizes. The upper cache uses up to 256 MB, and the lower cache uses up to
64 GB of installed memory, allocated across both processors (if installed). In addition, 36 GB of
memory is always allocated for Real-time Compression if it is enabled.
– Near-instant prepare for FlashCopy due to the presence of lower cache. Multiple
snapshots of the golden image now share cache data (instead of a number of N
copies).

At software version 7.4 we announced the following changes:


򐂰 Hardware changes:
The SVC introduces a new 16 Gbps FC adapter based on the Emulex Lancer
multiprotocol chip, which offers to the SVC either FC connectivity or Fibre Channel over
Ethernet (FCoE). The adapter can be configured as a two-port 16 Gbps FC, four-port 8
Gbps FC, or four-port 10 GbE profile.
򐂰 Software changes:
– The most noticeable change in 7.4 after the first login is the modified GUI with the new
layout of the system panel with enhanced functions that are available directly from the
welcome window. The concept of the GUI design conforms to the well-known approach
from IBM System Storage XIV Gen3 and IBM FlashSystem™ 840. It provides
common, unified procedures to manage all these systems in a similar way, allowing
administrators to simplify their operational procedures across all systems.
– Child pools are new objects that are created from a physical storage pool and
provide most of the functions of managed disk groups (MDiskgrps), for example,
volume creation, but the user can specify the capacity of the child pool at creation time. In
previous SVC releases, the capacity of a storage pool was determined entirely by the
capacity of its MDisks, so the user could not freely create a storage pool with a particular
capacity. The maximum number of storage pools remains at 128, and each storage pool can
have up to 127 child pools. Child pools can be created only in the command-line interface
(CLI); however, they are shown as child pools, with all their differences from parent pools, in
the GUI (see the CLI sketch after this list).
– A new level of volume protection prevents users from removing mappings of volumes
that are considered active. Active means that the system has detected recent I/O
activity to the volume from any host within a protection period that is defined by the

user. This behavior is enabled by system-wide policy settings. The detailed volume
view contains the new field that indicates when the volume was last accessed.
– A user can replace a failed flash drive by removing it from the 2145-24F expansion unit
and installing a new replacement drive, without requiring Directed Maintenance
Procedure (DMP) to supervise the action. The user determines that the fault LED is
illuminated for a drive, then they can expect to be able to reseat or replace the drive in
that slot. The system automatically performs the drive hardware validation tests and
promotes the unit into the configuration if these checks pass.
– The 2145-DH8 with SVC software version 7.4 and higher supports the T10 Data
Integrity Field between the internal RAID layer and the drives that are attached to
supported enclosures.
– The SVC supports 4096-bytes native drive block size without requiring clients to
change their block size. The SVC supports an intermix of 512 and 4096 drive native
block sizes within an array. The GUI recognizes drives with different block sizes and
represents them with different classes.
– The SVC 2145-DH8 improves the performance of Real-time Compression and
provides up to double I/O per second (IOPS) on the model DH8 when it is equipped
with both Compression Accelerator cards. It introduces two separate software
compression engines (RACE), taking advantage of multi-core controller architecture.
Hardware resources are shared between both RACE engines.
– Adds virtual LAN (VLAN) support for iSCSI and IP replication. When VLAN is enabled
and its ID is configured for the IP addresses that are used for either iSCSI host attach
or IP replication on the SVC, appropriate VLAN settings on the Ethernet network and
servers must also be correctly configured to avoid connectivity issues. After the VLANs
are configured, changes to their settings disrupt the iSCSI or IP replication traffic to
and from the SVC.
– New informational fields are added to the CLI output of the lsportip, lsportfc, and
lsportsas commands, indicating the physical port locations of each logical port in the
system.
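
As an illustration of the child pool concept, the following CLI sketch shows how a child pool might be
carved out of an existing parent pool. The pool name Pool0, the child pool name, and the size are
hypothetical, and the exact parameters should be verified against the command-line reference for your
code level. Because a child pool takes its capacity from the parent pool rather than from its own MDisks,
the size value must fit within the free capacity of the parent:

   mkmdiskgrp -parentmdiskgrp Pool0 -size 500 -unit gb -name Pool0_child0
   lsmdiskgrp Pool0_child0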

With the software version 7.5 announcement, we included the following changes:


򐂰 Software changes:
– Direct host attachment via 16 Gbps FC adapters with all operating systems except
AIX®.
– Support for Microsoft Offloaded Data Transfer (ODX)
– Introduction of HyperSwap® topology. It enables each volume to be presented by two
I/O groups, in other words by two pairs of SVC nodes. The configuration tolerates
combinations of node and site failures, using a flexible choice of host multipathing
driver interoperability.
– Support of VMware vSphere v6.0 Virtual Volumes (VVols). Each virtual machine (VM)
keeps its different types of data in separate VVols, each of which is presented as a volume
(SCSI logical unit) by the IBM Spectrum Virtualize system. Therefore, each VM owns
a small number of volumes.

At software version 7.6 we announced the following changes:


򐂰 Hardware changes:
– The SVC model 2145-DH8 supports up to four quad-port 16 Gbps native FC adapters
either for SAN access or direct host attachment.
– Removed support of IBM SAN Volume Controller models 2145-8G4 and 8A4.


򐂰 Software changes:
– Visual and functional enhancements in the graphical user interface (GUI), with a
changed layout of context menus and an integrated performance meter on the main page.
– Implementation of Distributed RAID (DRAID), which differs from traditional RAID arrays
by eliminating dedicated spare drives; the spare capacity is instead spread across the disks,
which makes the reconstruction of a failed disk faster.
– Introduced software encryption that is enabled by IBM Spectrum Virtualize and uses the
AES256-XTS algorithm. Encryption is enabled at the storage pool level; all newly created
volumes in such a pool are automatically encrypted. An encryption license with USB flash
drives is required.
– The Comprestimator tool is now included in the IBM Spectrum Virtualize software.
It provides statistics to estimate the potential storage savings. Available from the CLI, it
does not need a compression license and does not trigger any compression process. It
uses the same estimation algorithm as the external host-based application, so the results
are similar (see the CLI sketch after this list).
– Enhanced GUI wizard for the initial configuration of a HyperSwap topology. IBM Spectrum
Virtualize now allows an IP-attached quorum in a HyperSwap system configuration.
– Increased maximum number of iSCSI hosts attached to the system to 2048 (512 host
IQNs per I/O group) with a maximum of four iSCSI sessions per SVC node (8 per I/O
group).
– Improved and optimized read I/O performance in Hyperswap system configuration by
parallel read from primary and secondary local volume copy. Both copies must be in a
synchronized state.
– Extended support of VMware Virtual Volumes (VVols). Using IBM Spectrum
Virtualize, you can manage a one-to-one relationship of VM drives to SVC volumes, which
eliminates the I/O contention of a single, shared volume (datastore).
– Customizable login banner. Using CLI commands, you can now define a welcome
message or an important disclaimer that is shown to users on the GUI or CLI login window.
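
As a hedged illustration of the built-in Comprestimator, the following CLI sketch first starts the
estimation for a single volume (no compression is triggered), then checks the progress, and finally
displays the estimated savings. The volume ID 0 is hypothetical, and the exact command names and output
fields should be confirmed in the command-line reference for version 7.6:

   analyzevdisk 0
   lsvdiskanalysisprogress
   lsvdiskanalysis 0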

The IBM SAN Volume Controller 2145-DH8 ships with preloaded IBM Spectrum Virtualize 7.6
software. Downgrading the software to version 7.2 or lower is not supported, and the
2145-DH8 rejects any attempt to install a version that is lower than 7.3. With specific
hardware installed (four-port 16 Gbps FC HBAs), or with specific software configurations
(HyperSwap, enabled encryption, more than 512 iSCSI hosts, and so on), it is not possible to install
software versions lower than 7.6.


1.4 Summary
Storage virtualization is no longer merely a concept or an unproven technology. All major
storage vendors offer storage virtualization products. The use of storage virtualization as the
foundation for a flexible and reliable storage solution helps enterprises to better align
business and IT by optimizing the storage infrastructure and storage management to meet
business demands.

IBM Spectrum Virtualize running on the IBM SAN Volume Controller is a mature,
eighth-generation virtualization solution that uses open standards and complies with the SNIA
storage model. The SVC is an appliance-based, in-band block virtualization solution in which
intelligence (including advanced storage functions) is migrated from individual storage
devices to the storage network.

IBM Spectrum Virtualize can improve the utilization of your storage resources, simplify your
storage management, and improve the availability of business applications.


Chapter 2. IBM SAN Volume Controller


In this chapter, we explain the major concepts underlying the IBM SAN Volume Controller
(SVC).

We present a brief history of the SVC product, and then provide an architectural overview.
After we define SVC terminology, we describe software and hardware concepts and the other
functionalities that are available with the newest release.

Finally, we provide links to websites where you can obtain more information about the SVC.

This chapter includes the following topics:


򐂰 Brief history of the SAN Volume Controller
򐂰 SAN Volume Controller architectural overview
򐂰 SAN Volume Controller terminology
򐂰 SAN Volume Controller components
򐂰 Volume overview
򐂰 HyperSwap
򐂰 Distributed Raid overview
򐂰 Encryption overview
򐂰 iSCSI overview
򐂰 Advanced Copy Services overview
򐂰 SAN Volume Controller clustered system overview
򐂰 User authentication
򐂰 SAN Volume Controller hardware overview
򐂰 Flash drives
򐂰 What is new with SVC 7.6
򐂰 Useful SAN Volume Controller web links


2.1 Brief history of the SAN Volume Controller


The IBM implementation of block-level storage virtualization, the IBM System Storage SAN
Volume Controller, is based on an IBM project that was started in the second half of 1999 at
the IBM Almaden Research Center. The project was called COMmodity PArts Storage
System, or COMPASS.

One goal of this project was to create a system that was almost exclusively composed of
off-the-shelf standard parts. As with any enterprise-level storage control system, it had to
deliver a level of performance and availability that was comparable to the highly optimized
storage controllers of previous generations. The idea of building a storage control system that
is based on a scalable cluster of lower performance servers instead of a monolithic
architecture of two nodes is still a compelling idea.

COMPASS also had to address a major challenge for the heterogeneous open systems
environment, namely to reduce the complexity of managing storage on block devices.

The first documentation that covered this project was released to the public in 2003 in the
form of the IBM Systems Journal, Vol. 42, No. 2, 2003, “The software architecture of a SAN
storage control system”, by J. S. Glider, C. F. Fuente, and W. J. Scales. The article is available
at this website:
http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5386853

The results of the COMPASS project defined the fundamentals for the product architecture.
The first release of the IBM System Storage SAN Volume Controller was announced in July
2003.

Each of the following releases brought new and more powerful hardware nodes, which
approximately doubled the I/O performance and throughput of its predecessors, provided new
functionality, and offered more interoperability with new elements in host environments, disk
subsystems, and the storage area network (SAN).

The most recently released hardware node, the 2145-DH8, is based on an IBM System x
3650 M4 server technology with the following features:
򐂰 One 2.6 GHz Intel Xeon Processor E5-2650 v2 with eight processor cores (A second
processor is optional.)
򐂰 Up to 64 GB of cache
򐂰 Up to three four-port 8 Gbps Fibre Channel (FC) cards
򐂰 Up to four two-port 16 Gbps FC cards
򐂰 Up to four four-port 16 Gbps FC cards
򐂰 One four-port 10 Gbps iSCSI/Fibre Channel over Ethernet (FCoE) card
򐂰 One 12 Gbps serial-attached SCSI (SAS) Expansion card for an additional two SAS
expansions
򐂰 Three 1 Gbps ports for management and iSCSI host access
򐂰 One technician port
򐂰 Two battery packs

The SVC node can support up to two external Expansion Enclosures for Flash Cards.


2.2 SAN Volume Controller architectural overview


The SVC is a SAN block aggregation virtualization appliance that is designed for attachment
to various host computer systems.

The following major approaches are used today for the implementation of block-level
aggregation and virtualization:
򐂰 Symmetric: In-band appliance
Virtualization splits the storage that is presented by the storage systems into smaller
chunks that are known as extents. These extents are then concatenated, by using various
policies, to make virtual disks (volumes). With symmetric virtualization, host systems can
be isolated from the physical storage. Advanced functions, such as data migration, can run
without the need to reconfigure the host. With symmetric virtualization, the virtualization
engine is the central configuration point for the SAN. The virtualization engine directly
controls access to the storage and to the data that is written to the storage. As a result,
locking functions that provide data integrity and advanced functions, such as cache and
Copy Services, can be run in the virtualization engine itself. Therefore, the virtualization
engine is a central point of control for device and advanced function management.
Symmetric virtualization allows you to build a firewall in the storage network. Only the
virtualization engine can grant access through the firewall.
Symmetric virtualization can have disadvantages. The main disadvantage that is
associated with symmetric virtualization is scalability. Scalability can cause poor
performance because all input/output (I/O) must flow through the virtualization engine. To
solve this problem, you can use an n-way cluster of virtualization engines that has failover
capacity. You can scale the additional processor power, cache memory, and adapter
bandwidth to achieve the level of performance that you want. Additional memory and
processing power are needed to run advanced services, such as Copy Services and
caching.
The SVC uses symmetric virtualization. Single virtualization engines, which are known as
nodes, are combined to create clusters. Each cluster can contain between two and eight
nodes.
򐂰 Asymmetric: Out-of-band or controller-based
With asymmetric virtualization, the virtualization engine is outside the data path and
performs a metadata-style service. The metadata server contains all the mapping and
locking tables; the storage devices contain only data. In asymmetric virtual storage
networks, the data flow is separated from the control flow, and a separate network or SAN
link is used for control purposes. Because the flow of control is separated from the flow of
data, I/O operations can use the full bandwidth of the SAN.
Asymmetric virtualization can have the following disadvantages:
– Data is at risk to increased security exposures, and the control network must be
protected with a firewall.
– Metadata can become complicated when files are distributed across several devices.
– Each host that accesses the SAN must know how to access and interpret the
metadata. Specific device drivers or agent software must therefore be running on each
of these hosts.
– The metadata server cannot run advanced functions, such as caching or Copy
Services, because it only knows about the metadata and not about the data itself.


Figure 2-1 shows variations of the two virtualization approaches.

Figure 2-1 Overview of block-level virtualization architectures (in-band appliance with symmetric
virtualization versus out-of-band, controller-based asymmetric virtualization)

Although these approaches provide essentially the same cornerstones of virtualization,
interesting side-effects can occur, as described next.

The controller-based approach has high functionality, but it fails in terms of scalability or
upgradeability. Because of the nature of its design, no true decoupling occurs with this
approach, which becomes an issue for the lifecycle of this solution, such as with a controller.
Data migration issues and questions are challenging, such as how to reconnect the servers to
the new controller, and how to reconnect them online without any effect on your applications.

Be aware that with this approach, you not only replace a controller but also implicitly replace
your entire virtualization solution. In addition to replacing the hardware, updating or
repurchasing the licenses for the virtualization feature, advanced copy functions, and so on
might be necessary.

With a SAN or fabric-based appliance solution that is based on a scale-out cluster
architecture, lifecycle management tasks, such as adding or replacing new disk subsystems
or migrating data between them, are simple. Servers and applications remain online, data
migration occurs transparently on the virtualization platform, and licenses for virtualization
and copy services require no update: that is, they require no other costs when disk
subsystems are replaced.

Only the fabric-based appliance solution provides an independent and scalable virtualization
platform that can provide enterprise-class copy services and that is open for future interfaces
and protocols. By using the fabric-based appliance solution, you can choose the disk
subsystems that best fit your requirements, and you are not locked into specific SAN
hardware.

For these reasons, IBM chose the SAN or fabric-based appliance approach for the
implementation of the SVC.


The SVC includes the following key characteristics:


򐂰 It is highly scalable, which provides an easy growth path from two to eight nodes (nodes are
added in pairs because of the cluster function).
򐂰 It is SAN interface-independent. It supports FC and FCoE and iSCSI, but it is also open for
future enhancements.
򐂰 It is host-independent for fixed block-based Open Systems environments.
򐂰 It is external storage RAID controller-independent, which provides a continuous and
ongoing process to qualify more types of controllers.
򐂰 It can use disks that are internal to the nodes (solid-state drives (SSDs) in older SVC
systems) or externally direct-attached in Flash Expansions.

On the SAN storage that is provided by the disk subsystems, the SVC can offer the following
services:
򐂰 Creates a single pool of storage
򐂰 Provides logical unit virtualization
򐂰 Manages logical volumes
򐂰 Mirrors logical volumes

The SVC system also provides these functions:


򐂰 Large scalable cache
򐂰 Copy Services
򐂰 IBM FlashCopy (point-in-time copy) function, including thin-provisioned FlashCopy to
make multiple targets affordable
򐂰 Metro Mirror (synchronous copy)
򐂰 Global Mirror (asynchronous copy)
򐂰 Data migration
򐂰 Space management
򐂰 IBM Easy Tier to migrate the most frequently used data to higher or lower-performance
storage
򐂰 Thin-provisioned logical volumes
򐂰 Compressed volumes to consolidate storage
򐂰 Encryption of external attached Storage
򐂰 Supporting HyperSwap
򐂰 Supporting Virtual Volumes
򐂰 Direct attachment of Hosts

2.2.1 SAN Volume Controller topology


SAN-based storage is managed by the SVC in one or more “pairs” of SVC hardware nodes.
This configuration is referred to as a clustered system. These nodes are attached to the SAN
fabric, with RAID controllers and host systems. The SAN fabric is zoned to allow the SVC to
“see” the RAID controllers, and for the hosts to see the SVC.

The hosts cannot see or operate on the same physical storage (logical unit number (LUN))
from the RAID controller that is assigned to the SVC. Storage controllers can be shared
between the SVC and direct host access if the same LUNs are not shared. The zoning

capabilities of the SAN switch must be used to create distinct zones to ensure that this rule is
enforced.

SAN fabrics can include standard FC, FC over Ethernet, iSCSI over Ethernet, or possible
future types.

Figure 2-2 shows a conceptual diagram of a storage system that uses the SVC. It shows
several hosts that are connected to a SAN fabric or LAN. In practical implementations that
have high-availability requirements (most of the target clients for the SVC), the SAN fabric
“cloud” represents a redundant SAN. A redundant SAN consists of a fault-tolerant
arrangement of two or more counterpart SANs, which provide alternative paths for each
SAN-attached device.

Both scenarios (the use of a single network and the use of two physically separate networks)
are supported for iSCSI-based and LAN-based access networks to the SVC. Redundant
paths to volumes can be provided in both scenarios.

For simplicity, Figure 2-2 shows only one SAN fabric and two zones: host and storage. In a
real environment, it is a preferred practice to use two redundant SAN fabrics. The SVC can be
connected to up to four fabrics. For more information about zoning, see Chapter 3, “Planning
and configuration” on page 83.

Figure 2-2 SVC conceptual and topology overview

A clustered system of SVC nodes that are connected to the same fabric presents logical
disks or volumes to the hosts. These volumes are created from managed LUNs or managed
disks (MDisks) that are presented by the RAID disk subsystems.

The following two distinct zones are shown in the fabric:


򐂰 A host zone, in which the hosts can see and address the SVC nodes
򐂰 A storage zone, in which the SVC nodes can see and address the MDisks/LUNs that are
presented by the RAID subsystems


Hosts are not permitted to operate on the RAID LUNs directly, and all data transfer happens
through the SVC nodes. This design is commonly referred to as symmetric virtualization.
LUNs that are not processed by the SVC can still be provided to the hosts.

For iSCSI-based host access, the use of two networks and separating iSCSI traffic within the
networks by using a dedicated virtual local area network (VLAN) path for storage traffic
prevents any IP interface, switch, or target port failure from compromising the host servers’
access to the volumes’ LUNs.

2.3 SAN Volume Controller terminology


To provide a higher level of consistency among IBM storage products, the terminology that is
used starting with the SVC version 7 (and throughout the rest of this book) changed when
compared to previous SVC releases. Table 2-1 summarizes the major changes.

Table 2-1 SVC glossary terms


Each entry shows the SVC terminology, the previous SVC term, and a description:

򐂰 Clustered system or system (previously cluster): A collection of nodes that are placed in
pairs (I/O Groups) for redundancy to provide a single management interface.

򐂰 Event (previously error): An occurrence of significance to a task or system. Events can
include the completion or failure of an operation, a user action, or the change in the state of a
process.

򐂰 Host mapping (previously VDisk-to-host mapping): The process of controlling which hosts
have access to specific volumes within a clustered system.

򐂰 Storage pool (pool) (previously managed disk (MDisk) group): A collection of storage that
identifies an underlying set of resources. These resources provide the capacity and
management requirements for a volume or set of volumes.

򐂰 Thin provisioning (or thin-provisioned) (previously space-efficient): The ability to define a
storage unit (full system, storage pool, or volume) with a logical capacity size that is larger
than the physical capacity that is assigned to that storage unit. Storage is allocated when data
is written to it.

򐂰 Volume (previously virtual disk (VDisk)): A fixed amount of physical or virtual storage on a
data storage medium (tape or disk) that supports a form of identifier and parameter list, such
as a volume label or I/O control.


For more information about the terms and definitions that are used in the SVC environment,
see Appendix B, “Terminology” on page 923.

2.4 SAN Volume Controller components


The SVC product provides block-level aggregation and volume management for attached disk
storage. In simpler terms, the SVC manages a number of back-end storage controllers or
locally attached disks and maps the physical storage within those controllers or disk arrays
into logical disk images, or volumes, that can be seen by application servers and workstations
in the SAN.

The SAN is zoned so that the application servers cannot see the back-end physical storage,
which prevents any possible conflict between the SVC and the application servers that are
trying to manage the back-end storage. The SVC is based on the components that are
described next.

2.4.1 Nodes
Each SVC hardware unit is called a node. The node provides the virtualization for a set of
volumes, cache, and copy services functions. The SVC nodes are deployed in pairs (cluster)
and multiple pairs make up a clustered system or system. A system can consist of 1 - 4 SVC
node pairs.

One of the nodes within the system is known as the configuration node. The configuration
node manages the configuration activity for the system. If this node fails, the system chooses
a new node to become the configuration node.

Because the nodes are installed in pairs, each node provides a failover function to its partner
node if a node fails.
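
A quick way to see which node currently holds the configuration node role is the lsnode CLI view,
sketched below; the config_node column shows yes for that node. Treat this as illustrative, because
column names can vary slightly between code levels:

   lsnode -delim ,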

2.4.2 I/O Groups


Each pair of SVC nodes is also referred to as an I/O Group. An SVC clustered system can
have 1 - 4 I/O Groups.

A specific volume is always presented to a host server by a single I/O Group of the system.
The I/O Group can be changed.

When a host server performs I/O to one of its volumes, all the I/Os for a specific volume are
directed to one specific I/O Group in the system. Also, under normal conditions, the I/Os for
that specific volume are always processed by the same node within the I/O Group. This node
is referred to as the preferred node for this specific volume.

Both nodes of an I/O Group act as the preferred node for their own specific subset of the total
number of volumes that the I/O Group presents to the host servers. A maximum of
2,048 volumes per I/O Group is allowed. In an eight-node cluster, the maximum is 8,192
volumes. However, both nodes also act as failover nodes for their respective partner node
within the I/O Group. Therefore, a node takes over the I/O workload from its partner node, if
required.

Therefore, in an SVC-based environment, the I/O handling for a volume can switch between
the two nodes of the I/O Group. For this reason, it is mandatory for servers that are connected
through FC to use multipath drivers to handle these failover situations.


The SVC I/O Groups are connected to the SAN so that all application servers that are
accessing volumes from this I/O Group have access to this group. Up to 512 host server
objects can be defined per I/O Group. The host server objects can access volumes that are
provided by this specific I/O Group.

If required, host servers can be mapped to more than one I/O Group within the SVC system;
therefore, they can access volumes from separate I/O Groups. You can move volumes
between I/O Groups to redistribute the load between the I/O Groups. Modifying the I/O Group
that services the volume can be done concurrently with I/O operations if the host supports
nondisruptive volume move. It also requires a rescan at the host level to ensure that the
multipathing driver is notified that the allocation of the preferred node changed and the ports
by which the volume is accessed changed. This modification can be done in the situation
where one pair of nodes becomes overused.
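
For illustration, such a nondisruptive move might be performed with the movevdisk command, as in the
following sketch. The volume and I/O Group names are hypothetical, and the host-side rescan described
above is still required:

   movevdisk -iogrp io_grp1 vol01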

2.4.3 System
The system or clustered system consists of 1 - 4 I/O Groups. Certain configuration limitations
are then set for the individual system. For example, the maximum number of volumes that is
supported per system is 8,192 (having a maximum of 2,048 volumes per I/O Group), or the
maximum managed disk that is supported is ~28 PiB per system.

All configuration, monitoring, and service tasks are performed at the system level.
Configuration settings are replicated to all nodes in the system. To facilitate these tasks, a
management IP address is set for the system.

A process is provided to back up the system configuration data onto disk so that it can be
restored if there is a disaster. This method does not back up application data. Only SVC
system configuration information is backed up.

For the purposes of remote data mirroring, two or more systems must form a partnership
before relationships between mirrored volumes are created.

For more information about the maximum configurations that apply to the system, I/O Group,
and nodes, see this website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1005251

2.4.4 Stretched system


We describe the three possible implementations of a stretched system.


Stretched systems
A stretched system is an extended high availability (HA) method that is supported by the SVC
to enable I/O operations to continue after the loss of half of the system. A stretched system is
also sometimes referred to as a split system because one-half of the system and I/O Group is
usually in a geographically distant location from the other, often 10 kilometers (6.2 miles) or
more. The maximum distance is approximately 300 km (186.4 miles). It depends on the
round-trip time, which must not be greater than 80 ms. With version 7.4, this round-trip time is
enhanced to 250 ms. A third site is required to host an FC storage system that provides a
quorum disk. This storage system can also be used for other purposes than to act only as a
quorum disk.

Enhanced stretched systems


SVC supports an enhanced stretched system configuration. This configuration can be used
regardless of whether the stretched system is configured with or without inter-switch links
(ISLs) between nodes.

Enhanced stretched systems provide the following primary benefits:


򐂰 In addition to the automatic failover that occurs when a site fails in a standard stretched
system configuration, an enhanced stretched system provides a manual override that can
be used to choose which of the two sites continues operation.
򐂰 Enhanced stretched systems intelligently route I/O traffic between nodes and controllers
to reduce the amount of I/O traffic between sites, and to minimize the impact to host
application I/O latency.
򐂰 Enhanced stretched systems include an implementation of additional policing rules to
ensure the correct configuration of a standard stretched system.

Note: The site attribute in the node and controller object needs to be set in an enhanced
stretched system.

For more information, see Appendix C, “Stretched Cluster” on page 939.

2.4.5 MDisks
The SVC system and its I/O Groups view the storage that is presented to the SAN by the
back-end controllers as a number of disks or LUNs, which are known as managed disks or
MDisks. Because the SVC does not attempt to provide recovery from physical disk failures
within the back-end controllers, an MDisk often is provisioned from a RAID array. However,
the application servers do not see the MDisks at all. Instead, they see a number of logical
disks, which are known as virtual disks or volumes, which are presented by the SVC I/O
Groups through the SAN (FC/FCoE) or LAN (iSCSI) to the servers.

The MDisks are placed into storage pools where they are divided into a number of extents,
which are 16 MiB - 8192 MiB, as defined by the SVC administrator.

For more information about the total storage capacity that is manageable per system
regarding the selection of extents, see this website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1005251#_Extents

A volume is host-accessible storage that was provisioned out of one storage pool; or, if it is a
mirrored volume, out of two storage pools.


The maximum size of an MDisk is 1 PiB. An SVC system supports up to 4096 MDisks
(including internal RAID arrays). At any point, an MDisk is in one of the following three modes:
򐂰 Unmanaged MDisk
An MDisk is reported as unmanaged when it is not a member of any storage pool. An
unmanaged MDisk is not associated with any volumes and has no metadata that is stored
on it. The SVC does not write to an MDisk that is in unmanaged mode, except when it
attempts to change the mode of the MDisk to one of the other modes. The SVC can see
the resource, but the resource is not assigned to a storage pool.
򐂰 Managed MDisk
Managed mode MDisks are always members of a storage pool, and they contribute
extents to the storage pool. Volumes (if not operated in image mode) are created from
these extents. MDisks that are operating in managed mode might have metadata extents
that are allocated from them and can be used as quorum disks. This mode is the most
common and normal mode for an MDisk.
򐂰 Image mode MDisk
Image mode provides a direct block-for-block translation from the MDisk to the volume by
using virtualization. This mode is provided to satisfy the following major usage scenarios:
– Image mode allows the virtualization of MDisks that already contain data that was
written directly and not through an SVC; rather, it was created by a direct-connected
host.
This mode allows a client to insert the SVC into the data path of an existing storage
volume or LUN with minimal downtime. For more information about the data migration
process, see Chapter 6, “Data migration” on page 237.
Image mode allows a volume that is managed by the SVC to be used with the native
copy services function that is provided by the underlying RAID controller. To avoid the
loss of data integrity when the SVC is used in this way, it is important that you disable
the SVC cache for the volume.
– The SVC provides the ability to migrate to image mode, which allows the SVC to export
volumes and access them directly from a host without the SVC in the path.
Each MDisk that is presented from an external disk controller has an online path count,
which is the number of nodes that have access to that MDisk. The maximum count is the
maximum number of paths that is detected at any point by the system. The current count
is what the system sees at this point. A current value that is less than the maximum can
indicate that SAN fabric paths were lost. For more information, see 2.5.1, “Image mode
volumes” on page 29.
SSDs that are internal to SVC 2145-CG8 nodes, or Flash space that is presented by the external
Flash Enclosures of the SVC 2145-DH8 nodes, are presented to the cluster as MDisks. To
determine whether the selected MDisk is an SSD/Flash MDisk, click the link on the MDisk name
to display the Viewing MDisk Details panel.
If the selected MDisk is an SSD/Flash that is on an SVC, the Viewing MDisk Details panel
displays values for the Node ID, Node Name, and Node Location attributes. Alternatively,
you can select Work with Managed Disks → Disk Controller Systems from the
portfolio. On the Viewing Disk Controller panel, you can match the MDisk to the disk
controller system that has the corresponding values for those attributes.


2.4.6 Quorum disk


A quorum disk is a managed disk (MDisk) that contains a reserved area for use exclusively by
the system. The system uses quorum disks to break a tie when exactly half the nodes in the
system remain after a SAN failure. This situation is referred to as split brain. Quorum
functionality is not supported on flash drives within SVC nodes. With IBM SVC Version 7.6,
an IP-based quorum is also supported.

Three candidate quorum disks exist. However, only one quorum disk is active at any time. For
more information about quorum disks, see 2.11.1, “Quorum disks” on page 55.
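
As an illustrative sketch, the quorum disk assignment can be inspected and, if needed, changed from the
CLI. The MDisk ID and quorum index below are hypothetical, and the exact chquorum syntax should be
confirmed in the command-line reference:

   lsquorum
   chquorum -mdisk 5 2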

2.4.7 Disk tier


It is likely that the MDisks (LUNs) that are presented to the SVC system have various
performance attributes because of the type of disk or RAID array on which they are placed.
The MDisks can be on 15 K disk revolutions per minute (RPMs) Fibre Channel or SAS disk,
Nearline SAS, or Serial Advanced Technology Attachment (SATA), SSDs, or even on Flash
drives.

Therefore, a storage tier attribute is assigned to each MDisk, with the default being
generic_hdd.

2.4.8 Storage pool


A storage pool is a collection of up to 128 MDisks that provides the pool of storage from which
volumes are provisioned. A single system can manage up to 1024 storage pools. The size of
these pools can be changed (expanded or shrunk) at run time by adding or removing MDisks,
without taking the storage pool or the volumes offline. Expanding a storage pool with a single
drive is not possible.

At any point, an MDisk can be a member in one storage pool only, except for image mode
volumes. For more information, see 2.5.1, “Image mode volumes” on page 29.

Figure 2-3 on page 23 shows the relationships of the SVC entities to each other.


Figure 2-3 Overview of SVC clustered system with I/O Group

Each MDisk in the storage pool is divided into a number of extents. The size of the extent is
selected by the administrator when the storage pool is created and cannot be changed later.
The size of the extent is 16 MiB - 8192 MiB.

It is a preferred practice to use the same extent size for all storage pools in a system. This
approach is a prerequisite for supporting volume migration between two storage pools. If the
storage pool extent sizes are not the same, you must use volume mirroring (2.5.4, “Mirrored
volumes” on page 32) to copy volumes between pools.

The SVC limits the number of extents in a system to 2^22 (about 4 million). Because the number of
addressable extents is limited, the total capacity of an SVC system depends on the extent
size that is chosen by the SVC administrator. The capacity numbers that are specified in
Table 2-2 on page 24 for an SVC system assume that all defined storage pools were created
with the same extent size.


Table 2-2 Extent size-to-addressability matrix

Extent size (MB) | Maximum non-thin-provisioned volume capacity in GB | Maximum thin-provisioned volume capacity in GB | Maximum MDisk capacity in GB | Total storage capacity that is manageable per system (a)
16 | 2,048 (2 TiB) | 2,000 | 2,048 (2 TiB) | 64 TiB
32 | 4,096 (4 TiB) | 4,000 | 4,096 (4 TiB) | 128 TiB
64 | 8,192 (8 TiB) | 8,000 | 8,192 (8 TiB) | 256 TiB
128 | 16,384 (16 TiB) | 16,000 | 16,384 (16 TiB) | 512 TiB
256 | 32,768 (32 TiB) | 32,000 | 32,768 (32 TiB) | 1 PiB
512 | 65,536 (64 TiB) | 65,000 | 65,536 (64 TiB) | 2 PiB
1,024 | 131,072 (128 TiB) | 130,000 | 131,072 (128 TiB) | 4 PiB
2,048 | 262,144 (256 TiB) | 260,000 | 262,144 (256 TiB) | 8 PiB
4,096 | 262,144 (256 TiB) | 262,144 | 524,288 (512 TiB) | 16 PiB
8,192 | 262,144 (256 TiB) | 262,144 | 1,048,576 (1024 TiB) | 32 PiB

a. The total capacity values assume that all of the storage pools in the system use the same
extent size.

For most systems, a capacity of 1 - 2 PiB is sufficient. A preferred practice is to use a 256 MiB
extent size for larger clustered systems. The default extent size is 1,024 MB.
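
For example, a storage pool with an explicit extent size might be created as follows; the pool name, the
MDisk names, and the 1,024 MB extent size are illustrative only:

   mkmdiskgrp -name Pool1 -ext 1024 -mdisk mdisk0:mdisk1:mdisk2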

Single-tiered storage pool


MDisks that are used in a single-tiered storage pool must have the following characteristics to
avoid causing performance problems and other issues:
򐂰 They have the same hardware characteristics, for example, the same RAID type, RAID
array size, disk type, and RPMs.
򐂰 The disk subsystems that are providing the MDisks must have similar characteristics, for
example, maximum I/O operations per second (IOPS), response time, cache, and
throughput.
򐂰 The MDisks that are used are the same size; therefore, the MDisks provide the same
number of extents. If that is not feasible, check the distribution of the volumes’ extents in
that storage pool.

For more information, see IBM System Storage SAN Volume Controller and Storwize V7000
Best Practices and Performance Guidelines, SG24-7521, which is available at this website:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open

Multitiered storage pool


A multitiered storage pool has a mix of MDisks with more than one type of disk tier attribute,
for example, a storage pool that contains a mix of generic_hdd and generic_ssd MDisks.

Therefore, a multitiered storage pool contains MDisks with various characteristics, as
opposed to a single-tier storage pool. However, it is a preferred practice for each tier to have
MDisks of the same size and MDisks that provide the same number of extents.


Multitiered storage pools are used to enable the automatic migration of extents between disk
tiers by using the IBM SVC Easy Tier function.

2.4.9 Volumes
Volumes are logical disks that are presented to the host or application servers by the SVC.
The hosts cannot see the MDisks; they can see only the logical volumes that are created from
combining extents from a storage pool.

The following types of volumes are available:


򐂰 Striped: A volume that is created in striped mode has extents that are allocated from each
MDisk in the storage pool in a round-robin fashion.
򐂰 Sequential: With a sequential mode volume, extents are allocated sequentially from an
MDisk.
򐂰 Image: Image mode is a one-to-one mapped extent mode volume.

Striped mode is the best method to use for most cases. However, sequential extent allocation
mode can slightly increase the sequential performance for certain workloads.

Figure 2-4 shows the striped volume mode and sequential volume mode. How the extent
allocation from the storage pool differs also is shown.

Figure 2-4 Storage pool extents overview

You can allocate the extents for a volume in many ways. The process is under full user control
when a volume is created and the allocation can be changed at any time by migrating single
extents of a volume to another MDisk within the storage pool.
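
The following sketch shows how a striped volume and a sequential volume might be created from the CLI.
The pool, I/O Group, MDisk, and volume names, and the sizes, are hypothetical:

   mkvdisk -mdiskgrp Pool1 -iogrp io_grp0 -size 100 -unit gb -vtype striped -name vol01
   mkvdisk -mdiskgrp Pool1 -iogrp io_grp0 -size 100 -unit gb -vtype seq -mdisk mdisk3 -name vol02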

2.4.10 Easy Tier performance function


IBM Easy Tier is a performance function that automatically migrates or moves extents off a
volume to or from one MDisk storage tier to another MDisk storage tier. Since version 7.3, we

have a three-tier implementation. We support three kinds of tier attributes. Easy Tier monitors
the host I/O activity and latency on the extents of all volumes with the Easy Tier function that
is turned on in a multitier storage pool over a 24-hour period.

Next, it creates an extent migration plan that is based on this activity and then dynamically
moves high-activity or hot extents to a higher disk tier within the storage pool. It also moves
extents whose activity dropped off or cooled down from the high-tier MDisks back to a
lower-tiered MDisk.

Easy Tier: The Easy Tier function can be turned on or off at the storage pool level and
volume level.
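
As a hedged example, the Easy Tier setting might be changed at the pool level and at the volume level
with the following CLI commands. The object names are hypothetical, and the valid -easytier values
(for example, auto, on, off, or measure for a pool) should be checked in the command-line reference:

   chmdiskgrp -easytier auto Pool1
   chvdisk -easytier on vol01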

The Easy Tier function can make it more appropriate to use smaller storage pool extent sizes.
The usage statistics file can be offloaded from the SVC nodes. Then, you can use the IBM
Storage Tier Advisor Tool (STAT) to create a summary report. STAT is freely available on the web
at the following link:

http://www.ibm.com/support/docview.wss?uid=ssg1S4000935

2.4.11 Evaluation mode for Easy Tier


To experience the potential benefits of using Easy Tier in your environment before installing
Flash Drives, you can turn on the Easy Tier function for a single-level storage pool. Next, turn
on the Easy Tier function for the volumes within that pool. Easy Tier then starts monitoring
activity on the volume extents in the pool.

Easy Tier creates a migration report every 24 hours on the number of extents that can be
moved if the pool were a multitiered storage pool. Therefore, although Easy Tier extent
migration is not possible within a single-tier pool, the Easy Tier statistical measurement
function is available.

The usage statistics file can be offloaded from the SVC configuration node by using the GUI
(click Settings → Support). Figure 2-5 on page 27 shows you an example of what the file
looks like.


Figure 2-5 Sample for a heatmap statistic file

Be aware that if you cannot find the heatmap file, check the other node to see whether it is there.

Select the other node to discover the heatmap file (Figure 2-6).

Figure 2-6 Select other node for the heatmap file

Then, you can use the IBM Storage Tier Advisor Tool (STAT) to create the statistics report. A web
browser is used to view the STAT output. For more information about the STAT utility, see the
following web page:
https://ibm.biz/BdEzve


For more information about Easy Tier functionality and generating statistics by using IBM
STAT, see Chapter 8, “Advanced features for storage efficiency” on page 425.

2.4.12 Hosts
Volumes can be mapped to a host to allow access for a specific server to a set of volumes. A
host within the SVC is a collection of host bus adapter (HBA) worldwide port names
(WWPNs) or iSCSI-qualified names (IQNs) that are defined on the specific server.

Note: iSCSI names are internally identified by “fake” WWPNs, or WWPNs that are
generated by the SVC. Volumes can be mapped to multiple hosts, for example, a volume
that is accessed by multiple hosts of a server system.

iSCSI is an alternative means of attaching hosts. However, all communication with back-end
storage subsystems (and with other SVC systems) is still through FC.

Node failover can be handled without having a multipath driver that is installed on the iSCSI
server. An iSCSI-attached server can reconnect after a node failover to the original target
IP address, which is now presented by the partner node. To protect the server against link
failures in the network or HBA failures, the use of a multipath driver is mandatory.

Volumes are LUN-masked to the host’s HBA WWPNs by a process called host mapping.
Mapping a volume to the host makes it accessible to the WWPNs or iSCSI names (IQNs) that
are configured on the host object.

For a SCSI over Ethernet connection, the IQN identifies the iSCSI target (destination)
adapter. Host objects can have IQNs and WWPNs.
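
To tie these concepts together, the following hedged CLI sketch defines an FC host object and an iSCSI
host object, and then maps a volume to one of them. The WWPN, IQN, and object names are placeholders only:

   mkhost -name appsrv01 -fcwwpn 2100000E1E30ACFC
   mkhost -name appsrv02 -iscsiname iqn.1994-05.com.redhat:appsrv02
   mkvdiskhostmap -host appsrv01 vol01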

2.4.13 Maximum supported configurations


To see more information about the maximum configurations that are applicable to the system,
I/O Group, and nodes, select Restrictions in the section of the following SVC support site
that corresponds to your SVC code level:
http://www.ibm.com/support/docview.wss?uid=ssg1S1005251

Certain configuration limits exist in the SVC, including the following list of important limits. For
the most current information, see the SVC support site.
򐂰 Sixteen worldwide node names (WWNNs) per storage subsystem
򐂰 A maximum MDisk size of 1 PiB
򐂰 A maximum extent size of 8192 MiB
򐂰 Object names can be up to 63 characters long

2.5 Volume overview


The maximum size of a single volume is 256 TiB. A single fully populated (eight-node) SVC
system supports up to 8,192 volumes.

Volumes have the following characteristics or attributes:


򐂰 Volumes can be created and deleted.
򐂰 Volumes can be resized (expanded or shrunk).

򐂰 Volume extents can be migrated at run time to another MDisk or storage pool.
򐂰 Volumes can be created as fully allocated or thin-provisioned. A conversion from a fully
allocated to a thin-provisioned volume and vice versa can be done at run time.
򐂰 Volumes can be stored in multiple storage pools (mirrored) to make them resistant to disk
subsystem failures or to improve the read performance.
򐂰 Volumes can be mirrored synchronously or asynchronously for longer distances. An SVC
system can run active volume mirrors to a maximum of three other SVC systems, but not
from the same volume.
򐂰 Volumes can be copied by using FlashCopy. Multiple snapshots and quick restore from
snapshots (reverse FlashCopy) are supported.
򐂰 Volumes can be compressed.
򐂰 Volumes can be virtual. The system supports VMware vSphere Virtual Volumes,
sometimes referred to as VVols, which allow VMware vCenter to manage system objects
like volumes and pools. The system administrator can create these objects and assign
ownership to VMware administrators to simplify management of these objects.

Volumes have two major modes: managed mode and image mode. Managed mode volumes
have two policies: the sequential policy and the striped policy. Policies define how the extents
of a volume are allocated from a storage pool.

2.5.1 Image mode volumes


Image mode volumes are used to migrate LUNs that were previously mapped directly to host
servers over to the control of the SVC.

Image mode provides a one-to-one mapping between the logical block addresses (LBAs)
between a volume and an MDisk. Image mode volumes have a minimum size of one block
(512 bytes) and always occupy at least one extent.

An image mode MDisk is mapped to one, and only one, image mode volume.

The volume capacity that is specified must be equal to the size of the image mode MDisk.
When you create an image mode volume, the specified MDisk must be in unmanaged mode
and must not be a member of a storage pool. The MDisk is made a member of the specified
storage pool (Storage Pool_IMG_xxx) as a result of creating the image mode volume.

The SVC also supports the reverse process in which a managed mode volume can be
migrated to an image mode volume. If a volume is migrated to another MDisk, it is
represented as being in managed mode during the migration and is only represented as an
image mode volume after it reaches the state where it is a straight-through mapping.

An image mode MDisk is associated with exactly one volume. The last extent is partial (not
filled) if the (image mode) MDisk is not a multiple of the MDisk Group’s extent size. An image
mode volume is a pass-through one-to-one map of its MDisk. It cannot be a quorum disk and
it does not have any SVC metadata extents that are assigned to it. Managed or image mode
MDisks are always members of a storage pool.

It is a preferred practice to put image mode MDisks in a dedicated storage pool and use a
special name for it (for example, Storage Pool_IMG_xxx). The extent size that is chosen for
this specific storage pool must be the same as the extent size into which you plan to migrate
the data. All of the SVC copy services functions can be applied to image mode disks. See
Figure 2-7 on page 30.


Figure 2-7 Image mode volume versus striped volume
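
As an illustrative sketch, an unmanaged MDisk that already contains host data might be brought under SVC
control as an image mode volume with a command like the following one. The pool, MDisk, and volume names
are hypothetical, and the target pool is a dedicated image mode pool as recommended above:

   mkvdisk -mdiskgrp Pool_IMG_01 -iogrp io_grp0 -vtype image -mdisk mdisk7 -name legacy_vol01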

2.5.2 Managed mode volumes


Volumes operating in managed mode provide a full set of virtualization functions. Within a
storage pool, the SVC supports an arbitrary relationship between extents on (managed
mode) volumes and extents on MDisks. Each volume extent maps to exactly one MDisk
extent.

Figure 2-8 on page 31 shows this mapping. It also shows a volume that consists of several
extents that are shown as V0 - V7. Each of these extents is mapped to an extent on one of the
MDisks: A, B, or C. The mapping table stores the details of this indirection. Several of the
MDisk extents are unused. No volume extent maps to them. These unused extents are
available for use in creating volumes, migration, expansion, and so on.


Figure 2-8 Simple view of block virtualization

The allocation of a specific number of extents from a specific set of MDisks is performed by
the following algorithm: if the set of MDisks from which to allocate extents contains more than
one MDisk, extents are allocated from MDisks in a round-robin fashion. If an MDisk has no
free extents when its turn arrives, its turn is missed and the round-robin moves to the next
MDisk in the set that has a free extent.

When a volume is created, the first MDisk from which to allocate an extent is chosen in a
pseudo-random way rather than by choosing the next disk in a round-robin fashion. The
pseudo-random algorithm avoids the situation where the “striping effect” that is inherent in a
round-robin algorithm places the first extent for many volumes on the same MDisk. Placing
the first extent of a number of volumes on the same MDisk can lead to poor performance for
workloads that place a large I/O load on the first extent of each volume, or that create multiple
sequential streams.
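
As a rough illustration only (not the actual SVC implementation), the following Python sketch models this allocation behavior: a pseudo-randomly chosen first MDisk, followed by round-robin allocation that skips MDisks with no free extents.

import random

def allocate_extents(mdisk_free, count):
    """Illustrative extent allocation: pseudo-random start, then round-robin.

    mdisk_free: list with the number of free extents on each MDisk in the pool.
    count: number of extents to allocate for the new volume.
    Returns a list of (mdisk_index, extent_number) pairs.
    """
    allocation = []
    next_extent = [0] * len(mdisk_free)        # next free extent per MDisk (simplified)
    i = random.randrange(len(mdisk_free))      # pseudo-random first MDisk
    while len(allocation) < count:
        if all(f == 0 for f in mdisk_free):
            raise RuntimeError("storage pool has no free extents")
        if mdisk_free[i] > 0:                  # an MDisk with no free extents is skipped
            allocation.append((i, next_extent[i]))
            next_extent[i] += 1
            mdisk_free[i] -= 1
        i = (i + 1) % len(mdisk_free)          # round-robin to the next MDisk
    return allocation

# Example: three MDisks with 4, 1, and 4 free extents; allocate 6 extents
print(allocate_extents([4, 1, 4], 6))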

2.5.3 Cache mode and cache-disabled volumes


Under normal conditions, a volume’s read and write data is held in the cache of its preferred
node, with a mirrored copy of write data that is held in the partner node of the same I/O
Group. However, it is possible to create a volume with cache disabled, which means that the
I/Os are passed directly through to the back-end storage controller rather than being held in
the node’s cache.

Having cache-disabled volumes makes it possible to use the native copy services in the
underlying RAID array controller for MDisks (LUNs) that are used as SVC image mode
volumes. In most cases, however, using SVC Copy Services rather than the underlying disk
controller copy services gives better results.


2.5.4 Mirrored volumes


The mirrored volume feature provides a simple RAID 1 function; therefore, a volume has two
physical copies of its data. This approach allows the volume to remain online and accessible
even if one of the MDisks sustains a failure that causes it to become inaccessible.

The two copies of the volume often are allocated from separate storage pools or by using
image-mode copies. The volume can participate in FlashCopy and remote copy relationships;
it is serviced by an I/O Group; and it has a preferred node.

A copy is not a separate object and cannot be created or manipulated except in the context
of its volume. Copies are identified through the configuration interface by the ID of their
parent volume together with a copy ID, which can be 0 or 1.

This feature also provides point-in-time copy functionality, which is achieved by “splitting” a
copy from the volume. However, volume mirroring does not address other forms of mirroring
that are based on remote copy (for example, IBM HyperSwap), which mirror volumes across
I/O Groups or clustered systems. It is also not intended to manage mirroring or remote copy
functions in back-end controllers.

Figure 2-9 provides an overview of volume mirroring.

Figure 2-9 Volume mirroring overview

A second copy can be added to a volume with a single copy or removed from a volume with
two copies. Checks prevent the accidental removal of the only remaining copy of a volume. A
newly created, unformatted volume with two copies initially has the two copies in an
out-of-synchronization state. The primary copy is defined as “fresh” and the secondary copy
is defined as “stale”.

The synchronization process updates the secondary copy until it is fully synchronized. This
update is done at the default “synchronization rate” or at a rate that is defined when the
volume is created or modified. The synchronization status for mirrored volumes is recorded
on the quorum disk.


If a two-copy mirrored volume is created with the format parameter, both copies are formatted
in parallel and the volume comes online when both operations are complete with the copies in
sync.

If mirrored volumes are expanded or shrunk, all of their copies are also expanded or shrunk.

If it is known that MDisk space (which is used for creating copies) is already formatted or if the
user does not require read stability, a “no synchronization” option can be selected that
declares the copies as “synchronized” (even when they are not).

To minimize the time that is required to resynchronize a copy that is out of sync, only the
256 KiB grains that were written to since the synchronization was lost are copied. This
approach is known as an incremental synchronization. Only the changed grains must be
copied to restore synchronization.
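
To illustrate the idea, the following Python sketch (not SVC code) tracks which 256 KiB grains were written while the copies were out of sync, so that only those grains need to be recopied during an incremental synchronization.

GRAIN_SIZE = 256 * 1024   # 256 KiB grain, as used for volume mirroring

class MirrorSyncState:
    """Illustrative dirty-grain tracking for incremental resynchronization."""

    def __init__(self, volume_size_bytes):
        grains = (volume_size_bytes + GRAIN_SIZE - 1) // GRAIN_SIZE
        self.dirty = [False] * grains   # one bit per grain in the real product

    def record_write(self, offset, length):
        """Mark every grain touched by a write while the copies are out of sync."""
        first = offset // GRAIN_SIZE
        last = (offset + length - 1) // GRAIN_SIZE
        for g in range(first, last + 1):
            self.dirty[g] = True

    def grains_to_copy(self):
        """Only the grains written since synchronization was lost are recopied."""
        return [g for g, d in enumerate(self.dirty) if d]

state = MirrorSyncState(volume_size_bytes=1 * 1024**3)   # 1 GiB volume
state.record_write(offset=0, length=4096)
state.record_write(offset=300 * 1024, length=8192)
print(state.grains_to_copy())   # -> [0, 1]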

Important: An unmirrored volume can be migrated from one location to another by adding
a second copy to the wanted destination, waiting for the two copies to synchronize, and
then removing the original copy 0. This operation can be stopped at any time. The two
copies can be in separate storage pools with separate extent sizes.

Where there are two copies of a volume, one copy is known as the primary copy. If the
primary is available and synchronized, reads from the volume are directed to it. The user can
select the primary when the volume is created or can change it later.

Placing the primary copy on a high-performance controller maximizes the read performance
of the volume.

Write I/O operations data flow with a mirrored volume


For write I/O operations to a mirrored volume, the SVC preferred node definition, with the
multipathing driver on the host, is used to determine the preferred path. The host routes the
I/Os through the preferred path, and the corresponding node is responsible for further
destaging written data from cache to both volume copies. Figure 2-10 shows the data flow for
write I/O processing when volume mirroring is used.

Figure 2-10 Data flow for write I/O processing in a mirrored volume in the SVC

As shown in Figure 2-10, all the writes are sent by the host to the preferred node for each
volume (1); then, the data is mirrored to the cache of the partner node in the I/O Group (2),
and acknowledgment of the write operation is sent to the host (3). The preferred node then
destages the written data to the two volume copies (4).


With version 7.3, the cache architecture changed from an upper-cache design to a two-layer
cache design. With this change, the data is only written once and is then directly destaged
from the controller to the locally attached disk system. Figure 2-11 shows the data flow in a
stretched environment.

(The figure shows the preferred and non-preferred nodes of an I/O Group at Site 1 and Site 2:
write data enters the upper cache (UCA) of the preferred node, is replicated once across the
inter-site link to the partner node, and is then destaged through the lower caches (LCA) of
each node to the volume copies on the storage at Site 1 and Site 2.)


Figure 2-11 Design of an enhanced stretched cluster

For more information about the change, see Chapter 6 of IBM System Storage SAN Volume
Controller and Storwize V7000 Best Practices and Performance Guidelines, SG24-7521.

A volume with copies can be checked to see whether all of the copies are identical or
consistent. If a medium error is encountered while it is reading from one copy, it is repaired by
using data from the other copy. This consistency check is performed asynchronously with
host I/O.

Important: Mirrored volumes can be taken offline if there is no quorum disk available. This
behavior occurs because the synchronization status for mirrored volumes is recorded on
the quorum disk.

Mirrored volumes use bitmap space at a rate of 1 bit per 256 KiB grain, which translates to
1 MiB of bitmap space supporting 2 TiB of mirrored volumes. The default allocation of bitmap
space is 20 MiB, which supports 40 TiB of mirrored volumes. If all 512 MiB of variable bitmap
space is allocated to mirrored volumes, 1 PiB of mirrored volumes can be supported.

Table 2-3 shows you the bitmap space default configuration.

Table 2-3 Bitmap space default configuration

Copy service       Minimum     Default     Maximum     Minimum functionality (1)
                   allocated   allocated   allocated   when using the default values
                   bitmap      bitmap      bitmap
                   space       space       space

Remote copy (2)    0           20 MiB      512 MiB     40 TiB of remote mirroring volume
                                                       capacity

FlashCopy (3)      0           20 MiB      2 GiB       10 TiB of FlashCopy source volume
                                                       capacity
                                                       5 TiB of incremental FlashCopy
                                                       source volume capacity

Volume mirroring   0           20 MiB      512 MiB     40 TiB of mirrored volumes

RAID               0           40 MiB      512 MiB     80 TiB array capacity using
                                                       RAID 0, 1, or 10
                                                       80 TiB array capacity in a
                                                       three-disk RAID 5 array
                                                       Slightly less than 120 TiB array
                                                       capacity in a five-disk RAID 6 array

1. The actual amount of functionality might increase based on settings such as grain
   size and strip size. RAID is subject to a 15% margin of error.
2. Remote copy includes Metro Mirror, Global Mirror, and active-active relationships.
3. FlashCopy includes the FlashCopy function, Global Mirror with change volumes, and
   active-active relationships.

The sum of all bitmap memory allocation for all functions except FlashCopy must not exceed
552 MiB.
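
The ratios in Table 2-3 follow directly from the rule of 1 bit of bitmap space per 256 KiB grain. The following Python sketch (illustrative only) reproduces the arithmetic for volume mirroring.

GRAIN = 256 * 1024            # 1 bit of bitmap space covers one 256 KiB grain

def mirrored_capacity_for_bitmap(bitmap_mib):
    """Mirrored volume capacity (in TiB) that a given bitmap allocation covers."""
    bits = bitmap_mib * 1024 * 1024 * 8
    return bits * GRAIN / 1024**4

print(mirrored_capacity_for_bitmap(1))     # 1 MiB            ->    2.0 TiB
print(mirrored_capacity_for_bitmap(20))    # 20 MiB (default) ->   40.0 TiB
print(mirrored_capacity_for_bitmap(512))   # 512 MiB          -> 1024.0 TiB (1 PiB)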

2.5.5 Thin-provisioned volumes


Volumes can be configured to be thin-provisioned or fully allocated. With respect to
application reads and writes, a thin-provisioned volume behaves as though it were fully
allocated. When a thin-provisioned volume is created, the user specifies two capacities: the
real physical capacity that is allocated to the volume from the storage pool, and its virtual
capacity that is available to the host. In a fully allocated volume, these two values are the same.

Therefore, the real capacity determines the quantity of MDisk extents that is initially allocated
to the volume. The virtual capacity is the capacity of the volume that is reported to all other
SVC components (for example, FlashCopy, cache, and remote copy) and to the host servers.

The real capacity is used to store the user data and the metadata for the thin-provisioned
volume. The real capacity can be specified as an absolute value or a percentage of the virtual
capacity.

Thin-provisioned volumes can be used as volumes that are assigned to the host, by
FlashCopy to implement thin-provisioned FlashCopy targets, and with the mirrored volumes
feature.

When a thin-provisioned volume is initially created, a small amount of the real capacity is
used for initial metadata. When I/Os are written to grains of the thin volume that were not
previously written, grains of the real capacity are used to store the metadata and the actual
user data. When I/Os are written to grains that were previously written, the existing grain is
updated in place.


The grain size is defined when the volume is created. The grain size can be 32 KiB, 64 KiB,
128 KiB, or 256 KiB. The default grain size is 256 KiB, which is the recommended option. If
you select 32 KiB for the grain size, the volume size cannot exceed 260 TiB. The grain size
cannot be changed after the thin-provisioned volume is created. Generally, smaller grain sizes
save space, but they require more metadata access, which can adversely affect performance.
If you do not use the thin-provisioned volume as a FlashCopy source or target volume, use
256 KiB to maximize performance. If you use the thin-provisioned volume as a FlashCopy
source or target volume, specify the same grain size for the volume and for the FlashCopy
function.

Figure 2-12 shows the thin-provisioning concept.

Figure 2-12 Conceptual diagram of thin-provisioned volume

Thin-provisioned volumes store user data and metadata. Each grain of data requires
metadata to be stored. Therefore, the I/O rates that are obtained from thin-provisioned
volumes are less than the I/O rates that are obtained from fully allocated volumes.

The metadata storage overhead is never greater than 0.1% of the user data. The overhead is
independent of the virtual capacity of the volume. If you are using thin-provisioned volumes in
a FlashCopy map, use the same grain size as the map grain size for the best performance. If
you are using the thin-provisioned volume directly with a host system, use a small grain size.

Thin-provisioned volume format: Thin-provisioned volumes do not need formatting. A
read I/O, which requests data from deallocated data space, returns zeros. When a write I/O
causes space to be allocated, the grain is zeroed before use. However, if the node is a
2145-DH8, space is not allocated for a host write that contains all zeros. The formatting
flag is ignored when a thin volume is created or when the real capacity is expanded; the
virtualization component never formats the real capacity of a thin-provisioned volume.

The real capacity of a thin volume can be changed if the volume is not in image mode.
Increasing the real capacity allows a larger amount of data and metadata to be stored on the
volume. Thin-provisioned volumes use the real capacity that is provided in ascending order as
new data is written to the volume. If the user initially assigns too much real capacity to the
volume, the real capacity can be reduced to free storage for other uses.

A thin-provisioned volume can be configured to autoexpand. This feature causes the SVC to
automatically add a fixed amount of more real capacity to the thin volume as required.
Therefore, autoexpand attempts to maintain a fixed amount of unused real capacity for the
volume, which is known as the contingency capacity.

The contingency capacity is initially set to the real capacity that is assigned when the volume
is created. If the user modifies the real capacity, the contingency capacity is reset to be the
difference between the used capacity and real capacity.

A volume that is created without the autoexpand feature, and therefore has a zero
contingency capacity, goes offline when the real capacity is used and it must expand.

Autoexpand does not cause the real capacity to grow much beyond the virtual capacity. The
real capacity can be manually expanded to more than the maximum that is required by the
current virtual capacity, and the contingency capacity is recalculated.
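
The following Python sketch is a simplified model (not the SVC implementation) of how autoexpand and the contingency capacity interact; the capacities are in arbitrary units.

class ThinVolume:
    """Illustrative autoexpand behavior for a thin-provisioned volume."""

    def __init__(self, virtual, real, autoexpand=True):
        self.virtual = virtual            # capacity reported to hosts
        self.real = real                  # capacity allocated from the pool
        self.used = 0                     # real capacity holding data and metadata
        self.autoexpand = autoexpand
        self.contingency = real           # unused real capacity the volume tries to keep

    def write(self, new_data):
        self.used += new_data
        if self.used > self.real:
            if not self.autoexpand:
                raise RuntimeError("volume goes offline: real capacity exhausted")
            # grow the real capacity, but not much beyond what the virtual capacity needs
            self.real = min(self.used + self.contingency, self.virtual)

    def set_real(self, real):
        """Manually changing the real capacity resets the contingency capacity."""
        self.real = real
        self.contingency = self.real - self.used

vol = ThinVolume(virtual=100, real=10)
vol.write(8)
vol.write(8)          # autoexpand keeps roughly 10 units of headroom
print(vol.real)       # -> 26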

To support the auto expansion of thin-provisioned volumes, the storage pools from which they
are allocated have a configurable capacity warning. When the used capacity of the pool
exceeds the warning capacity, a warning event is logged. For example, if a warning of 80% is
specified, the event is logged when 20% of the free capacity remains.

A thin-provisioned volume can be converted nondisruptively to a fully allocated volume (or
vice versa) by using the volume mirroring function. For example, you can add a
thin-provisioned copy to a fully allocated primary volume and then remove the fully allocated
copy from the volume after they are synchronized.

The fully allocated-to-thin-provisioned migration procedure uses a zero-detection algorithm
so that grains that contain all zeros do not cause any real capacity to be used.

2.5.6 Volume I/O governing


I/O operations can be constrained so that the maximum amount of I/O activity that a host can
perform on a volume can be limited over a specific period.

This governing feature can be used to satisfy a quality of service (QoS) requirement or a
contractual obligation (for example, if a client agrees to pay for I/Os performed, but does not
pay for I/Os beyond a certain rate). Only Read, Write, and Verify commands that access the
physical medium are subject to I/O governing.

The governing rate can be set in I/Os per second or MBps. The throttle value can be altered
by running the chvdisk command and specifying the -rate parameter.

I/O governing: I/O governing on Metro Mirror or Global Mirror secondary volumes does
not affect the data copy rate from the primary volume. Governing has no effect on
FlashCopy or data migration I/O rates.


An I/O budget is expressed as a number of I/Os (or MBs) over a minute. The budget is evenly
divided among all SVC nodes that service that volume, which means among the nodes that
form the I/O Group of which that volume is a member.

The algorithm operates at two levels of policing. While a volume on each SVC node receives I/O
at a rate lower than the governed level, no governing is performed. However, when the I/O
rate exceeds the defined threshold, the policy is adjusted. A check is made every minute to
see that each node is receiving I/O below the threshold level. Whenever this check shows that
the host exceeded its limit on one or more nodes, policing begins for new I/Os.

The following conditions exist while policing is in force:


• A budget allowance is calculated for a 1-second period.
• I/Os are counted over a period of a second.
• If I/Os are received in excess of the 1-second budget on any node in the I/O Group, those
  I/Os and later I/Os are pended.
• When the second expires, a new budget is established, and any pended I/Os are redriven
  under the new budget.

This algorithm might cause I/O to backlog in the front end, which might eventually cause a
Queue Full Condition to be reported to hosts that continue to flood the system with I/O. If a
host stays within its 1-second budget on all nodes in the I/O Group for 1 minute, the policing is
relaxed and monitoring takes place over the 1-minute period as before.
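
The following Python sketch (illustrative only, not the SVC implementation) models the basic idea of the per-node, per-second budget: I/Os within the budget start immediately, and I/Os beyond it are pended and redriven when the next 1-second budget is established.

class VolumeThrottle:
    """Illustrative per-node I/O governing with a 1-second budget."""

    def __init__(self, iops_per_minute, nodes_in_io_group=2):
        # the budget is divided evenly among the nodes that service the volume
        self.per_node_per_second = iops_per_minute / nodes_in_io_group / 60
        self.window_count = 0
        self.pending = []

    def submit(self, io):
        if self.window_count < self.per_node_per_second:
            self.window_count += 1
            return "started"
        self.pending.append(io)          # I/Os beyond the budget are pended
        return "pended"

    def next_second(self):
        """A new 1-second budget is established; pended I/Os are redriven first."""
        self.window_count = 0
        redriven, self.pending = self.pending, []
        for io in redriven:
            self.submit(io)

throttle = VolumeThrottle(iops_per_minute=1200)   # 10 I/Os per node per second
results = [throttle.submit(i) for i in range(12)]
print(results.count("started"), results.count("pended"))   # -> 10 2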

2.6 HyperSwap
The IBM HyperSwap function is a high availability feature that provides dual-site, active-active
access to a volume. Active-active volumes have a copy at one site and a copy at another site.
Data that is written to the volume is automatically sent to both copies so that either site can
provide access to the volume if the other site becomes unavailable. Active-active
relationships are made between the copies at each site. These relationships automatically
run and switch direction according to which copy or copies are online and up to date.
Relationships can be grouped into consistency groups, just like Metro Mirror and Global Mirror
relationships. The consistency groups fail over consistently as a group based on the state of
all copies in the group. An image that can be used for disaster recovery is maintained at
each site.

When the system topology is set to hyperswap, each node, controller, and host map object in
the system configuration must have a site attribute set to 1 or 2. Both nodes of an I/O group
must be at the same site. This site must be the same site as the site of the controllers that
provide the managed disks to that I/O group. When managed disks are added to storage
pools, their site attributes must match. This requirement ensures that each copy in the
active-active relationship is fully independent and is located at a distinct site.

2.7 Distributed RAID


With V7.6, we introduced the Distributed RAID (DRAID) function.

When you plan your network, consideration must be given to the type of RAID configuration
that you use. SAN Volume Controller supports either a non-distributed array or a distributed
array configuration. We discuss DRAID in greater depth in Implementing the IBM Storwize
V7000 and IBM Spectrum Virtualize V7.6, SG24-7938.


2.7.1 Non-distributed array


An array can contain 2 - 16 drives; several arrays create the capacity for a pool. For
redundancy, spare drives ("hot spares") are allocated to assume read/write operations if any
of the other drives fail. The rest of the time, the spare drives are idle and do not process
requests for the system. When a member drive fails in the array, the data can only be
recovered onto the spare as fast as that drive can write the data. As a result, the load on the
remaining member drives is increased by up to 100%. Because of this bottleneck, rebuilding
the data can take many hours as the system tries to balance host and rebuild workload. The
latency of I/O to the rebuilding array is affected for this entire time. Because volume data is
striped across MDisks, all volumes are affected during the time it takes to rebuild the drive.

2.7.2 Distributed array


Distributed array configurations may contain between 4 - 128 drives. Distributed arrays
remove the need for separate drives that are idle until a failure occurs. Instead of allocating
one or more drives as spares, the spare capacity is distributed over specific rebuild areas
across all the member drives. Data can be copied faster to the rebuild area and redundancy is
restored much more rapidly. Additionally, as the rebuild progresses, the performance of the
pool is more uniform because all of the available drives are used for every volume extent.
After the failed drive is replaced, data is copied back to the drive from the distributed spare
capacity. Unlike "hot spare" drives, read/write requests are processed on other parts of the
drive that are not being used as rebuild areas. The number of rebuild areas is based on the
width of the array. The size of the rebuild area determines how many times the distributed
array can recover failed drives without risking becoming degraded. For example, a distributed
array that uses RAID 6 drives can handle two concurrent failures. After the failed drives have
been rebuilt, the array can tolerate another two drive failures. If all of the rebuild areas are
used to recover data, the array becomes degraded on the next drive failure.

The concept of distributed RAID is to distribute an array with width W across a set of X drives.
For example, you might have a 2+P RAID-5 array that is distributed across a set of 40 drives.
The array type and width define the level of redundancy. In the previous example, there is a
33% capacity overhead for parity. If an array stride needs to be rebuilt, two component strips
must be read to rebuild the data for the third component. The set size defines how many
drives are used by the distributed array. It is obviously a requirement that performance and
usable capacity scale according to the number of drives in the set. The other key feature of a
distributed array is that instead of having a hot spare, the set includes spare strips that are
also distributed across the set of drives. The data and spares are distributed such that if one
drive in the set fails, redundancy can be restored by rebuilding data onto the spare strips at a
rate much greater than the rate of a single component.
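
The following Python sketch (illustrative only) reproduces the arithmetic behind the 2+P example: the parity overhead that the array type and width define, the approximate number of drives that can supply data in parallel during a rebuild, and the number of drive failures that the rebuild areas can absorb before the array risks becoming degraded.

def draid_overview(width, parity_strips, set_size, rebuild_areas):
    """Illustrative numbers for a distributed array.

    width: data + parity strips per stripe (for example, 3 for a 2+P RAID 5 stripe)
    parity_strips: 1 for RAID 5, 2 for RAID 6
    set_size: number of physical drives in the distributed array
    rebuild_areas: distributed spare capacity, in drive equivalents (assumption)
    """
    parity_overhead = parity_strips / width
    rebuild_sources = set_size - 1          # roughly, all surviving drives can be read
    return {
        "parity_overhead_pct": round(parity_overhead * 100, 1),
        "rebuild_read_fanout": rebuild_sources,
        "drive_failures_before_degraded": rebuild_areas,
    }

# The 2+P RAID 5 example from the text, distributed across 40 drives
print(draid_overview(width=3, parity_strips=1, set_size=40, rebuild_areas=2))
# -> {'parity_overhead_pct': 33.3, 'rebuild_read_fanout': 39,
#     'drive_failures_before_degraded': 2}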

Distributed arrays are used to create large-scale internal managed disks. They can manage 4
- 128 drives and contain their own rebuild areas to accomplish error recovery when drives fail.
As a result, rebuild times are dramatically reduced, which lowers the exposure volumes have
to the extra load of recovering redundancy. Because the capacity of these managed disks is
potentially so great, when they are configured in the system the overall limits change in order
to allow them to be virtualized. For every distributed array, the space for 16 MDisk extent
allocations is reserved and therefore 15 other MDisk identities are removed from the overall
pool of 4096. Distributed arrays also aim to provide a uniform performance level. A distributed
array can contain multiple drive classes if the drives are similar (for example, the drives have
the same attributes, but the capacities are larger) to achieve this performance. All the drives
in a distributed array must come from the same I/O group to maintain a simple configuration
model.

The key benefits of a distributed array are:


• Quicker rebuild times with less impact to host I/O.
• More user flexibility in defining how many drives are used by an array (for example, a user
  can create 9+P arrays with 24 drives without having four drives left unused).
• The use of rebuild areas means that there are no idle spindles in the system, so
  performance improves slightly.

Supported RAID levels


These are the supported RAID levels:

Distributed RAID 5
Distributed RAID 5 arrays stripe data over the member drives with one parity strip on every
stripe. These distributed arrays can support 4 - 128 drives. RAID 5 distributed arrays can
tolerate the failure of one member drive.

Distributed RAID 6
Distributed RAID 6 arrays stripe data over the member drives with two parity strips on every
stripe. These distributed arrays can support 6 - 128 drives. A RAID 6 distributed array can
tolerate any two concurrent member drive failures.

2.7.3 Example of a distributed array


Figure 2-13 on page 41 shows an example of a distributed array that is configured with RAID
level 6; all of the drives in the array are active. The rebuild areas are distributed across all of
the drives and the drive count includes all of the drives.

In Figure 2-13 on page 41 the numbers represent:


1 An active drive
2 Rebuild areas, which are distributed across all drives
3 Drive count, which includes all drives
4 Stripes of data (2 stripes are shown)
5 Stripe width
6 Pack, which equals the drive count that is multiplied by stripe width
7 Additional packs in the array (not shown)

Figure 2-13 on page 41 shows a distributed RAID 6 array.


Figure 2-13 Distributed RAID 6 array

Figure 2-14 on page 42 shows a distributed array that contains a failed drive. To recover data,
the data is read from multiple drives. The recovered data is then written to the rebuild areas,
which are distributed across all of the drives in the array. The remaining rebuild areas are
distributed across all drives.

In Figure 2-14 on page 42 the numbers represent:


1 Failed drive
2 Rebuild areas, which are distributed across all drives
3 Remaining rebuild areas rotate across each remaining drive
4 Additional packs in the array (not shown)

Figure 2-14 shows distributed RAID 6 array with a failed drive.


Figure 2-14 Distributed RAID 6 array with a failed drive

2.8 Encryption
The SAN Volume Controller 2145-DH8 system provides optional encryption of data at rest, which
protects against the potential exposure of sensitive user data and user metadata that is
stored on discarded, lost, or stolen storage devices. Encryption of system data and system
metadata is not required, so system data and metadata are not encrypted.

2.8.1 General encryption concepts and terms


Encryption-capable refers to the ability of the system to optionally encrypt user data and
metadata by using a secret key.

Encryption-disabled describes a system where no secret key is configured.

Encryption-enabled describes a system where a secret key is configured and used. The key
must be used to unlock the encrypted data enabling access control.

Access-control-enabled describes an encryption-enabled system that is configured so that an
access key must be provided to authenticate with an encrypted entity, such as a secret key or
flash module, to unlock and operate that entity. The system permits access control
enablement only when it is encryption-enabled. A system that is encryption-enabled can
optionally also be access-control-enabled to provide functional security.

Protection-enabled describes a system that is both encryption-enabled and
access-control-enabled. An access key must be provided to unlock the system so that it can
transparently perform all required encryption-related functionality, such as encrypt on write
and decrypt on read.

The Protection Enablement Process (PEP) transitions the system from a state that is not
protection-enabled to a state that is protection-enabled. The PEP requires that the customer
provide a secret key to access the system. This secret key must be resiliently stored and
backed up externally to the system; for example, it may be stored on USB flash drives. PEP is
not merely activating a feature using the management GUI or CLI. To avoid loss of data that
was written to the system before the PEP occurs, the customer must move all of the data to
be retained off of the system before the PEP is initiated. After PEP has completed, the
customer must move the data back onto the system. The PEP is performed during the system
initialization process, if encryption is activated. The system does not support Application
Managed Encryption (AME).

Application-transparent encryption is an attribute of the encryption architecture, referring to
the fact that applications are not aware that encryption and protection are occurring. This is in
contrast to AME, which is not transparent to applications, and where an application must
serve keys to a storage device.

2.8.2 Accessing an encrypted system


Planning for encryption involves purchasing a licensed function and then activating and
enabling the function on the system.

To encrypt data that is stored on drives, the nodes capable of encryption must be licensed
and configured to use encryption. When encryption is activated and enabled on the system,
valid encryption keys must be present on the system when the system unlocks the drives or
the user generates a new key. The encryption key must be stored on USB flash drives that
contain a copy of the key that was generated when encryption was enabled. Without these
keys, user data on the drives cannot be accessed.

Before activating and enabling encryption, you must determine the method of accessing key
information during times when the system requires an encryption key to be present. The
system requires an encryption key to be present during the following operations:
• System power-on
• System reboot
• User-initiated re-key operations

Several factors must be considered when planning for encryption:


• Physical security of the system
• Need and benefit of manually providing encryption keys when the system requires them
• Availability of key data
• Whether the encryption license is purchased, activated, and enabled on the system


2.8.3 Accessing key information on USB flash drives


There are two options for accessing key information on USB flash drives.

USB flash drives are inserted in the system at all times


If you want the system to restart automatically, a USB flash drive must be left inserted in all
the nodes on the system. This way all nodes have access to the encryption key when they
power on. This method requires that the physical environment where the system is located is
secure. If the location is secure, it prevents an unauthorized person from making copies of the
encryption keys, stealing the system, or accessing data that is stored on the system.

USB flash drives are never inserted into the system except as required
For the most secure operation, do not keep the USB flash drives inserted into the nodes on
the system. However, this method requires that you manually insert the USB flash drives that
contain copies of the encryption key in the nodes during operations which the system
requires an encryption key to be present. USB flash drives that contain the keys must be
stored securely to prevent theft or loss. During operations which the system requires an
encryption key to be present, the USB flash drives must be inserted manually into each node
so data can be accessed. After the system has completed unlocking the drives, the USB flash
drives must be removed and stored securely to prevent theft or loss.

2.8.4 Encryption technology


Data encryption is protected by the Advanced Encryption Standard (AES) algorithm that uses
a 256-bit symmetric encryption key in XTS mode, as defined in the IEEE 1619-2007 standard
as XTS-AES-256. That data encryption key is itself protected by a 256-bit AES key wrap
when stored in non-volatile form.
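
For illustration only, the following Python sketch uses the third-party cryptography package to encrypt a single 512-byte block with XTS-AES-256, with the tweak derived from the logical block address. It demonstrates the standard algorithm, not the SVC's internal implementation or key management.

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# XTS-AES-256 uses a 512-bit key: two 256-bit AES keys (data key and tweak key)
key = os.urandom(64)

def encrypt_block(lba, data):
    """Encrypt one logical block; the tweak is derived from the block address."""
    tweak = lba.to_bytes(16, "little")
    encryptor = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
    return encryptor.update(data) + encryptor.finalize()

ciphertext = encrypt_block(lba=42, data=b"\x00" * 512)
print(len(ciphertext))   # -> 512; XTS preserves the block size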

2.9 iSCSI overview


iSCSI is an alternative means of attaching hosts to the SVC. All communications with
back-end storage subsystems and with other SVC systems occur through FC only.

The iSCSI function is a software function that is provided by the SVC code, not hardware.

In the simplest terms, iSCSI allows the transport of SCSI commands and data over a TCP/IP
network, which is based on IP routers and Ethernet switches. iSCSI is a block-level protocol
that encapsulates SCSI commands into TCP/IP packets; therefore, it uses an existing IP
network instead of requiring expensive FC HBAs and a SAN fabric infrastructure.

A pure SCSI architecture is based on the client/server model. A client (for example, server or
workstation) starts read or write requests for data from a target server (for example, a data
storage system). Commands, which are sent by the client and processed by the server, are
put into the Command Descriptor Block (CDB). The server runs a command and completion
is indicated by a special signal alert.

The major functions of iSCSI include encapsulation and the reliable delivery of CDB
transactions between initiators and targets through the TCP/IP network, especially over a
potentially unreliable IP network.


The following concepts of names and addresses are carefully separated in iSCSI:
• An iSCSI name is a location-independent, permanent identifier for an iSCSI node. An
  iSCSI node has one iSCSI name, which stays constant for the life of the node. The terms
  initiator name and target name also refer to an iSCSI name.
• An iSCSI address specifies not only the iSCSI name of an iSCSI node, but a location of
  that node. The address consists of a host name or IP address, a TCP port number (for the
  target), and the iSCSI name of the node. An iSCSI node can have any number of
  addresses, which can change at any time, particularly if they are assigned by way of
  Dynamic Host Configuration Protocol (DHCP). An SVC node represents an iSCSI node
  and provides statically allocated IP addresses.

Each iSCSI node, that is, an initiator or target, has a unique IQN, which can be up to
255 bytes long. The IQN is formed according to the rules that were adopted for Internet nodes.

The iSCSI qualified name format is defined in RFC3720 and contains (in order) the following
elements:
• The string iqn
• A date code that specifies the year and month in which the organization registered the
  domain or subdomain name that is used as the naming authority string
• The organizational naming authority string, which consists of a valid, reversed domain or a
  subdomain name
• Optional: A colon (:), followed by a string of the assigning organization’s choosing, which
  must make each assigned iSCSI name unique

For the SVC, the IQN for its iSCSI target is specified as shown in the following example:
iqn.1986-03.com.ibm:2145.<clustername>.<nodename>

On a Microsoft Windows server, the IQN (that is, the name for the iSCSI initiator), can be
defined as shown in the following example:
iqn.1991-05.com.microsoft:<computer name>
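
The following Python sketch (illustrative only, with hypothetical cluster and node names) shows how the SVC target IQN is composed from the fixed prefix plus the cluster and node names, which is why changing those names also changes the IQN.

def svc_target_iqn(cluster_name, node_name):
    """Compose an SVC iSCSI target IQN from the cluster and node names
    (format: iqn.1986-03.com.ibm:2145.<clustername>.<nodename>)."""
    return "iqn.1986-03.com.ibm:2145.{}.{}".format(cluster_name, node_name)

# Hypothetical names, for illustration only
print(svc_target_iqn("itsosvc1", "node1"))
# -> iqn.1986-03.com.ibm:2145.itsosvc1.node1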

The IQNs can be abbreviated by using a descriptive name, which is known as an alias. An
alias can be assigned to an initiator or a target. The alias is independent of the name and
does not have to be unique. Because it is not unique, the alias must be used in a purely
informational way. It cannot be used to specify a target at login or during authentication.
Targets and initiators can have aliases.

An iSCSI name provides the correct identification of an iSCSI device irrespective of its
physical location. The IQN is an identifier, not an address.

Caution: Before you change system or node names for an SVC system that has servers
that are connected to it by way of iSCSI, be aware that because the system and node
name are part of the SVC’s IQN, you can lose access to your data by changing these
names. The SVC GUI displays a warning, but the CLI does not display a warning.

The iSCSI session, which consists of a login phase and a full feature phase, is completed with
a special command.

The login phase of iSCSI is identical to the FC port login process (PLOGI). It is used to
adjust various parameters between two network entities and to confirm the access rights of
an initiator.


If the iSCSI login phase is completed successfully, the target confirms the login for the
initiator; otherwise, the login is not confirmed and the TCP connection breaks.

When the login is confirmed, the iSCSI session enters the full feature phase. If more than one
TCP connection was established, iSCSI requires that each command and response pair must
go through one TCP connection. Therefore, each separate read or write command is carried
out without the necessity to trace each request for passing separate flows. However, separate
transactions can be delivered through separate TCP connections within one session.

Figure 2-15 shows an overview of the various block-level storage protocols and the position of
the iSCSI layer.

Figure 2-15 Overview of block-level protocol stacks

2.9.1 Use of IP addresses and Ethernet ports


The SVC node hardware has three Ethernet ports as standard. The configuration details of
these three Ethernet ports can be displayed by the GUI or CLI.

The following types of IP addresses are available:


• System management IP address
  This address is used for access to the SVC CLI, SVC GUI, and to the Common
  Information Model Object Manager (CIMOM) that runs on the SVC configuration node.
  Only the configuration node presents a system management IP address at any one time.
  Two system management IP addresses, one for each of the two Ethernet ports, are
  available. Configuration node failover is also supported.
• Port IP address
  This address is used to perform iSCSI I/O to the system. Each node can have a port
  IP address for each of its ports.
• Service IP address
  The service IP addresses are used to access the service assistant tool, which you can use
  to complete service-related actions on the node. All nodes in the system have different
  service addresses. A node that is operating in service state does not operate as a
  member of the system.

SVC nodes have up to six standard Ethernet ports. These ports provide 1 Gbps support or,
with the optional Ethernet card, 10 Gbps support. System management is possible only over
the 1 Gbps ports.

Figure 2-16 shows an overview of the IP addresses on an SVC node port and how these IP
addresses are moved between the nodes of an I/O Group.


The management IP addresses and the iSCSI target IP addresses fail over to the partner
node N2 if node N1 fails (and vice versa). The iSCSI target IPs fail back to their corresponding
ports on node N1 when node N1 is running again.

Figure 2-16 SVC IP address overview

It is a preferred practice to keep all of the eth0 ports on all of the nodes in the system on the
same subnet. The same practice applies for the eth1 ports; however, it can be a separate
subnet to the eth0 ports.

Because of IQN limits, you can configure a maximum of 512 iSCSI hosts per I/O Group, and a
maximum of 2048 host objects for the complete SVC system.

2.9.2 iSCSI volume discovery


The iSCSI target implementation on the SVC nodes uses the hardware offload features that
are provided by the node’s hardware. This implementation results in a minimal effect on the
node’s CPU load for handling iSCSI traffic, and simultaneously delivers excellent throughput
(up to 95 MBps user data) on each of the three LAN ports. The use of jumbo frames, which
are maximum transmission unit (MTU) sizes greater than 1,500 bytes, is a preferred practice.

Hosts can discover volumes through one of the following mechanisms:


• Internet Storage Name Service (iSNS)
  The SVC can register with an iSNS name server; you set the IP address of this server by
  using the svctask chcluster command. A host can then query the iSNS server for
  available iSCSI targets.
• Service Location Protocol (SLP)
  The SVC node runs an SLP daemon, which responds to host requests. This daemon
  reports the available services on the node, such as the CIMOM service that runs on the
  configuration node. The iSCSI I/O service can now also be reported.


• iSCSI Send Target request
  The host can send a Send Target request by using the iSCSI protocol to the iSCSI TCP/IP
  port (port 3260).

2.9.3 iSCSI authentication


Authentication of the host server from the SVC system is optional and disabled, by default.
The user can choose to enable Challenge Handshake Authentication Protocol (CHAP)
authentication, which involves sharing a CHAP secret between the SVC system and the host.
The SVC as authenticator sends a challenge message to the specific server (peer). The
server responds with a value that is checked by the SVC. If there is a match, the SVC
acknowledges the authentication. If not, the SVC ends the connection and does not allow any
I/O to volumes.

A CHAP secret can be assigned to each SVC host object. The host must then use CHAP
authentication to begin a communications session with a node in the system. A CHAP secret
can also be assigned to the system.

Volumes are mapped to hosts, and LUN masking is applied by using the same methods that
are used for FC LUNs.

Because iSCSI can be used in networks where data security is a concern, the specification
allows for separate security methods. For example, you can set up security through a method,
such as IPSec, which is not apparent for higher levels, such as iSCSI, because it is
implemented at the IP level. For more information about securing iSCSI, see Securing Block
Storage Protocols over IP, RFC3723, which is available at this website:
http://tools.ietf.org/html/rfc3723

2.9.4 iSCSI multipathing


A multipathing driver enables the host to send commands over multiple paths to the same
volume on the SVC. A fundamental multipathing difference exists between FC and
iSCSI environments.

If FC-attached hosts see their FC target and volumes go offline, for example, because of a
problem in the target node, its ports, or the network, the host must use a separate SAN path
to continue I/O. Therefore, a multipathing driver is always required on the host.

iSCSI-attached hosts see a pause in I/O when a (target) node is reset but (and this is the
key difference) the host is reconnected to the same IP target, which reappears after a short
period, and its volumes continue to be available for I/O. iSCSI therefore allows failover without
host multipathing. To achieve this failover, the partner node in the I/O Group takes over the
port IP addresses and iSCSI names of the failed node.

Be aware: With the iSCSI implementation in the SVC, an IP address failover/failback
between partner nodes of an I/O Group takes place only in cases of a planned or
unplanned node restart (node offline). When the partner node returns to online status,
there is a delay of 5 minutes before failback of the IP addresses and iSCSI names occurs.

A host multipathing driver for iSCSI is required if you want the following capabilities:
• Protecting a server from network link failures
• Protecting a server from network failures if the server is connected through two separate
  networks
• Providing load balancing on the server’s network links

2.10 Advanced Copy Services overview


Advanced Copy Services is a class of functionality of storage arrays and storage devices that
allows various forms of block-level data duplication. By using Advanced Copy Services, you
can make mirror images of part or all of your data, including between distant sites. This
function has the following benefits and uses:
• Facilitating disaster recovery
• Building reporting instances to offload billing activities from production databases
• Building quality assurance systems on regular intervals for regression testing
• Offloading offline backups from production systems
• Building test systems by using production data

The SVC supports the following copy services:


• Synchronous remote copy (Metro Mirror)
• Asynchronous remote copy (Global Mirror)
• Asynchronous remote copy with Change Volumes (Global Mirror)
• Point-in-Time copy (FlashCopy)
• Data migration (Image mode migration and volume mirroring migration)

Copy services functions are implemented within a single SVC system (FlashCopy and image
mode migration), or between SVC systems, or between SVC and Storwize systems (Metro
Mirror and Global Mirror). To use the Metro Mirror and Global Mirror functions, you must have
the remote copy license installed on each side.

You can create partnerships with the SVC and Storwize systems to allow Metro Mirror and
Global Mirror to operate between the two systems. To create these partnerships, both
clustered systems must be at version 6.3.0 or later.

A clustered system is in one of two layers: the replication layer or the storage layer. The SVC
system is always in the replication layer. The Storwize system is in the storage layer by
default, but the system can be configured to be in the replication layer instead.

Figure 2-17 shows an example of the layers in an SVC and Storwize clustered-system
partnership.


Figure 2-17 Replication between SVC and Storwize systems

Within the SVC, both intracluster copy services functions (FlashCopy and image mode
migration) operate at the block level. Intercluster functions (Global Mirror and Metro Mirror)
operate at the volume layer. A volume is the container that is used to present storage to host
systems. Operating at this layer allows the Advanced Copy Services functions to benefit from
caching at the volume layer and helps facilitate the asynchronous functions of Global Mirror
and lessen the effect of synchronous Metro Mirror.

Operating at the volume layer also allows Advanced Copy Services functions to operate
above and independently of the function or characteristics of the underlying disk subsystems
that are used to provide storage resources to an SVC system. Therefore, if the physical
storage is virtualized with an SVC or Storwize and the backing array is supported by the SVC
or Storwize, you can use disparate backing storage.

FlashCopy: Although FlashCopy operates at the block level, this level is the block level of
the SVC, so the physical backing storage can be anything that the SVC supports. However,
performance is limited to the slowest performing storage that is involved in FlashCopy.

2.10.1 Synchronous and asynchronous remote copy


Global Mirror and Metro Mirror are implemented at the volume layer within the SVC. They are
collectively referred to as remote copy. In general, the purpose of both functions is to maintain
two copies of data. Often, the two copies are separated by distance, but not necessarily. The
remote copy can be maintained in one of two modes: synchronous or asynchronous.

Metro Mirror is the IBM branded term for synchronous remote copy function. Global Mirror
is the IBM branded term for the asynchronous remote copy function.

Synchronous remote copy ensures that updates are physically committed (not in volume
cache) in both the primary and the secondary SVC clustered systems before the application
considers the updates complete. Therefore, the secondary SVC clustered system is fully
up-to-date if it is needed in a failover.


However, the application is fully exposed to the latency and bandwidth limitations of the
communication link to the secondary system. In a truly remote situation, this extra latency can
have a significantly adverse effect on application performance; therefore, a limit of 300
kilometers (~186 miles) exists on the distance for Metro Mirror. Distance adds latency of
approximately 5 microseconds per kilometer, which does not include the latency that is
added by the equipment in the path.
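
As a simple check of these numbers (fiber propagation only, excluding the equipment in the path), the following Python sketch computes the round-trip latency that distance alone adds to each write.

def added_round_trip_latency_ms(distance_km, us_per_km=5):
    """Extra round-trip latency from fiber distance alone (excluding equipment)."""
    one_way_us = distance_km * us_per_km
    return 2 * one_way_us / 1000

print(added_round_trip_latency_ms(300))   # 300 km -> 3.0 ms added per synchronous write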

The nature of synchronous remote copy is that latency for the distance and the equipment in
the path is added directly to your application I/O response times. The overall latency for a
complete round trip must not exceed 80 milliseconds.

With version 7.4, the maximum supported round-trip time was increased to 250 ms.
Figure 2-18 shows a list of the supported round-trip times.

Figure 2-18 Maximum round-trip times

Special configuration guidelines for SAN fabrics are used for data replication. The distance
and available bandwidth of the intersite links must be considered. For more information about
these guidelines, see the SVC Support Portal, which is available at this website:
https://ibm.biz/BdEzB5

In asynchronous remote copy, the application is provided acknowledgment that the write is
complete before the write is committed (written to backing storage) at the secondary site.
Therefore, on a failover, certain updates (data) might be missing at the secondary site.

The application must have an external mechanism for recovering the missing updates or
recovering to a consistent point (which is usually a few minutes in the past). This mechanism
can involve user intervention, but in most practical scenarios, it must be at least partially
automated.

Recovery on the secondary site involves assigning the Global Mirror target volumes from the
SVC target system to one or more hosts (depending on your disaster recovery design),
making those volumes visible on the hosts, and creating any required multipath device
definitions.

The application must then be started and a recovery procedure to either a consistent point in
time or recovery of the missing updates must be performed. For this reason, the initial state of
Global Mirror targets is called crash consistent. This term might sound daunting, but it merely
means that the data on the volumes appears to be in the same state as though an application
crash occurred.


In asynchronous remote copy with cycling mode (Change Volumes), changes are tracked and
copied to intermediate Change Volumes where needed. Changes are transmitted to the
secondary site periodically. The secondary volumes are much further behind the primary
volume, and more data must be recovered if there is a failover. Because the data transfer can
be smoothed over a longer time period, however, lower bandwidth is required to provide an
effective solution.

Because most applications, such as databases, have long had mechanisms for dealing with
this type of data state, recovery is a fairly mundane operation (depending on the application).
After this application recovery procedure is finished, the application starts normally.

RPO: When you are planning your Recovery Point Objective (RPO), you must account for
application recovery procedures, the length of time that they take, and the point to which
the recovery procedures can roll back data.

Although Global Mirror on an SVC can provide typically subsecond RPO times, the
effective RPO time can be up to 5 minutes or longer, depending on the application
behavior.

Most clients aim to automate the failover or recovery of the remote copy through failover
management software. The SVC provides Simple Network Management Protocol (SNMP)
traps and interfaces to enable this automation. IBM Support for automation is provided by IBM
Spectrum Control™.

The documentation is available at the IBM Spectrum Control Knowledge Center at this
website:
http://www.ibm.com/support/knowledgecenter/SS5R93/welcome

2.10.2 FlashCopy
FlashCopy is the IBM branded name for the point-in-time copy function, which is sometimes
called Time-Zero (T0) copy. This function makes a copy of the blocks on a source volume and
can duplicate them on 1 - 256 target volumes.

FlashCopy: When the multiple target capability of FlashCopy is used, if any other copy (C)
is started while an existing copy is in progress (B), C has a dependency on B. Therefore, if
you end B, C becomes invalid.

FlashCopy works by creating one or two (for incremental operations) bitmaps to track
changes to the data on the source volume. This bitmap is also used to present an image of
the source data at the point that the copy was taken to target hosts while the actual data is
being copied. This capability ensures that copies appear to be instantaneous.

Bitmap: In this context, bitmap refers to a special programming data structure that is used
to compactly store Boolean values. Do not confuse this definition with the popular image
file format.
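
The following Python sketch is a highly simplified model (not SVC code) of the copy-on-write behavior behind a FlashCopy bitmap: before a source grain is overwritten, its original contents are copied to the target, and reads of the target are redirected to the source for grains that were not copied yet. A 256 KiB grain size is assumed here for illustration.

GRAIN = 256 * 1024   # assumed FlashCopy grain size for this illustration

class FlashCopyMap:
    """Illustrative copy-on-write behavior behind a FlashCopy mapping."""

    def __init__(self, source, target):
        self.source, self.target = source, target
        grains = (len(source) + GRAIN - 1) // GRAIN
        self.copied = [False] * grains          # the FlashCopy bitmap

    def _ensure_copied(self, grain):
        if not self.copied[grain]:
            start = grain * GRAIN
            self.target[start:start + GRAIN] = self.source[start:start + GRAIN]
            self.copied[grain] = True

    def write_source(self, offset, data):
        """Old data is copied to the target before the source grain is overwritten."""
        self._ensure_copied(offset // GRAIN)
        self.source[offset:offset + len(data)] = data

    def read_target(self, offset, length):
        """Uncopied grains are still served from the source image as it was at T0."""
        grain = offset // GRAIN
        where = self.source if not self.copied[grain] else self.target
        return bytes(where[offset:offset + length])

src = bytearray(b"A" * GRAIN * 2)
tgt = bytearray(GRAIN * 2)
fc = FlashCopyMap(src, tgt)
fc.write_source(0, b"B" * 10)
print(fc.read_target(0, 4))    # -> b'AAAA' (the point-in-time image is preserved)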

If your FlashCopy targets have existing content, the content is overwritten during the copy
operation. Also, the “no copy” (copy rate 0) option, in which only changed data is copied,
overwrites existing content. After the copy operation starts, the target volume appears to have
the contents of the source volume as it existed at the point that the copy was started.
Although the physical copy of the data takes an amount of time that varies based on system
activity and configuration, the resulting data at the target appears as though the copy was
made instantaneously.

FlashCopy permits the management operations to be coordinated through a grouping of
FlashCopy pairs so that a common single point in time is chosen for copying target volumes
from their respective source volumes. This capability allows a consistent copy of data for an
application that spans multiple volumes.

The SVC also permits source and target volumes for FlashCopy to be thin-provisioned
volumes. FlashCopies to or from thin-provisioned volumes allow the duplication of data while
using less space. The space that such copies use depends on the rate of change of the data,
so they are typically used where the copy is needed for only a limited time. Over time, they
might fill the physical space that they were allocated. Reverse FlashCopy enables target
volumes to become restore points for the source volume without breaking the FlashCopy
relationship and without having to wait for the original copy operation to complete. The SVC
supports multiple targets and therefore multiple rollback points.

In most practical scenarios, the FlashCopy functionality of the SVC is integrated into a
process or procedure that allows the benefits of the point-in-time copies to be used to
address business needs. IBM offers IBM Spectrum Protect Snapshot for this functionality. For
more information about IBM Spectrum Protect Snapshot, see this website:
http://www.ibm.com/software/products/en/spectrum-protect-snapshot

Most clients aim to integrate the FlashCopy feature for point-in-time copies and quick
recovery of their applications and databases.

2.10.3 Image mode migration and volume mirroring migration


Two data migration methods are available outside of the licensed Advanced Copy Services
features: image mode migration and volume mirroring migration. The base software of the
SVC includes both of these capabilities.

Image mode migration works by establishing a one-to-one static mapping of volumes and
MDisks. This mapping allows the data on the MDisk to be presented directly through the
volume layer and allows the data to be moved between volumes and the associated backing
MDisks. This function allows the SVC to be used as a migration tool, for example, to migrate
from Vendor A hardware to Vendor B hardware where the two systems have no other
compatibility with each other.

Volume mirroring migration is a use of the volume mirroring facility, which allows the SVC to
maintain two copies of a volume in different storage pools. As with the logical volume
managers of certain operating systems, the SVC can mirror data transparently between two
sets of physical hardware. You can use this feature to move data between storage pools with
no host I/O interruption by removing the original copy after the mirroring is complete. This
feature is much more limited than FlashCopy and must not be used where FlashCopy is
appropriate. Instead, use it as an infrequent, hardware-refresh aid, because it allows you to
move from an old storage system to a new storage system without interruption.

Careful planning: When you are migrating by using the volume mirroring migration, your
I/O rate is limited to the slowest of the two MDisk groups that are involved. Therefore,
planning carefully to avoid affecting the live systems is imperative.


2.11 SAN Volume Controller clustered system overview


In simple terms, a clustered system or system is a collection of servers that together provide a
set of resources to a client. The key point is that the client has no knowledge of the underlying
physical hardware of the system. The client is isolated and protected from changes to the
physical hardware. This arrangement offers many benefits including, most significantly, high
availability.

Resources on the clustered system act as highly available versions of unclustered resources.
If a node (an individual computer) in the system is unavailable or too busy to respond to a
request for a resource, the request is passed transparently to another node that can process
the request. The clients are unaware of the exact locations of the resources that they use.

The SVC is a collection of up to eight nodes, which are added in pairs that are known as I/O
Groups. These nodes are managed as a set (system), and they present a single point of
control to the administrator for configuration and service activity.

The eight-node limit for an SVC system is a limitation that is imposed by the microcode and
not a limit of the underlying architecture. Larger system configurations might be available in
the future.

Although the SVC code is based on a purpose-optimized Linux kernel, the clustered system
feature is not based on Linux clustering code. The clustered system software within the SVC,
that is, the event manager cluster framework, is based on the outcome of the COMPASS
research project. It is the key element that isolates the SVC application from the underlying
hardware nodes. The clustered system software makes the code portable. It provides the
means to keep the single instances of the SVC code that are running on the separate nodes
of a system in sync. Therefore, restarting nodes during a code upgrade, adding new nodes,
removing old nodes, or node failures do not affect the SVC’s availability.

All active nodes of a system must know that they are members of the system, especially in
situations where it is key to have a solid mechanism to decide which nodes form the active
system, such as the split-brain scenario where single nodes lose contact with other nodes. A
worst case scenario is a system that splits into two separate systems.

Within an SVC system, the voting set and a quorum disk are responsible for the integrity of
the system. If nodes are added to a system, they are added to the voting set. If nodes are
removed, they are removed quickly from the voting set. Over time, the voting set and the
nodes in the system can completely change so that the system migrates onto a separate set
of nodes from the set on which it started.

The SVC clustered system implements a dynamic quorum. Following a loss of nodes, if the
system can continue to operate, it adjusts the quorum requirement so that further node failure
can be tolerated.


The node with the lowest Node Unique ID in a system becomes the boss node for the group
of nodes, and it determines (from the quorum rules) whether the nodes can operate as the
system. This node also presents up to two cluster IP addresses on one or both of its Ethernet
ports to allow access for system management.

2.11.1 Quorum disks


A quorum disk is an MDisk or a managed drive that contains a reserved area that is used
exclusively for system management. A system automatically assigns quorum disk candidates.
Quorum disks are used when there is a problem in the SAN fabric or when nodes are shut
down, which leaves half of the nodes remaining in the system. This type of problem causes a
loss of communication between the nodes that remain in the system and those that do not
remain. The nodes are split into groups where the remaining nodes in each group can
communicate with each other, but not with the other group of nodes that were formerly part of
the system.

In this situation, some nodes must stop operating and processing I/O requests from hosts to
preserve data integrity while maintaining data access. If a group contains less than half the
nodes that were active in the system, the nodes in that group stop operating and processing
I/O requests from hosts.

It is possible for a system to split into two groups, with each group containing half the original
number of nodes in the system. A quorum disk determines which group of nodes stops
operating and processing I/O requests. In this tie-break situation, the first group of nodes that
accesses the quorum disk is marked as the owner of the quorum disk and, as a result,
continues to operate as the system, handling all I/O requests. If the other group of nodes
cannot access the quorum disk, or finds that the quorum disk is owned by another group of
nodes, it stops operating as the system and does not handle I/O requests.

A system can have only one active quorum disk that is used for a tie-break situation.
However, the system uses three quorum disks to record a backup of system configuration
data to be used in the event of a disaster. The system automatically selects one active
quorum disk from these three disks. The other quorum disk candidates provide redundancy if
the active quorum disk fails before a system is partitioned. To avoid the possibility of losing all
the quorum disk candidates with a single failure, assign quorum disk candidates on multiple
storage systems.

Quorum disk requirements: To be considered eligible as a quorum disk, a LUN must meet
the following criteria:
򐂰 It must be presented by a disk subsystem that is supported to provide SVC quorum
disks.
򐂰 It must have been manually allowed to be a quorum disk candidate by using the
chcontroller -allowquorum yes command.
򐂰 It must be in managed mode (no image mode disks).
򐂰 It must have sufficient free extents to hold the system state information and the stored
configuration metadata.
򐂰 It must be visible to all of the nodes in the system.

Quorum disk placement: If possible, the SVC places the quorum candidates on separate
disk subsystems. However, after the quorum disk is selected, no attempt is made to ensure
that the other quorum candidates are presented through separate disk subsystems.


Important: Verifying quorum disk placement, and adjusting it so that the candidates are on
separate storage systems (if possible), reduces the dependency on a single storage system
and can increase quorum disk availability significantly.

You can list the quorum disk candidates and the active quorum disk in a system by using the
lsquorum command.

When the set of quorum disk candidates is chosen, it is fixed. However, a new quorum disk
candidate can be chosen under one of the following conditions:
򐂰 When the administrator requests that a specific MDisk becomes a quorum disk by using
the chquorum command
򐂰 When an MDisk that is a quorum disk is deleted from a storage pool
򐂰 When an MDisk that is a quorum disk changes to image mode

An offline MDisk is not replaced as a quorum disk candidate.
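
The following sketch illustrates the kind of output that lsquorum returns and how chquorum is
used to request a different MDisk as a quorum disk candidate; the MDisk names, controllers,
and exact output columns are illustrative only and vary by system:

IBM_2145:ITSO_SVC:superuser>lsquorum
quorum_index status id name   controller_id controller_name active object_type
0            online 2  mdisk2 0             DS8K_A          yes    mdisk
1            online 5  mdisk5 1             V7000_B         no     mdisk
2            online 9  mdisk9 2             DS5K_C          no     mdisk

IBM_2145:ITSO_SVC:superuser>chquorum -mdisk mdisk7 1

In this example, the chquorum command requests that mdisk7 replace the candidate that
currently holds quorum index 1.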

For disaster recovery purposes, a system must be regarded as a single entity, so the system
and the quorum disk must be colocated.

Special considerations are required for the placement of the active quorum disk for stretched
or split-cluster and split I/O Group configurations. For more information, see the IBM
Knowledge Center.

Important: Running an SVC system without a quorum disk can seriously affect your
operation. A lack of available quorum disks for storing metadata prevents any migration
operation (including a forced MDisk delete).

Mirrored volumes can be taken offline if no quorum disk is available. This behavior occurs
because the synchronization status for mirrored volumes is recorded on the quorum disk.

During the normal operation of the system, the nodes communicate with each other. If a node
is idle for a few seconds, a heartbeat signal is sent to ensure connectivity with the system. If a
node fails for any reason, the workload that is intended for the node is taken over by another
node until the failed node is restarted and readmitted into the system (which happens
automatically).

If the microcode on a node becomes corrupted, which results in a failure, the workload is
transferred to another node. The code on the failed node is repaired, and the node is
readmitted into the system (which is an automatic process).

IP quorum configuration
In a stretched configuration or HyperSwap configuration, you must use a third, independent
site to house quorum devices. To use a quorum disk as the quorum device, this third site must
use Fibre Channel connectivity together with an external storage system. Sometimes, Fibre
Channel connectivity is not possible, so an IP-based quorum application can be used as the
quorum device for the third site instead; no Fibre Channel connectivity is required, and Java
applications run on hosts at the third site. In a local environment, no extra hardware or
networking, such as Fibre Channel or SAS attached storage, is required beyond what is
normally provisioned within a system.

However, there are strict requirements on the IP network and some disadvantages with using
IP quorum applications. Unlike quorum disks, all IP quorum applications must be reconfigured
and redeployed to hosts when certain aspects of the system configuration change. These
aspects include adding or removing a node from the system or changing node service IP
addresses.

For stable quorum resolution, the IP network must meet the following requirements:
򐂰 Connectivity must exist from the hosts to the service IP addresses of all nodes. The
network must also deal with the possible security implications of exposing the service IP
addresses, because this connectivity can also be used to access the service GUI if IP
quorum is configured incorrectly.
򐂰 Port 1260 is used by the IP quorum applications to communicate from the hosts to all
nodes.
򐂰 The maximum round-trip delay must not exceed 80 ms, which means 40 ms in each
direction.
򐂰 A minimum bandwidth of 2 MBps must be guaranteed for node-to-quorum traffic.

Even with IP quorum applications at the third site, quorum disks at site one and site two are
still required because they are used to store metadata. To provide quorum resolution, use the
mkquorumapp command to generate a Java application that is copied from the system and run
on a host at the third site. A maximum of five applications can be deployed. Currently, the
supported Java Runtime Environments are IBM Java 7.1 and IBM Java 8.
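
A hedged sketch of the deployment flow follows; the generated file name and its location in
the /dumps directory are typical values and might differ on your code level:

IBM_2145:ITSO_SVC:superuser>mkquorumapp
(download the generated ip_quorum.jar file from the /dumps directory on the
configuration node to a host at the third site)
java -jar ip_quorum.jar

The application must remain running on the third-site host, and it must be regenerated and
redeployed after configuration changes, such as adding or removing a node or changing
service IP addresses.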

2.11.2 Stretched cluster


An I/O Group is formed by a pair of SVC nodes. These nodes act as failover nodes for each
other, and hold mirrored copies of cached volume writes. For more information about I/O
Groups, see 2.4.2, “I/O Groups” on page 18. For more information about stretched cluster
configuration, see Appendix C, “Stretched Cluster” on page 939.

2.11.3 Cache
The primary benefit of storage cache is to improve I/O response time. Reads and writes to a
magnetic disk drive experience seek time and latency time at the drive level, which can result
in 1 ms - 10 ms of response time (for an enterprise-class disk).

The SVC 2145-DH8 nodes provide 32 GiB of memory per node (optionally 64 GiB with the
second CPU installed, which offers more processor power and memory for the Real-time
Compression (RtC) feature), which allows up to 128 GiB per I/O Group and up to 512 GiB per
SVC system. The SVC provides a flexible cache model, and the node’s memory can be used
as read or write cache. The size of the write cache is limited to a maximum of 12 GiB of the
node’s memory. Depending on the current I/O conditions on a node, the entire memory can
be used as read cache.

Cache is allocated in 4 KiB segments. A segment holds part of one track. A track is the unit of
locking and destaging granularity in the cache. The cache virtual track size is 32 KiB (eight
segments). A track might be only partially populated with valid pages. The SVC coalesces
writes up to a 256 KiB track size if the writes are in the same tracks before destage. For
example, if 4 KiB is written into a track, another 4 KiB is written to another location in the
same track. Therefore, the blocks that are written from the SVC to the disk subsystem can be
any size between 512 bytes up to 256 KiB. The large cache and advanced cache
management algorithms within the 2145-DH8 allow it to improve on the performance of many
types of underlying disk technologies. The SVC’s capability to manage, in the background,
the destaging operations that are incurred by writes (in addition to still supporting full data
integrity) assists with SVC’s capability in achieving good database performance.

The cache is separated into two layers: an upper cache and a lower cache.

Figure 2-19 shows the separation of the upper and lower cache.


Figure 2-19 Separation of upper and lower cache (dual-layer cache architecture: the upper
cache is a simple write cache, the lower cache provides the algorithm intelligence, and buffer
space is shared between the two layers)

The upper cache delivers the following functionality, which allows the SVC to streamline data
write performance:
򐂰 Provides fast write response times to the host by being as high up in the I/O stack as
possible
򐂰 Provides partitioning

The lower cache delivers the following additional functionality:


򐂰 Ensures that the write cache between the two nodes is in sync
򐂰 Partitions the cache to ensure that a slow back end cannot consume the entire cache
򐂰 Uses a destage algorithm that adapts to the amount of data and the back-end
performance
򐂰 Provides read caching and prefetching

Combined, the two levels of cache also deliver the following functionality:
򐂰 Pins data when the LUN goes offline
򐂰 Provides enhanced statistics for Tivoli® Storage Productivity Center and maintains
compatibility with an earlier version
򐂰 Provides trace for debugging
򐂰 Reports medium errors
򐂰 Resynchronizes the cache correctly and provides the atomic write functionality
򐂰 Ensures that other partitions continue operation when one partition becomes 100% full of
pinned data
򐂰 Supports fast-write (two-way and one-way), flush-through, and write-through
򐂰 Integrates with T3 recovery procedures


򐂰 Supports two-way operation


򐂰 Supports none, read-only, and read/write as user-exposed caching policies
򐂰 Supports flush-when-idle
򐂰 Supports expanding cache as more memory becomes available to the platform
򐂰 Supports credit throttling to avoid I/O skew and offer fairness/balanced I/O between the
two nodes of the I/O Group
򐂰 Enables switching of the preferred node without needing to move volumes between I/O
Groups

Depending on the size, age, and technology level of the disk storage system, the total
available cache in the SVC can be larger, smaller, or about the same as the cache that is
associated with the disk storage. Because hits to the cache can occur in either the SVC or the
disk controller level of the overall system, the system as a whole can take advantage of the
larger amount of cache wherever the cache is located. Therefore, if the storage controller
level of the cache has the greater capacity, expect hits to this cache to occur, in addition to
hits in the SVC cache.

Also, regardless of their relative capacities, both levels of cache tend to play an important role
in allowing sequentially organized data to flow smoothly through the system. The SVC cannot
increase the throughput potential of the underlying disks in all cases because this increase
depends on both the underlying storage technology and the degree to which the workload
exhibits hotspots or sensitivity to cache size or cache algorithms.

2.11.4 Clustered system management


The SVC can be managed by one of the following interfaces:
򐂰 A text command-line interface (CLI), which is accessed through a Secure Shell (SSH)
connection, for example, PuTTY
򐂰 A web browser-based graphical user interface (GUI)
򐂰 IBM Spectrum Control

The GUI and a web server are installed in the SVC system nodes. Therefore, any browser
can access the management GUI if the browser is pointed at the system IP address.
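
For example, the CLI can be reached with any SSH client by connecting to the system
(cluster) IP address; the address and user in this sketch are placeholders:

ssh [email protected]
IBM_2145:ITSO_SVC:superuser>lssystem

The lssystem command displays the system-wide properties, such as the system name,
code level, and topology.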

Management console
The IBM System Storage Productivity Center (SSPC) appliance, which was formerly used as
the management console for the SVC, is no longer needed. The SVC is managed through
the internal management GUI.

2.12 User authentication


The SVC provides the following methods of user authentication to control access to the
web-based management interface (GUI) and CLI:
򐂰 Local authentication is performed within the SVC system.
The available local CLI authentication methods are SSH key authentication and user
name and password.
Local GUI authentication is done by using the user name and password.


򐂰 Remote authentication means that the validation of a user’s permission to access the
SVC’s management CLI or GUI is performed at a remote authentication server. That is,
except for the superuser account, no local user account administration is necessary on the
SVC.
You can use an existing user management system in your environment to control the SVC
user access, which implements a single sign-on (SSO) for the SVC.

2.12.1 Remote authentication through LDAP


Until SVC 6.2, the only supported remote authentication service was the Tivoli Embedded
Security Services, which is part of the Tivoli Integrated Portal. Beginning with SVC 6.3,
remote authentication through native Lightweight Directory Access Protocol (LDAP) was
introduced. The supported types of LDAP servers are IBM Security Directory Server,
Microsoft Active Directory (MS AD), and OpenLDAP, for example, running on a Linux system.

Users that are authenticated by an LDAP server can log in to the SVC web-based GUI and
the CLI. Unlike remote authentication through Tivoli Integrated Portal, users do not need to be
configured locally for CLI access. An SSH key is not required for CLI login in this scenario,
either. However, locally administered users can coexist with remote authentication enabled.
The default administrative user that uses the name superuser must be a local user. The
superuser cannot be deleted or manipulated, except for the password and SSH key.

If multiple LDAP servers are available, you can assign multiple LDAP servers to improve
availability. Authentication requests are processed by those LDAP servers that are marked as
preferred unless the connections fail or a user is not found. Requests are distributed across
all preferred servers for load balancing in a round-robin fashion.

A user that is authenticated remotely by an LDAP server is granted permissions on the SVC
according to the role that is assigned to the group of which it is a member. That is, any SVC
user group with its assigned role, for example, CopyOperator, must exist with an identical
name on the SVC system and on the LDAP server, if users in that role are to be authenticated
remotely.

You must adhere to the following guidelines:


򐂰 Native LDAP authentication or Tivoli Integrated Portal can be selected, but not both.
򐂰 If more than one LDAP server is defined, they all must be of the same type, for example,
MS AD.
򐂰 The SVC user group must be enabled for remote authentication.
򐂰 The user group name must be identical in the SVC user group management and on the
LDAP server; the user group name is case sensitive.
򐂰 The LDAP server must transmit a group membership attribute for the user. The default
attribute name for MS AD and OpenLDAP is memberOf. The default attribute name for
Tivoli Directory Server is ibm-allGroups. For OpenLDAP implementations, you might need
to configure the memberOf overlay if it is not in place.

In the following example, we demonstrate LDAP user authentication that uses a Microsoft
Windows Server domain controller that is acting as an LDAP server.

Complete the following steps to configure remote authentication:


1. Configure the SVC for remote authentication by selecting Settings → Security, as shown
in Figure 2-20.


Figure 2-20 Configure Remote Authentication

2. Click Configure Remote Authentication.


3. Select the authentication type, as shown in Figure 2-21 on page 61. Select LDAP and
then click Next.

Figure 2-21 Select the authentication type

4. You must configure the following parameters in the Configure Remote Authentication
window, as shown in Figure 2-22 and Figure 2-23 on page 62:
– For LDAP Type, select Microsoft Active Directory. (For an OpenLDAP server, select
Other for the type of LDAP server.)
– For Security, choose None. (If your LDAP server requires a secure connection, select
Transport Layer Security; the LDAP server’s certificate is configured later.)
– Click Advanced Settings to expand the bottom part of the window. Leave the User
Name and Password fields empty if your LDAP server supports anonymous bind. For
our MS AD server, we enter the credentials of an existing user on the LDAP server with
permission to query the LDAP directory. You can enter this information in the format of
an email address, for example, [email protected], or in the distinguished
format, for example, cn=Administrator,cn=users,dc=itso,dc=corp. Note the common
name portion cn=users for MS AD servers.


– If your LDAP server uses separate attributes from the predefined attributes, you can
edit them here. You do not need to edit the attributes when MS AD is used as the LDAP
service.

Figure 2-22 Configure Remote Authentication

Figure 2-23 Configure Remote Authentication Advanced Settings

5. Figure 2-24 shows the Configure Remote Authentication window, where we configure the
following LDAP server details:
– Enter the IP address of at least one LDAP server.
– Although it is marked as optional, it might be required to enter a Base DN in the
distinguished name format, which defines the starting point in the directory at which to
search for users, for example, dc=itso,dc=corp.
– You can add more LDAP servers by clicking the plus (+) icon.
– Check Preferred if you want to use preferred LDAP servers.


– Click Finish to save the settings.

Figure 2-24 LDAP servers configuration

Now that remote authentication is enabled and configured on the SVC, we work with the
user groups. For remote authentication through LDAP, no local SVC users are maintained, but
the user groups must be set up correctly. The existing built-in SVC user groups can be used,
and groups that are created in SVC user management can be used. However, the use of
self-defined groups might be advisable to prevent the SVC default groups from interfering with
the existing group names on the LDAP server. Any user group, whether built-in or
self-defined, must be enabled for remote authentication.

Complete the following steps to create a user group:


1. Select Access → Users → Create User Group. As shown in Figure 2-25, we create a
user group.

Figure 2-25 Create User Group window

2. In the Create User Group window that is shown in Figure 2-25, complete the following
steps:


a. Enter a meaningful Group Name (for example, SVC_LDAP_CopyOperator), according to
its intended role.
b. Select the Role that you want to use by clicking Copy Operator.
c. To mark LDAP for Remote Authentication, select Enable for this group and then click
Create.
You can modify these settings in a group’s properties at any time.
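
The same user group can also be created from the CLI, which is useful for scripting; the
group name matches the one used in this example:

svctask mkusergrp -name SVC_LDAP_CopyOperator -role CopyOperator -remote

The -remote flag enables the group for remote (LDAP) authentication and corresponds to
the Enable for this group option in the GUI.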

Next, we complete the following steps to create a group with the same name on the LDAP
server, that is, in the Active Directory Domain:
1. On the Domain Controller, start the Active Directory Users and Computers management
console and browse your domain structure to the entity that contains the user groups.
Click the Create new user group icon as highlighted in Figure 2-26 on page 64 to create
a group.

Figure 2-26 Create a user group on the LDAP server

2. Enter the same name, SVC_LDAP_CopyOperator, in the Group Name field, as shown in
Figure 2-27. (The name is case sensitive.) Select the correct Group scope for your
environment and select Security for Group type. Click OK.

Figure 2-27 Edit the group properties

3. Edit the user’s properties so that the user can log in to the SVC. Make the user a member
of the appropriate user group for the intended SVC role, as shown in Figure 2-28 on
page 65, and click OK to save and apply the settings.


Figure 2-28 Make the user a member of the appropriate group

We are now ready to authenticate the users for the SVC through the remote server. To ensure
that everything works correctly, we complete the following steps to run a few tests to verify the
communication between the SVC and the configured LDAP service:
1. Select Settings → Security, and then select Global Actions → Test LDAP
Connections, as shown in Figure 2-29.

Figure 2-29 LDAP connections test

Figure 2-30 on page 66 shows the result of a successful connection test.


Figure 2-30 Successful LDAP connection test

2. We test a real user authentication attempt. Select Settings → Security, then select
Global Actions → Test LDAP Authentication, as shown in Figure 2-31.

Figure 2-31 Test LDAP Authentication

3. As shown in Figure 2-32, enter the User Credentials of a user that was defined on the
LDAP server, and then click Test.


Figure 2-32 LDAP authentication test

The message, CMMVC7148I Task completed successfully, is shown after a successful


test.

Both the LDAP connection test and the LDAP authentication test must complete successfully
to ensure that LDAP authentication works correctly. If an error message points to user
authentication problems during the LDAP authentication test, it might help to analyze the
LDAP server’s response outside of the SVC. You can use any native LDAP query tool, for
example, the no-charge LDAPBrowser tool, which is available at this website:
http://www.ldapbrowser.com/

For a pure MS AD environment, you can use the Microsoft Sysinternals ADExplorer tool,
which is available at this website:
http://technet.microsoft.com/en-us/sysinternals/bb963907

Assuming that the LDAP connection and the authentication test succeeded, users can log in
to the SVC GUI and CLI by using their network credentials, for example, their Microsoft
Windows domain user name and password.

Figure 2-33 shows the web GUI login window with the Windows domain credentials entered.
A user can log in with their short name (that is, without the domain component) or with the
fully qualified user name in the form of an email address.

Figure 2-33 GUI login


After a successful login, the user name is displayed in a welcome message in the upper-right
corner of the window, as highlighted in Figure 2-34 on page 68.

Figure 2-34 Welcome message after a successful login

Logging in by using the CLI is possible with the short user name or the fully qualified name.
The lscurrentuser CLI command displays the user name of the currently logged in user and
their role.
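
The following is an illustrative example of the command and its output; the user name is a
placeholder, and the exact output fields can vary by code level:

IBM_2145:ITSO_SVC:jdoe>lscurrentuser
name jdoe
role CopyOperator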

2.12.2 SAN Volume Controller modified login message


You can create or change a message that displays when users log on to the system. When
users log on to the system with the management GUI, command-line interface, or service
assistant, the message displays before they log on to the system.

2.12.3 SAN Volume Controller user names


User names must be unique and they can contain up to 256 printable ASCII characters.

Forbidden characters are the single quotation mark (‘), colon (:), percent symbol (%),
asterisk (*), comma (,), and double quotation marks (“).

Also, a user name cannot begin or end with a blank space.

Passwords for local users can be up to 64 printable ASCII characters. There are no forbidden
characters; however, passwords cannot begin or end with blanks.
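
As an illustration of these rules, a local user can be created from the CLI as follows; the user
name, group, and password are placeholders only:

svctask mkuser -name Operator_1 -usergrp CopyOperator -password Passw0rdCLI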

2.12.4 SAN Volume Controller superuser


A special local user that is called the superuser always exists on every system. It cannot be
deleted. Its password is set by the user during clustered system initialization. The superuser
password can be reset to its default value of passw0rd by using the technician port on SAN
Volume Controller 2145-DH8 nodes or the front panel on earlier models of the system. To
meet varying security requirements, this reset function can be enabled or disabled by using
the CLI. However, disabling it makes the system inaccessible if all users forget their
passwords or lose their SSH keys.

To register an SSH key for the superuser to provide command-line access, select Service
Assistant Tool → Configure CLI Access to assign a temporary key. However, the key is lost
during a node restart. The permanent way to add the key is through the normal GUI; select
User Management → superuser → Properties to register the SSH key for the superuser.
The superuser is always a member of user group 0, which has the most privileged role within
the SVC.

2.12.5 SAN Volume Controller Service Assistant Tool


The SVC has a tool for performing service tasks on the system. You can service a node
through an Ethernet connection by using a web browser to access a GUI interface. The
function is called the Service Assistant Tool, and it requires you to enter the superuser
password during login. You can access the Service Assistant Tool from your browser by
entering the system management GUI IP address followed by /service.

Figure 2-35 shows the Service Assistant Tool.

Figure 2-35 Service Assistant Tool

2.12.6 SAN Volume Controller roles and user groups


Each user group is associated with a single role. The role for a user group cannot be
changed, but user groups (with one of the defined roles) can be created.


User groups are used for local and remote authentication. By default, six user groups are
defined in an SVC system, as shown in Table 2-4.

Table 2-4 User groups


User group ID User group Role

0 SecurityAdmin SecurityAdmin

1 Administrator Administrator

2 CopyOperator CopyOperator

3 Service Service

4 Monitor Monitor

5 VASA Provider VASA Provider

The access rights for a user who belongs to a specific user group are defined by the role that
is assigned to the user group. It is the role that defines what a user can or cannot do on an
SVC system.

Table 2-5 on page 70 shows the roles ordered (from the top) by the least privileged Monitor
role down to the most privileged SecurityAdmin role. The NasSystem role has no special user
group.

Table 2-5 Commands that are permitted for each role


Role Commands that are allowed by role

VASA Provider Users with this role can manage VMware vSphere Virtual Volumes.
chauthservice,chldap,chldapserver,chsecurity,chuser,chusergrp,
mkldapserver,mkuser,mkusergrp,rmldapserver,rmuser,rmusergrp,
setpwdreset

Monitor All svcinfo or informational commands and svctask finderr, dumperrlog,


dumpinternallog, chcurrentuser, ping, svcconfig backup, and svqueryclock

Service All commands that are allowed for the Monitor role and applysoftware,
setlocale, addnode, rmnode, cherrstate, writesernum, detectmdisk,
includemdisk, clearerrlog, cleardumps, settimezone, stopcluster,
startstats, stopstats, and setsystemtime

CopyOperator All commands allowed for the Monitor role and prestartfcconsistgrp,
startfcconsistgrp, stopfcconsistgrp, chfcconsistgrp, prestartfcmap,
startfcmap, stopfcmap, chfcmap, startrcconsistgrp, stoprcconsistgrp,
switchrcconsistgrp, chrcconsistgrp, startrcrelationship,
stoprcrelationship, switchrcrelationship, chrcrelationship, and
chpartnership

Administrator All commands, except chauthservice, mkuser, rmuser, chuser, mkusergrp,


rmusergrp, chusergrp, and setpwdreset

SecurityAdmin All commands, except those commands that are allowed by the NasSystem
role

NasSystem svctask: addmember, activatemember, and expelmember


Create and delete file system volumes.


2.12.7 SAN Volume Controller local authentication


Local users are users that are managed entirely on the clustered system without the
intervention of a remote authentication service. Local users must have a password or an SSH
public key, or both. Key authentication is attempted first with the password as a fallback. The
password and the SSH key are used for command-line or file transfer (SecureCopy) access.
For GUI access, only the password is used.

Local users: Local users are created for each SVC system. Each user has a name, which
must be unique across all users in one system.

If you want to allow access for a user on multiple systems, you must define the user in each
system with the same name and the same privileges.

A local user always belongs to only one user group. Figure 2-36 on page 71 shows an
overview of local authentication within the SVC.

Figure 2-36 Simplified overview of SVC local authentication

2.12.8 SAN Volume Controller remote authentication and single sign-on


You can configure an SVC system to use a remote authentication service. Remote users are
users that are managed by the remote authentication service and require command-line or
file-transfer access.

Remote users must be defined in the SVC system only if command-line access is required;
no local user definition is required for GUI-only remote access. For users that require CLI
access with remote authentication, the remote authentication flag must be set and a
password must be defined locally for the user.

Remote users cannot belong to any user group because the remote authentication service,
for example, an LDAP directory server, such as IBM Tivoli Directory Server or Microsoft
Active Directory, delivers the user group information.


Figure 2-37 on page 72 gives an overview of SVC remote authentication.

Figure 2-37 Simplified overview of SVC remote authentication

The authentication service that is supported by the SVC is the Tivoli Embedded Security
Services server component level 6.2.

The Tivoli Embedded Security Services server provides the following key features:
򐂰 Tivoli Embedded Security Services isolates the SVC from the actual directory protocol in
use, which means that the SVC communicates only with Tivoli Embedded Security
Services to get its authentication information. The type of protocol that is used to access
the central directory or the kind of the directory system that is used is not apparent to the
SVC.
򐂰 Tivoli Embedded Security Services provides a secure token facility that is used to enable
single sign-on (SSO). SSO means that users do not have to log in multiple times when
they are using what appears to them to be a single system. SSO is used within Tivoli
Productivity Center. When the SVC access is started from within Tivoli Productivity
Center, the user does not have to log in to the SVC because the user logged in to Tivoli
Productivity Center.

Using a remote authentication service


Complete the following steps to use the SVC with a remote authentication service:
1. Configure the system with the location of the remote authentication server:
– Change settings by using the following command:
svctask chauthservice
– View current settings by using the following command:
svcinfo lscluster


The SVC supports an HTTP or HTTPS connection to the Tivoli Embedded Security
Services server. If the HTTP option is used, the user and password information is
transmitted in clear text over the IP network.
2. Configure user groups on the system that match those user groups that are used by the
authentication service. For each group of interest that is known to the authentication
service, an SVC user group must exist with the same name and the remote setting
enabled.
For example, you can have a group that is called sysadmins, whose members require the
SVC Administrator role. Configure this group by using the following command:
svctask mkusergrp -name sysadmins -remote -role Administrator
If none of a user’s groups match any of the SVC user groups, the user is not permitted to
access the system.
3. Configure users that do not require SSH access. Any SVC users that use the remote
authentication service and do not require SSH access must be deleted from the system.
The superuser cannot be deleted; it is a local user and cannot use the remote
authentication service.
4. Configure users that require SSH access. Any SVC users that use the remote
authentication service and require SSH access must have their remote setting enabled
and the same password set on the system and the authentication service. The remote
setting instructs the SVC to consult the authentication service for group information after
the SSH key authentication step to determine the user’s role. The need to configure the
user’s password on the system in addition to the authentication service is because of a
limitation in the Tivoli Embedded Security Services server software.
5. Configure the system time. For correct operation, the SVC system and the system that is
running the Tivoli Embedded Security Services server must have the same view of the
current time. The easiest way is to have them both use the same Network Time Protocol
(NTP) server.

Note: Failure to follow this step can lead to poor interactive performance of the SVC
user interface or incorrect user-role assignments.
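
The following is a hedged sketch of steps 4 and 5 from the CLI; the user name, password,
and NTP server address are placeholders, and the exact flags should be verified against the
CLI reference for your code level:

svctask mkuser -name jdoe -remote -password Passw0rdCLI
svctask chsystem -ntpip 192.168.1.10

The first command creates a user whose group membership (and therefore role) is resolved
by the remote authentication service after authentication; the second command points the
system at the same NTP server that the Tivoli Embedded Security Services server uses.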

Also, Tivoli Storage Productivity Center uses the Tivoli Integrated Portal infrastructure and its
underlying IBM WebSphere® Application Server capabilities to use an LDAP registry and
enable SSO.

2.13 SAN Volume Controller hardware overview


As defined in the underlying COMPASS architecture, the hardware nodes are based on Intel
processors with standard PCI-X adapters to interface with the SAN and the LAN.

The new SVC 2145-DH8 Storage Engine has the following key hardware features:
򐂰 One or two Intel Xeon E5 v2 Series eight-core processors, each with 32 GB memory
򐂰 16 Gb FC, 8 Gb FC, 10 Gb Ethernet, and 1 Gb Ethernet I/O ports for FC, iSCSI, and Fibre
Channel over Ethernet (FCoE) connectivity
򐂰 Optional feature: Hardware-assisted compression acceleration
򐂰 Optional feature: 12 Gb SAS expansion enclosure attachment for internal flash storage
򐂰 Two integrated battery units


򐂰 2U, 19-inch rack mount enclosure with ac power supplies

Model 2145-DH8 includes three 1 Gb Ethernet ports as standard for iSCSI connectivity.
Model 2145-DH8 can be configured with up to four I/O adapter features that provide up to
sixteen 16 Gb FC ports, up to twelve 8 Gb FC ports, up to four 10 Gb Ethernet (iSCSI/Fibre
Channel over Ethernet (FCoE)) ports, or a mixture of the above. For more information, see
the optional feature section in the IBM Knowledge Center:
https://ibm.biz/BdHnKF

Real-time Compression workloads can benefit from Model 2145-DH8 configurations with two
eight-core processors with 64 GB of memory (total system memory). Compression workloads
can also benefit from the hardware-assisted acceleration that is offered by the addition of up
to two compression accelerator cards. The SVC Storage Engines can be clustered to help
deliver greater performance, bandwidth, and scalability. An SVC clustered system can contain
up to four node pairs or I/O Groups. Model 2145-DH8 storage engines can be added into
existing SVC clustered systems that include previous generation storage engine models.

For more information, see IBM SAN Volume Controller Software Installation and
Configuration Guide, GC27-2286.

For more information about integration into existing clustered systems, compatibility, and
interoperability with installed nodes and uninterruptible power supplies, see this website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1002999

The SVC 2145-DH8 includes preinstalled IBM Spectrum Virtualize 7.6 software.

Figure 2-38 shows the front view of the SVC 2145-DH8 node.

Figure 2-38 SVC 2145-DH8 storage engine


2.13.1 Fibre Channel interfaces


The SVC provides link speeds of 8/16 Gbps on SVC 2145-DH8 nodes. The nodes include up
to three 4-port 8 Gbps HBAs or up to four 4-port 16 Gbps HBAs. The FC ports on these node
types auto-negotiate the link speed that is used with the FC switch. The ports normally
operate at the maximum speed that is supported by the SVC port and the switch. However, if
many link errors occur, the ports might operate at a lower speed than what is supported.

The actual port speed for each of the ports can be displayed through the GUI, CLI, the node’s
front panel, and by light-emitting diodes (LEDs) that are placed at the rear of the node.

For more information, see SAN Volume Controller Model 2145-DH8 Hardware Installation
Guide, GC27-6490. The PDF is at this website:
https://ibm.biz/BdEzM7

The SVC imposes no limit on the FC optical distance between SVC nodes and host servers.
FC standards, with small form-factor pluggable optics (SFP) capabilities and cable type,
dictate the maximum FC distances that are supported.

If longwave SFPs are used in the SVC nodes, the longest supported FC link between the
SVC and switch is 40 km (24.85 miles).

Table 2-6 shows the cable length that is supported by shortwave SFPs.

Table 2-6 Overview of supported cable length

FC speed               OM1 (M6)              OM2 (M5)             OM3 (M5E)             OM4 (M5F)
                       standard 62.5/125 µm  standard 50/125 µm   optimized 50/125 µm   optimized 50/125 µm

2 Gbps FC              150 m (492.1 ft)      300 m (984.3 ft)     500 m (1640.5 ft)     N/A
4 Gbps FC              70 m (229.7 ft)       150 m (492.1 ft)     380 m (1246.9 ft)     400 m (1312.3 ft)
8 Gbps FC (limiting)   20 m (65.6 ft)        50 m (164 ft)        150 m (492.1 ft)      190 m (623.4 ft)
16 Gbps FC             15 m (49.2 ft)        35 m (114.8 ft)      100 m (328.1 ft)      125 m (410.1 ft)

Table 2-7 shows the rules that relate to the number of inter-switch link (ISL) hops that are
allowed in a SAN fabric between the SVC nodes or the system.

Table 2-7 Number of supported ISL hops

Between the nodes in an I/O Group: 0 (connect to the same switch)
Between the nodes in separate I/O Groups: 0 (connect to the same switch)
Between the nodes and the disk subsystem: 1 (recommended: 0, connect to the same switch)
Between the nodes and the host server: maximum of 3

2.13.2 LAN interfaces


The 2145-DH8 node has three 1 Gbps LAN ports available. Also, this node supports 10 Gbps
Ethernet ports that can be used for iSCSI I/O.


The system configuration node can be accessed over the Technician Port for initial setup.
The clustered system can be managed by SSH clients or GUIs over separate physical IP
networks, which provides redundancy if one of these IP networks fails.

Support for iSCSI introduces one other IPv4 and one other IPv6 address for each SVC node
port. These IP addresses are independent of the system configuration IP addresses. An IP
address overview is shown in Figure 2-16 on page 47.

2.13.3 FCoE interfaces


The SVC also includes FCoE support. FCoE is still in its infancy, but 16 Gbit native Fibre
Channel might be the last speed increase in mass production use. After that, SANs and
Ethernet networks finally converge with 40 Gbit and beyond. The various Fibre Channel
Forwarder (FCF) requirements on the Converged Enhanced Ethernet (CEE) FCoE
infrastructure mean that FCoE host attachment is supported. Disk and fabric attach are still
by way of native Fibre Channel. As with most SVC features, the FCoE support is a software
upgrade.

If you have SVC with the 10 Gbit features, FCoE support is added with an upgrade to version
6.4. The same 10 Gbit ports are both iSCSI and FCoE capable. In terms of transport speed,
the FCoE ports compare well with the native Fibre Channel ports (10 Gbit versus 8 Gbit),
and recent enhancements to the iSCSI support mean that iSCSI performance levels are
similar to Fibre Channel performance levels.

2.14 Flash drives


Flash drives can be used, or more specifically, single-layer cell (SLC) or multilayer cell (MLC)
NAND Flash-based disks, to overcome a growing problem that is known as the memory or
storage bottleneck.

2.14.1 Storage bottleneck problem


The memory or storage bottleneck describes the steadily growing gap between the time that
is required for a CPU to access data that is in its cache memory (typically in nanoseconds)
and data that is on external storage (typically in milliseconds).

Although CPUs and cache/memory devices continually improve their performance, in
general, mechanical disks that are used as external storage do not. Figure 2-39 on page 77
shows these access time differences.


Figure 2-39 The memory or storage bottleneck

The actual times that are shown are not that important, but a dramatic difference exists
between accessing data that is in cache and data that is on an external disk.

We added a second scale to Figure 2-39, which gives you an idea of how long it takes to
access the data in a scenario where a single CPU cycle takes 1 second. This scale gives you
an idea of the importance of future storage technologies closing or reducing the gap between
access times for data that is stored in cache/memory versus access times for data that is
stored on an external medium.

Since magnetic disks were first introduced by IBM in 1956 (RAMAC), they have shown
remarkable progress in capacity growth, form factor and size reduction, price decrease
(cost per GB), and reliability.

However, the number of I/Os that a disk can handle and the response time that it takes to
process a single I/O did not improve at the same rate, although they certainly did improve. In
actual environments, we can expect from today’s enterprise-class FC serial-attached SCSI
(SAS) disk up to 200 IOPS per disk with an average response time (a latency) of
approximately 6 ms per I/O.

Table 2-8 shows a comparison of drive types and IOPS.

Table 2-8 Comparison of drive types to IOPS


Drive type IOPS

Nearline - SAS 90

SAS 10K 140

SAS 15K 200

Flash > 50000

Today’s rotating disks continue to advance in capacity (several TBs), form factor/footprint
(8.89 cm (3.5 inches), 6.35 cm (2.5 inches), and 4.57 cm (1.8 inches)), and price (cost per
GB), but they are not getting much faster.


The limiting factor is the number of revolutions per minute (RPM) that a disk can perform
(approximately 15,000). This factor defines the time that is required to access a specific data
block on a rotating device. Small improvements likely will occur in the future; however, a
significant step, such as doubling the RPM (if technically even possible), inevitably has an
associated increase in power usage and price that will be an inhibitor.

2.14.2 Flash Drive solution


Flash Drives can provide a solution for this dilemma. No rotating parts mean improved
robustness and lower power usage. A remarkable improvement in I/O performance and a
massive reduction in the average I/O response times (latency) are the compelling reasons to
use Flash Drives in today’s storage subsystems.

Enterprise-class Flash Drives typically deliver 85,000 read and 36,000 write IOPS with typical
latencies of 50 µs for reads and 800 µs for writes. Their form factors of 4.57 cm (1.8 inches),
6.35 cm (2.5 inches), and 8.89 cm (3.5 inches) and their interfaces (FC/SAS/SATA) make
them easy to integrate into existing disk shelves.

2.14.3 Flash Drive market


The Flash Drive storage market is rapidly evolving. The key differentiator among today’s Flash
Drive products that are available on the market is not the storage medium, but the logic in the
disk internal controllers. The top priorities in today’s controller development are optimally
handling what is referred to as wear-out leveling, which defines the controller’s capability to
ensure a device’s durability, and closing the remarkable gap between read and write I/O
performance.

Today’s Flash Drive technology is only a first step into the world of high-performance
persistent semiconductor storage. A group of the approximately 10 most promising
technologies is collectively referred to as Storage Class Memory (SCM).

Storage Class Memory


SCM promises a massive improvement in performance (IOPS), areal density, cost, and
energy efficiency compared to today’s Flash Drive technology. IBM Research is actively
engaged in these new technologies.

For more information about nanoscale devices, see this website:


http://researcher.watson.ibm.com/researcher/view_project.php?id=4284

For more information about SCM, see this website:


https://ibm.biz/BdEPQ7

For a comprehensive overview of the Flash Drive technology in a subset of the well-known
Storage Networking Industry Association (SNIA) Technical Tutorials, see these websites:
򐂰 http://www.snia.org/education/tutorials/2010/spring#solid
򐂰 http://www.snia.org/education/tutorials/fms2015

When these technologies become a reality, it will fundamentally change the architecture of
today’s storage infrastructures.


2.14.4 Flash Drives and SAN Volume Controller


The SVC supports the use of internal (up to Model 2145-CG8) or external Flash Drives.

Internal Flash Drives


Certain SVC models support 6.35 cm (2.5-inch) Flash Drives as internal storage. A maximum
of four drives can be installed per node, and up to 32 drives can be installed in a clustered
system. These drives can be used to create RAID MDisks that, in turn, can be used to create
volumes.

Internal Flash Drives can be configured in the following two RAID levels:
򐂰 RAID 1 - RAID 10: In this configuration, one half of the mirror is in each node of the I/O
Group, which provides redundancy if a node failure occurs.
򐂰 RAID 0: In this configuration, all the drives are assigned to the same node. This
configuration is intended to be used with Volume Mirroring because no redundancy is
provided if a node failure occurs.

External Flash Drives


The SVC can manage Flash Drives in externally attached storage controllers or enclosures.
The Flash Drives are configured as an array with a LUN and are presented to the SVC as a
normal MDisk. The solid-state MDisk tier then must be set by using the chmdisk -tier
generic_ssd command or the GUI.

The Flash MDisks can then be placed into a single Flash Drive tier storage pool.
High-workload volumes can be manually selected and placed into the pool to gain the
performance benefits of Flash Drives.

For a more effective use of Flash Drives, place the Flash Drive MDisks into a multitiered
storage pool that is combined with HDD MDisks (generic_hdd tier). Then, with Easy Tier
turned on, Easy Tier automatically detects and migrates high-workload extents onto the
solid-state MDisks.
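
For illustration, the tier assignment and a single-tier Flash pool might be configured as
follows; the MDisk and pool names are placeholders, and the extent size is an arbitrary
example:

svctask chmdisk -tier generic_ssd mdisk7
svctask mkmdiskgrp -name FLASH_POOL -ext 256 -mdisk mdisk7

The chmdisk command marks the externally virtualized Flash MDisk as the solid-state tier,
and the mkmdiskgrp command creates a storage pool with a 256 MiB extent size that contains
only that MDisk.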

For more information about IBM Flash Storage, see this website:
http://www.ibm.com/systems/storage/flash/

2.15 What is new with the SAN Volume Controller 7.6


This section highlights the new features.

2.15.1 Withdrawal of the SAN Volume Controller 2145-8xx


With SVC software version 7.6, the SVC 2145-8xx models are no longer supported. These
models can stay on their current version or can be upgraded to software version 7.5, which is
the last version that is available for the SVC 2145-8xx. This code version will be maintained
until a notice of withdrawal is given.


2.15.2 SAN Volume Controller 7.6 supported hardware list, device driver, and
firmware levels
As with all new software versions, 7.6 offers functional enhancements and new hardware that
can be integrated into existing or new SVC systems and interoperability enhancements or
new support for servers, SAN switches, and disk subsystems. For the most current
information, see this website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1003658

2.15.3 SAN Volume Controller 7.6 new features


The SVC 7.6.0 includes the following new features:
򐂰 Encryption on SAN Volume Controller 2145-DH8
The SAN Volume Controller 2145-DH8 system provides optional encryption of data at rest,
which protects against the potential exposure of sensitive user data and user metadata
that is stored on discarded, lost, or stolen storage devices. Encryption of system data and
system metadata is not required, so system data and metadata are not encrypted. To use
encryption on the system, you must purchase an encryption license.
򐂰 Maximum number of iSCSI host sessions increased
The system now supports a maximum of 512 hosts per I/O Group for all supported node types,
up to 1024 sessions per system iSCSI target from different iSCSI hosts, and a maximum of
2048 sessions per I/O Group from iSCSI hosts, with up to four sessions from one iSCSI host
to each system iSCSI target.
򐂰 IP quorum configuration
In some stretched configurations or HyperSwap configurations, IP quorum applications
can be used at a third site to house quorum devices. An IP-based quorum application can
act as the quorum device for the third site without any Fibre Channel connectivity;
Java applications run on hosts at the third site. However, there are strict requirements
on the IP network and some disadvantages with using IP quorum applications. Unlike
quorum disks, all IP quorum applications must be reconfigured and redeployed to hosts
when certain aspects of the system configuration change.
򐂰 Changing the login message
You can create or change a message that is displayed when users log on to the system.
The message is displayed in the management GUI, command-line interface, or service
assistant before users complete the logon.
򐂰 Secure communications and SSL certificates
During system setup, an initial certificate is created for secure connections between
web browsers and the system. Based on the security requirements for your system, you can create either
a new self-signed certificate or install a signed certificate that is created by a third-party
certificate authority. Self-signed certificates are generated automatically by the system
and encrypt communications between the browser and the system. Self-signed
certificates can generate web browser security warnings and might not comply with
organizational security guidelines.
򐂰 Setting the maximum replication delay latency between systems
You can use the chsystem command to set the maximum replication delay for the system.
This value ensures that a single slow write operation does not affect the entire primary
site (see the command sketch after this list).


򐂰 There have been changes in the way that you can work with volumes. These changes
include:
– Changing the system topology
– Enabling volume protection
– Disabling volume protection
– Adding a copy to a volume
– Adding a copy to a basic volume
– Deleting a copy from a volume
– Deleting a copy from a basic volume
– Deleting a volume
– Deleting an image mode volume
– Adding a copy to a HyperSwap volume or a stretched volume
– Deleting a copy from a HyperSwap volume or a stretched volume
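
As an example of the replication delay setting that is mentioned in this list, the following sketch assumes the -maxreplicationdelay parameter, with a value in seconds, as documented for V7.6; verify the exact syntax against the V7.6 command-line reference before use:

chsystem -maxreplicationdelay 30   (limit the effect of a single slow remote copy write to 30 seconds)
lssystem                           (confirm the new value in the system properties output)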

2.16 Useful SAN Volume Controller web links


For more information about the SVC-related topics, see the following websites:
򐂰 SVC support:
http://www.ibm.com/support/docview.wss?uid=ssg1S1005253
򐂰 SVC home page:
http://www.ibm.com/systems/storage/software/virtualization/svc/
򐂰 SVC Interoperability page:
http://www.ibm.com/support/docview.wss?uid=ssg1S1003658
򐂰 SVC online documentation:
http://www.ibm.com/support/knowledgecenter/api/redirect/svc/ic/index.jsp
򐂰 IBM Redbooks publications about the SVC:
http://www.redbooks.ibm.com/cgi-bin/searchsite.cgi?query=SVC
򐂰 IBM developerWorks® is the premier web-based technical resource and professional
network for IT practitioners:
http://www.ibm.com/developerworks/


Chapter 3. Planning and configuration


In this chapter, we describe the required steps when you plan the installation of an IBM SAN
Volume Controller (SVC) 2145-CF8, 2145-CG8, or 2145-DH8 in your storage network.
For more detailed information about the 2145-DH8, see IBM SAN Volume Controller 2145-DH8
Introduction and Implementation, SG24-8229:
http://www.redbooks.ibm.com/abstracts/sg248229.html

We also review the implications for your storage network and describe performance
considerations.

This chapter includes the following topics:


򐂰 General planning rules
򐂰 Physical planning
򐂰 Logical planning
򐂰 Performance considerations


3.1 General planning rules

Important: At the time of writing, the statements provided in this book are correct, but they
might change. Always verify any statements that are made in this book with the SAN
Volume Controller supported hardware list, device driver, firmware, and recommended
software levels that are available at this website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1003658

To achieve the most benefit from the SVC, pre-installation planning must include several
important steps. These steps ensure that the SVC provides the best possible performance,
reliability, and ease of management for your application needs. The correct configuration also
helps minimize downtime by avoiding changes to the SVC and the storage area network
(SAN) environment to meet future growth needs.

Note: For more information, see the Pre-sale Technical and Delivery Assessment (TDA)
document that is available at this website:
https://www.ibm.com/partnerworld/wps/servlet/mem/ContentHandler/salib_SA572/lc=
en_ALL_ZZ

A pre-sale TDA needs to be conducted before a final proposal is submitted to a client and
must be conducted before an order is placed to ensure that the configuration is correct and
the solution that is proposed is valid. The preinstall System Assurance Planning Review
(SAPR) Package includes various files that are used in preparation for an SVC preinstall
TDA. A preinstall TDA needs to be conducted shortly after the order is placed and before
the equipment arrives at the client’s location to ensure that the client’s site is ready for the
delivery and responsibilities are documented regarding the client and IBM or the IBM
Business Partner roles in the implementation.

Tip: For more information about the topics that are described, see the following resources:
򐂰 IBM System Storage SAN Volume Controller: Planning Guide, GA32-0551
򐂰 SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521, which
is available at this website:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open

Complete the following tasks when you are planning for the SVC:
򐂰 Collect and document the number of hosts (application servers) to attach to the SVC, the
traffic profile activity (read or write, sequential, or random), and the performance
requirements, which are I/O per second (IOPS).
򐂰 Collect and document the following storage requirements and capacities:
– The total back-end storage that is present in the environment to be provisioned on the
SVC
– The total back-end new storage to be provisioned on the SVC
– The required virtual storage capacity that is used as a fully managed virtual disk
(volume) and used as a Space-Efficient (SE) volume
– The required storage capacity for local mirror copy (volume mirroring)
– The required storage capacity for point-in-time copy (FlashCopy)


– The required storage capacity for remote copy (Metro Mirror and Global Mirror)
– The required storage capacity for compressed volumes
– The required storage capacity for encrypted volumes
– Per host: Storage capacity, the host logical unit number (LUN) quantity, and sizes
򐂰 Define the local and remote SAN fabrics and systems in the cluster if a remote copy or a
secondary site is needed.
򐂰 Define the number of systems in the cluster and the number of pairs of nodes (1 - 4) for
each system. Each pair of nodes (an I/O Group) is the container for volumes. The
number of necessary I/O Groups depends on the overall performance requirements.
򐂰 Design the SAN according to the requirement for high availability and best performance.
Consider the total number of ports and the bandwidth that is needed between the host and
the SVC, the SVC and the disk subsystem, between the SVC nodes, and for the
inter-switch link (ISL) between the local and remote fabric.

Note: Check and carefully count the required ports for extended links. Especially in a
stretched cluster environment, you might need many of the higher-cost longwave
gigabit interface converters (GBICs).

򐂰 Design the iSCSI network according to the requirements for high availability and best
performance. Consider the total number of ports and bandwidth that is needed between
the host and the SVC.
򐂰 Determine the SVC service IP address.
򐂰 Determine the IP addresses for the SVC system and for the host that connects through
iSCSI.
򐂰 Determine the IP addresses for IP replication.
򐂰 Define a naming convention for the SVC nodes, host, and storage subsystem.
򐂰 Define the managed disks (MDisks) in the disk subsystem.
򐂰 Define the storage pools. The storage pools depend on the disk subsystem that is in place
and the data migration requirements.
򐂰 Plan the logical configuration of the volume within the I/O Groups and the storage pools to
optimize the I/O load between the hosts and the SVC.
򐂰 Plan for the physical location of the equipment in the rack.

The SVC planning can be categorized into the following types:


򐂰 Physical planning
򐂰 Logical planning

We describe these planning types in the following sections.

3.2 Physical planning


You must consider several key factors when you are planning the physical site of an SVC
installation. The physical site must have the following characteristics:
򐂰 Power, cooling, and location requirements are met for the SVC and the uninterruptible
power supply (UPS) units. (UPS units apply only to the 2145-CF8 and 2145-CG8.)
򐂰 The SVC nodes and their uninterruptible power supply units must be in the same rack.


򐂰 You must plan for two separate power sources if you have a redundant ac-power switch,
which is available as an optional feature.
򐂰 An SVC node (2145-CF8 or 2145-CG8) is one Electronic Industries Association (EIA) unit
high; an SVC 2145-DH8 node is two EIA units high.
򐂰 Other hardware devices can be in the same SVC rack, such as IBM Storwize V7000, IBM
Storwize V3700, SAN switches, an Ethernet switch, and other devices.
򐂰 You must consider the maximum power rating of the rack; do not exceed it. For more
information about the power requirements, see this website:
https://ibm.biz/BdHnKF

3.2.1 Preparing your uninterruptible power supply unit environment


Ensure that your physical site meets the installation requirements for the uninterruptible
power supply unit.

2145 UPS-1U
The 2145 Uninterruptible Power Supply-1U (2145 UPS-1U) is one EIA unit high, is included,
and can operate on the following node types only:
򐂰 SVC 2145-CF8
򐂰 SVC 2145-CG8

When the 2145 UPS-1U is configured, the voltage that is supplied to it must be 200 - 240 V,
single phase.

Tip: The 2145 UPS-1U has an integrated circuit breaker and does not require external
protection.

2145-DH8
This model includes two integrated AC power supplies and battery units, replacing the
uninterruptible power supply feature that was required on the previous generation storage
engine models.

The functionality of uninterruptible power supply units is provided by internal batteries, which
are delivered with each node’s hardware. The batteries ensure that, in the event of a disruption
or external power loss, there is sufficient internal power to keep a node operational while the
physical memory is copied to a file in the file system on the node’s internal disk drive, so that
the contents can be recovered when external power is restored.

After dumping the content of the non-volatile part of the memory to disk, the SVC node shuts
down.

For more information about the 2145-DH8 model, see IBM SAN Volume Controller
2145-DH8 Introduction and Implementation, SG24-8229, which is available at this website:
http://www.redbooks.ibm.com/abstracts/SG248229.html


3.2.2 Physical rules


The SVC must be installed in pairs to provide high availability, and each node in the clustered
system must be connected to a separate uninterruptible power supply unit.

Be aware of the following considerations:


򐂰 Each SVC node of an I/O Group must be connected to a separate uninterruptible power
supply unit.
򐂰 Each uninterruptible power supply unit pair that supports a pair of nodes must be
connected to a separate power domain (if possible) to reduce the chances of input power
loss.
򐂰 For safety reasons, the uninterruptible power supply units must be installed in the lowest
positions in the rack. If necessary, move lighter units toward the top of the rack to free up
space for the uninterruptible power supply units.
򐂰 The power and serial connection from a node must be connected to the same
uninterruptible power supply unit; otherwise, the node cannot start.
򐂰 SVC hardware model 2145-CF8 must be connected to a 5115 uninterruptible power
supply unit. It cannot start with a 5125 uninterruptible power supply unit. The 2145-CG8
uses the 8115 uninterruptible power supply unit.

Important: Do not share the SVC uninterruptible power supply unit with any other devices.

Figure 3-1 on page 88 shows a power cabling example for the 2145-CG8.


Figure 3-1 2145-CG8 power cabling

You must follow the guidelines for Fibre Channel (FC) cable connections. Occasionally, the
introduction of a new SVC hardware model brings internal changes. One example is the
worldwide port name (WWPN) to port mapping. The 2145-CF8 and 2145-CG8
have the same mapping.

Figure 3-2 on page 89 shows the WWPN mapping.


Figure 3-2 WWPN mapping

Figure 3-3 on page 90 shows a sample layout in which nodes within each I/O Group are split
between separate racks. This layout protects against power failures and other events that
affect only a single rack.


Figure 3-3 Sample rack layout

3.2.3 Cable connections


Create a cable connection table or documentation that follows your environment’s
documentation procedure to track all of the following connections that are required for the
setup:
򐂰 Nodes
򐂰 Uninterruptible power supply unit
򐂰 Ethernet
򐂰 iSCSI or Fibre Channel over Ethernet (FCoE) connections
򐂰 FC ports

3.3 Logical planning


For logical planning, we describe the following topics:
򐂰 Management IP addressing plan
򐂰 SAN zoning and SAN connections
򐂰 iSCSI IP addressing plan
򐂰 IP Mirroring
򐂰 Back-end storage subsystem configuration
򐂰 SAN Volume Controller clustered system configuration


򐂰 Stretched cluster configuration


򐂰 Storage pool configuration
򐂰 Volume configuration
򐂰 Host mapping (LUN masking)
򐂰 Advanced Copy Services
򐂰 SAN boot support
򐂰 Data migration from a non-virtualized storage subsystem
򐂰 SVC configuration backup procedure

3.3.1 Management IP addressing plan


Starting with V6.1, the system management is performed through an embedded GUI running
on the nodes. A separate console, such as the traditional SVC Hardware Management
Console (HMC) or IBM System Storage Productivity Center (SSPC), is no longer required to
access the management interface. To access the management GUI, you direct a web browser
to the system management IP address.

The SVC 2145-DH8 node introduces a new feature called a Technician port. Ethernet port 4
is allocated as the Technician service port, and is marked with a T. All initial configuration for
each node is performed via the Technician port. The port broadcasts a DHCP service so that
a notebook or computer is automatically assigned an IP address on connection to the port.

After the cluster configuration has been completed, the Technician port automatically routes
the connected user directly to the service GUI.

Note: The default IP address for the Technician port on a 2145-DH8 Node is 192.168.0.1.
If the Technician port is connected to a switch, it is disabled and an error is logged.

Each SVC node requires one Ethernet cable to connect it to an Ethernet switch or hub. The
cable must be connected to port 1. A 10/100/1000 Mb Ethernet connection is required for
each cable. Both Internet Protocol Version 4 (IPv4) and Internet Protocol Version 6 (IPv6) are
supported.

Note: For increased redundancy, an optional second Ethernet connection is supported for
each SVC node. This cable is connected to Ethernet port 2.

To ensure system failover operations, Ethernet port 1 on all nodes must be connected to the
same set of subnets. If used, Ethernet port 2 on all nodes must also be connected to the
same set of subnets. However, the subnets for Ethernet port 1 do not have to be the same as
Ethernet port 2.

Each SVC cluster has a Cluster Management IP address as well as a Service IP address for
each node in the cluster. See Example 3-1 for details.

Example 3-1 Management IP address sample


management IP addr. 10.11.12.120
node 1 service IP addr. 10.11.12.121
node 2 service IP addr. 10.11.12.122
node 3 service IP addr. 10.11.12.123
node 4 service IP addr. 10.11.12.124

Each node in an SVC clustered system needs to have at least one Ethernet connection.
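
Using the addresses from Example 3-1, a minimal CLI sketch for setting these addresses follows. The chsystemip command sets the system management address; a node service address is typically set from that node’s service assistant CLI with satask chserviceip. The gateway and netmask values are placeholders for your network:

chsystemip -clusterip 10.11.12.120 -gw 10.11.12.1 -mask 255.255.255.0 -port 1
satask chserviceip -serviceip 10.11.12.121 -gw 10.11.12.1 -mask 255.255.255.0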


Figure 3-4 on page 92 shows the IP configuration possibilities.

Figure 3-4 IP configuration possibilities

Support for iSCSI provides one additional IPv4 address and one additional IPv6 address for each
Ethernet port on every node. These IP addresses are independent of the clustered system
configuration IP addresses.

The SVC Model 2145-CG8 optionally can have a serial-attached SCSI (SAS) adapter with
external ports disabled or a high-speed 10 Gbps Ethernet adapter with two ports. Two more
IPv4 or IPv6 addresses are required in both cases.

When accessing the SVC through the GUI or Secure Shell (SSH), choose one of the
available IP addresses to which to connect. No automatic failover capability is available. If one
network is down, use an IP address on the alternate network. Clients might be able to use the
intelligence in domain name servers (DNSs) to provide partial failover.

3.3.2 SAN zoning and SAN connections


SAN storage systems that use the SVC can be configured with a minimum of two (and up to
eight) SVC nodes, which are arranged in an SVC clustered system. These SVC nodes are
attached to the SAN fabric, with disk subsystems and host systems.


The SAN fabric is zoned to allow the SVC nodes to “see” each other and the disk
subsystems, and to allow the hosts to “see” the SVC nodes. The hosts cannot directly see or
operate LUNs on the disk subsystems that are assigned to the SVC system. The SVC nodes
within an SVC system must see each other and all of the storage that is assigned to the SVC
system.

The zoning capabilities of the SAN switch are used to create three distinct zones. The
software version 7.6 supports 2 Gbps, 4 Gbps, 8 Gbps, and 16 Gbps FC fabric, depending on
the hardware platform and on the switch where the SVC is connected. In an environment
where you have a fabric with multiple-speed switches, the preferred practice is to connect the
SVC and the disk subsystem to the switch operating at the highest speed.

All SVC nodes in the SVC clustered system are connected to the same SANs, and they
present volumes to the hosts. These volumes are created from storage pools that are
composed of MDisks that are presented by the disk subsystems.

The fabric must have three distinct zones:


򐂰 SVC cluster system zones: Create up to two zones per fabric, and include a single port per
node, which is designated for intracluster traffic. No more than four ports per node should
be allocated to intracluster traffic.
򐂰 Host zones: Create an SVC host zone for each server accessing storage from the SVC
system.
򐂰 Storage zone: Create one SVC storage zone for each storage subsystem that is virtualized
by the SVC.

Port designation recommendations


The port to local node communication is used for mirroring write cache as well as metadata
exchange between nodes and is critical to the stable operation of the cluster. The 2145-DH8
nodes with their 8-port and 12-port configurations provide an opportunity to isolate the port to
local node traffic from other cluster traffic on dedicated ports thereby providing a level of
protection against misbehaving devices and workloads that could compromise the
performance of the shared ports.

Additionally, there is benefit in isolating remote replication traffic on dedicated ports as well to
ensure that problems impacting the cluster-to-cluster interconnect do not adversely impact
ports on the primary cluster and thereby impact the performance of workloads running on the
primary cluster.

IBM recommends the following port designations for isolating both port to local and port to
remote node traffic, as shown in Table 3-1 on page 94.

Important: Be careful when you perform the zoning so that inter-node ports are not used
for Host/Storage traffic in the 8-port and 12-port configurations.


Table 3-1 Port designation recommendations for isolating traffic


Card/port  SAN fabric  Four-port nodes          Eight-port nodes             Twelve-port nodes            Twelve-port nodes (write data
                                                                                                          rate greater than 3 GBps
                                                                                                          per I/O Group)
C1P1       A           Host/Storage/Inter-node  Host/Storage                 Host/Storage                 Inter-node
C1P2       B           Host/Storage/Inter-node  Host/Storage                 Host/Storage                 Inter-node
C1P3       A           Host/Storage/Inter-node  Host/Storage                 Host/Storage                 Host/Storage
C1P4       B           Host/Storage/Inter-node  Host/Storage                 Host/Storage                 Host/Storage
C2P1       A           -                        Inter-node                   Inter-node                   Inter-node
C2P2       B           -                        Inter-node                   Inter-node                   Inter-node
C2P3       A           -                        Replication or Host/Storage  Host/Storage                 Host/Storage
C2P4       B           -                        Replication or Host/Storage  Host/Storage                 Host/Storage
C5P1       A           -                        -                            Host/Storage                 Host/Storage
C5P2       B           -                        -                            Host/Storage                 Host/Storage
C5P3       A           -                        -                            Replication or Host/Storage  Replication or Host/Storage
C5P4       B           -                        -                            Replication or Host/Storage  Replication or Host/Storage
localfcportmask        1111                     110000                       110000                       110011

This recommendation provides the desired traffic isolation while also simplifying migration
from existing configurations with only four ports, or later migration from 8-port or 12-port
configurations to configurations with additional ports. More complicated port mapping
configurations that spread the port traffic across the adapters are supported and can be
considered, but these approaches do not appreciably increase availability of the solution
since the mean time between failures (MTBF) of the adapter is not significantly less than that
of the non-redundant node components.

Note that while it is true that alternate port mappings that spread traffic across HBAs may
allow adapters to come back online following a failure, they will not prevent a node from going
offline temporarily to reboot and attempt to isolate the failed adapter and then rejoin the
cluster. Our recommendation takes all these considerations into account with a view that the
greater complexity may lead to migration challenges in the future and the simpler approach is
best.
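
The localfcportmask row of Table 3-1 can be applied with the chsystem command. The following sketch assumes the -localfcportmask and -partnerfcportmask parameters of recent software levels; the mask is read with port 1 as the rightmost bit, so 110000 limits inter-node traffic to ports 5 and 6 on an eight-port node. Verify the exact syntax against the command-line reference before use:

chsystem -localfcportmask 110000       (inter-node communication uses ports 5 and 6 only)
chsystem -partnerfcportmask 11000000   (optional: remote replication uses ports 7 and 8 only)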

Zoning considerations for Metro Mirror and Global Mirror


Ensure that you are familiar with the constraints on zoning a switch to support Metro Mirror
and Global Mirror partnerships. SAN configurations that use intracluster Metro Mirror and
Global Mirror relationships do not require more switch zones.


The SAN configurations that use intercluster Metro Mirror and Global Mirror relationships
require the following additional switch zoning considerations:
򐂰 For each node in a clustered system, zone exactly two FC ports to exactly two FC ports
from each node in the partner clustered system.
򐂰 If dual-redundant ISLs are available, evenly split the two ports from each node between
the two ISLs. That is, exactly one port from each node must be zoned across each ISL.
򐂰 Local clustered system zoning continues to follow the standard requirement for all ports on
all nodes in a clustered system to be zoned to one another.

Important: Failure to follow these configuration rules exposes the clustered system to
an unwanted condition that can result in the loss of host access to volumes.

If an intercluster link becomes severely and abruptly overloaded, the local FC fabric can
become congested so that no FC ports on the local SVC nodes can perform local
intracluster heartbeat communication. This situation can, in turn, result in the nodes
experiencing lease expiry events. In a lease expiry event, a node reboots to attempt to
reestablish communication with the other nodes in the clustered system. If the leases
for all nodes expire simultaneously, a loss of host access to volumes can occur during
the reboot events.

򐂰 Configure your SAN so that FC traffic can be passed between the two clustered systems.
To configure the SAN this way, you can connect the clustered systems to the same SAN,
merge the SANs, or use routing technologies.
򐂰 Configure zoning to allow all of the nodes in the local fabric to communicate with all of the
nodes in the remote fabric.

򐂰 Optional: Modify the zoning so that the hosts that are visible to the local clustered system
can recognize the remote clustered system. This capability allows a host to have access to
data in the local and remote clustered systems.
򐂰 Verify that clustered system A cannot recognize any of the back-end storage that is owned
by clustered system B. A clustered system cannot access logical units (LUs) that a host or
another clustered system can also access.

Figure 3-5 on page 96 shows the SVC zoning topology.


Figure 3-5 SVC zoning topology

Figure 3-6 on page 96 shows an example of the SVC, host, and storage subsystem
connections.

Figure 3-6 Example of SVC, host, and storage subsystem connections

You must also observe the following guidelines:


򐂰 LUNs (MDisks) must have exclusive access to a single SVC clustered system and cannot
be shared between other SVC clustered systems or hosts.


򐂰 A storage controller can present LUNs to the SVC (as MDisks) and to other hosts in the
SAN. However, in this case, it is better to avoid having the SVC and the hosts share the same storage ports.
򐂰 Mixed port speeds are not permitted for intracluster communication. All node ports within
a clustered system must be running at the same speed.
򐂰 ISLs are not to be used for intracluster node communication or node-to-storage controller
access.
򐂰 The switch configuration in an SVC fabric must comply with the switch manufacturer’s
configuration rules, which can impose restrictions on the switch configuration. For
example, a switch manufacturer might limit the number of supported switches in a SAN.
Operation outside of the switch manufacturer’s rules is not supported.
򐂰 Host bus adapters (HBAs) in dissimilar hosts or dissimilar HBAs in the same host must be
in separate zones. For example, IBM AIX and Microsoft hosts must be in separate zones.
In this case, “dissimilar” means that the hosts are running separate operating systems or
are using separate hardware platforms. Therefore, various levels of the same operating
system are regarded as similar. This requirement is a SAN interoperability issue, rather
than an SVC requirement.
򐂰 Host zones are to contain only one initiator (HBA) each, and as many SVC node ports as
you need, depending on the high availability and performance that you want from your
configuration.

Important: Be aware of the following considerations:


򐂰 The use of ISLs for intracluster node communication can negatively affect the
availability of the system because of the high dependency on the quality of these
links to maintain heartbeat and other system management services. Therefore, we
strongly advise that you use them only as part of an interim configuration to facilitate
SAN migrations, and not as part of the designed solution.
򐂰 The use of ISLs for SVC node to storage controller access can lead to port
congestion, which can negatively affect the performance and resiliency of the SAN.
Therefore, we strongly advise that you use them only as part of an interim
configuration to facilitate SAN migrations, and not as part of the designed solution.
With SVC 6.3 and later, you can use ISLs between nodes, but they must be in a dedicated
SAN, virtual SAN (Cisco technology), or logical SAN (Brocade technology).
򐂰 The use of mixed port speeds for intercluster communication can lead to port
congestion, which can negatively affect the performance and resiliency of the SAN;
therefore, it is not supported.

You can use the lsfabric command to generate a report that displays the connectivity
between nodes and other controllers and hosts. This report is helpful for diagnosing SAN
problems.
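
For example, the following lsfabric invocations narrow the report to a single host or storage controller. The host and controller names are placeholders for objects that are defined on your system:

lsfabric -delim :                (full report of all logins, colon-delimited)
lsfabric -host AIX_HOST01        (logins between the SVC ports and one host object)
lsfabric -controller DS8870_01   (logins between the SVC ports and one storage controller)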

Zoning examples
Figure 3-7 shows an SVC clustered system zoning example.


Figure 3-7 SVC clustered system zoning example (cluster zones with all SVC ports: SVC-Cluster Zone 1, fabric domain ID and port 11,0 - 11,1 - 11,2 - 11,3; SVC-Cluster Zone 2, 12,0 - 12,1 - 12,2 - 12,3)

Figure 3-8 shows a storage subsystem zoning example.

Figure 3-8 Storage subsystem zoning example (all storage ports and all SVC ports: SVC-Storwize V7000 Zone 1, fabric domain ID and port 11,0 - 11,1 - 11,2 - 11,3 - 11,8; SVC-EMC Zone 1, 11,0 - 11,1 - 11,2 - 11,3 - 11,9; the corresponding Zone 2 definitions use fabric 12)


Figure 3-9 shows a host zoning example.

Figure 3-9 Host zoning example (one IBM Power System port and one SVC port per SVC node: SVC-Power System Zone P1, fabric domain ID and port 21,1 - 11,0 - 11,1; Zone P2, 22,1 - 12,2 - 12,3)

3.3.3 iSCSI IP addressing plan


Since version 6.3, the SVC supports host access through iSCSI (as an alternative to FC). The
following considerations apply:
򐂰 The SVC uses the built-in Ethernet ports for iSCSI traffic. If the optional 10 Gbps Ethernet
feature is installed, you can connect host systems through the two 10 Gbps Ethernet ports
per node.
򐂰 All node types that can run SVC 6.1 or later can use the iSCSI feature.
򐂰 The SVC supports the Challenge Handshake Authentication Protocol (CHAP)
authentication method for iSCSI.
򐂰 iSCSI IP addresses can fail over to the partner node in the I/O Group if a node fails. This
design reduces the need for multipathing support in the iSCSI host.
򐂰 iSCSI IP addresses can be configured for one or more nodes.
򐂰 iSCSI Simple Name Server (iSNS) addresses can be configured in the SVC.
򐂰 The iSCSI qualified name (IQN) for an SVC node is
iqn.1986-03.com.ibm:2145.<cluster_name>.<node_name>. Because the IQN contains the
clustered system name and the node name, it is important not to change these names
after iSCSI is deployed.
򐂰 Each node can be given an iSCSI alias, as an alternative to the IQN.
򐂰 The IQN of the host to an SVC host object is added in the same way that you add FC
worldwide port names (WWPNs).
򐂰 Host objects can have WWPNs and IQNs.


򐂰 Standard iSCSI host connection procedures can be used to discover and configure the
SVC as an iSCSI target.
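
Building on these considerations, a minimal CLI sketch for an iSCSI host attachment follows. The addresses, host name, IQN, and CHAP secret are placeholders; the cfgportip, mkhost, and chhost syntax is shown as documented for recent software levels (a VLAN ID can also be set on the port IP address where required):

cfgportip -node 1 -ip 10.11.13.31 -mask 255.255.255.0 -gw 10.11.13.1 1   (iSCSI IP address on port 1 of node 1)
mkhost -name AppServer01 -iscsiname iqn.1994-05.com.redhat:appserver01   (host object that is created from the initiator IQN)
chhost -chapsecret mysecret AppServer01                                  (optional CHAP secret for that host)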

Next, we describe several ways in which you can configure the SVC 6.1 or later.

Figure 3-10 shows the use of IPv4 management and iSCSI addresses in the same subnet.

Figure 3-10 Use of IPv4 addresses

You can set up the equivalent configuration with only IPv6 addresses.

Figure 3-11 shows the use of IPv4 management and iSCSI addresses in two separate
subnets.

Figure 3-11 IPv4 address plan with two subnets


Figure 3-12 shows the use of redundant networks.

Figure 3-12 Redundant networks

Figure 3-13 on page 101 shows the use of a redundant network and a third subnet for
management.

Figure 3-13 Redundant network with third subnet for management


Figure 3-14 shows the use of a redundant network for iSCSI data and management.

Figure 3-14 Redundant network for iSCSI and management

Be aware of the following considerations:


򐂰 All of these examples are valid for IPv4 and IPv6 addresses.
򐂰 Using IPv4 addresses on one port and IPv6 addresses on the other port is valid.
򐂰 Having separate subnet configurations for IPv4 and IPv6 addresses is valid.

Storwize 7.4 support for VLAN for iSCSI and IP replication


When a VLAN ID is configured for the IP addresses that are used for either iSCSI host
attachment or IP replication on the SVC, the appropriate VLAN settings on the Ethernet network
and servers must also be configured correctly to avoid connectivity issues. After the
VLANs are configured, any changes to the VLAN settings disrupt iSCSI and IP replication traffic
to and from the SVC.

Important: During the individual VLAN configuration for each IP address, if the VLAN
settings for the local and failover ports on two nodes of an I/O Group differ, the switches
must be configured so that failover VLANs are configured on the local switch ports, too.
Then, the failover of IP addresses from the failing node to the surviving node succeeds. If
this configuration is not done, paths are lost to the SVC storage during a node failure.

3.3.4 IP Mirroring
One of the most important new functions of version 7.2 in the Storwize family is IP replication,
which enables the use of lower-cost Ethernet connections for remote mirroring. The capability
is available as a licensable option (Metro or Global Mirror) on all Storwize family systems. The
new function is transparent to servers and applications in the same way that traditional
FC-based mirroring is transparent. All remote mirroring modes (Metro Mirror, Global Mirror,
and Global Mirror with changed volumes) are supported.


The configuration of the system is straightforward. Storwize family systems normally can find
each other in the network and can be selected from the GUI. IP replication includes
Bridgeworks SANSlide network optimization technology and is available at no additional
charge. Remote mirror is a licensable option but the price does not change with IP replication.
Existing remote mirror users have access to the new function at no additional charge.

IP connections that are used for replication can have a long latency (the time to transmit a
signal from one end to the other), which can be caused by distance or by many hops between
switches and other appliances in the network. Traditional replication solutions transmit data,
wait for a response, and then transmit more data, which can result in network usage as low as
20% (based on IBM measurements). This situation gets worse as the latency gets longer.

Bridgeworks SANSlide technology that is integrated with the IBM Storwize family requires no
separate appliances; therefore, no other costs and configuration are necessary. It uses AI
technology to transmit multiple data streams in parallel and adjusts automatically to changing
network environments and workloads.

Because SANSlide does not use compression, it is independent of application or data type.
Most importantly, SANSlide improves network bandwidth usage up to 3x, so clients might be
able to deploy a less costly network infrastructure or use faster data transfer to speed
replication cycles, improve remote data currency, and recover more quickly.

Note: The limiting factor of the distance is the round-trip time. The maximum supported
round-trip time between sites is 80 milliseconds (ms) for a 1 Gbps link. For a 10 Gbps link,
the maximum supported round-trip time between sites is 10 ms.

Key features of IP mirroring


IBM offers the new, enhanced function of IP-based Remote Copy services, which are
primarily targeted to small and midrange environments, where clients typically cannot afford
the cost of FC-based replication between sites.

IP-based replication offers the following new features:


򐂰 Remote Copy modes support Metro Mirror, Global Mirror, and Global Mirror with Change
Volumes.
򐂰 All platforms that support Remote Copy are supported.
򐂰 The configuration uses automatic path configuration through the discovery of a remote
cluster. You can configure any Ethernet port (10 Gbps or 1 Gbps) for replication by using
Remote Copy port groups.
򐂰 Dedicated Ethernet ports for replication.
򐂰 CHAP-based authentication is supported.
򐂰 Licensing is the same as the existing Remote Copy.
򐂰 High availability features auto-failover support across redundant links.
򐂰 Performance is based on a vendor-supplied IP connectivity solution that is experienced
in handling low-bandwidth, high-latency, long-distance IP links. Support is for an 80 ms
round-trip time on a 1 Gbps link.

Figure 3-15 shows the schematic way to connect two sites through IP mirroring.


Figure 3-15 IP mirroring

Figure 3-16 on page 104 and Figure 3-17 on page 105 show configuration possibilities for
connecting two sites through IP mirroring. Figure 3-16 on page 104 shows the configuration
with single links.

Figure 3-16 Single link configuration

The administrator must configure at least one port on each site to use with the link.
Configuring more than one port means that replication continues, even if a node fails.
Figure 3-17 shows a redundant IP configuration with two links.


Figure 3-17 Two links with active and failover ports

As shown in Figure 3-17, the following replication group setup for dual redundant links is
used:
򐂰 Replication Group 1: Four IP addresses, each on a different node (green)
򐂰 Replication Group 2: Four IP addresses, each on a different node (orange)

At any time, each Ethernet port has one of the following IP replication configurations and states:


򐂰 Possible user configuration of each Ethernet port:
– Not used for IP replication (default)
– Used for IP replication, link 1
– Used for IP replication, link 2
򐂰 IP replication status for each Ethernet port:
– Not used for IP replication
– Active (solid box)
– Standby (outline box)

Figure 3-18 shows the configuration of an IP partnership. You can obtain the requirements to
set up an IP partnership at:
https://ibm.biz/BdHnKF


Figure 3-18 IP partnership configuration

Terminology for IP replication


This section lists the following terminology for IP replication:
򐂰 Discovery
This term refers to the process by which two SVC clusters exchange information about
their IP address configuration. For IP-based partnerships, only IP addresses that are
configured for Remote Copy are discovered. For example, the first discovery occurs and
then the user runs the mkippartnership command in CLI. Subsequent discoveries might
occur as a result of user activities (configuration changes) or as a result of hardware
failures (for example, node failure and port failure).
򐂰 Remote Copy port group
This term indicates the settings of local and remote Ethernet ports (on local and partnered
SVC systems) that can access each other through a long-distance IP link. For a
successful partnership to be established between two SVC clusters, at least two ports
must be in the same Remote Copy port group, one from the local cluster and one from the
partner cluster. More than two ports from the same system in a group can exist to allow for
TCP connection failover in a local and partnered node or port failure.
򐂰 Remote Copy port group ID
This numeric value indicates to which group the port belongs. Zero is used to indicate that
a port is not used for Remote Copy. For two SVC clusters to form a partnership, both


clusters must have at least one port that is configured with the same group ID and they
must be accessible to each other.
򐂰 RC login
An RC login is a bidirectional full-duplex data path between two SVC clusters that are Remote
Copy partners. This path is between an IP address pair, one local and one remote. An RC
login carries Remote Copy traffic that consists of host WRITEs, background copy traffic
during initial synchronization within a relationship, periodic updates in Global Mirror with changed
volumes relationships, and so on.
򐂰 Path configuration
Path configuration is the act of setting up RC logins between two partnered SVC systems.
The selection of IP addresses to be used for RC logins is based on certain rules that are
specified in the Preferred practices section. Most of those rules are driven by constraints
and requirements from a vendor-supplied link management library. A simple algorithm is
run by each SVC system to arrive at the list of RC logins that must be established. Local
and remote SVC clusters are expected to arrive at the same IP address pairs for RC login
creation, even though they run the algorithm independently.

Preferred practices
The following preferred practices are suggested for IP replication:
򐂰 Configure two physical links between sites for redundancy.
򐂰 Configure Ethernet ports that are dedicated for Remote Copy. Do not allow iSCSI host
attach for these Ethernet ports.
򐂰 Configure remote copy port group IDs on both nodes for each physical link to survive node
failover.
򐂰 A minimum of four nodes are required for dual redundant links to work across node
failures. If a node failure occurs on a two-node system, one link is lost.
򐂰 Do not zone in two SVC systems over FC/FCOE when an IP partnership exists.
򐂰 Configure CHAP secret-based authentication, if required.
򐂰 The maximum supported round-trip time between sites is 80 ms for a 1 Gbps link.
򐂰 The maximum supported round-trip time between sites is 10 ms for a 10 Gbps link.
򐂰 For IP partnerships, the recommended method of copying is Global Mirror with changed
volumes because of the performance benefits. Also, Global Mirror and Metro Mirror might
be more susceptible to the loss of synchronization.
򐂰 The amount of inter-cluster heartbeat traffic is 1 Mbps per link.
򐂰 The minimum bandwidth requirement for the inter-cluster link is 10 Mbps. However, this
bandwidth scales up with the amount of host I/O that you choose to use.
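
Following these practices, a minimal CLI sketch for an IP partnership is shown next. The addresses and values are placeholders, and the sketch assumes the -remotecopy parameter of cfgportip for assigning a Remote Copy port group and the mkippartnership parameters as documented for recent levels, so verify the syntax before use:

cfgportip -node node1 -ip 10.20.30.41 -mask 255.255.255.0 -gw 10.20.30.1 -remotecopy 1 1   (place port 1 of node1 into Remote Copy port group 1)
mkippartnership -type ipv4 -clusterip 10.40.50.60 -linkbandwidthmbits 100 -backgroundcopyrate 50   (create the IP partnership; repeat on the partner system)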

For more information, see IBM SAN Volume Controller and Storwize Family Native IP
Replication, REDP-5103:
http://www.redbooks.ibm.com/abstracts/redp5103.html

3.3.5 Back-end storage subsystem configuration


Back-end storage subsystem configuration planning must be applied to all storage controllers
that are attached to the SVC.


For more information about supported storage subsystems, see this website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1003658

Apply the following general guidelines for back-end storage subsystem configuration
planning:
򐂰 In the SAN, storage controllers that are used by the SVC clustered system must be
connected through SAN switches. Direct connection between the SVC and the storage
controller is not supported.
򐂰 Multiple connections are allowed from the redundant controllers in the disk subsystem to
improve data bandwidth performance. It is not mandatory to have a connection from each
redundant controller in the disk subsystem to each counterpart SAN, but it is a preferred
practice. Therefore, canister A in a Storwize V3700 subsystem can be connected to SAN
A only, or to SAN A and SAN B. Also, canister B in the Storwize V3700 subsystem can be
connected to SAN B only, or to SAN B and SAN A.
򐂰 Stretched System configurations are supported by certain rules and configuration
guidelines. For more information, see 3.3.7, “Stretched cluster configuration” on page 111.
򐂰 All SVC nodes in an SVC clustered system must be able to see the same set of ports from
each storage subsystem controller. Violating this guideline causes the paths to become
degraded. This degradation can occur as a result of applying inappropriate zoning and
LUN masking. This guideline has important implications for a disk subsystem, such as
DS3000, Storwize V3700, Storwize V5000, or Storwize V7000, which imposes exclusivity
rules as to which HBA WWPNs a storage partition can be mapped.

MDisks within storage pools: Software version 6.1 and later provide for better load
distribution across paths within storage pools.

In previous code levels, the path to MDisk assignment was made in a round-robin fashion
across all MDisks that are configured to the clustered system. With that method, no
attention is paid to how MDisks within storage pools are distributed across paths.
Therefore, it is possible and even likely that certain paths are more heavily loaded than
others.

This condition is more likely to occur with a smaller number of MDisks in the storage pool.
Starting with software version 6.1, the code contains logic that considers MDisks within
storage pools. Therefore, the code more effectively distributes their active paths that are
based on the storage controller ports that are available.

The detectmdisk command must be run following the creation or modification (addition
or removal of MDisks) of storage pools for paths to be redistributed.

If you do not have a storage subsystem that supports the SVC round-robin algorithm, ensure
that the number of MDisks per storage pool is a multiple of the number of storage ports that
are available. This approach ensures sufficient bandwidth to the storage controller and an
even balance across storage controller ports.
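
As noted previously, run detectmdisk after a storage pool is created or modified. A short sketch of that sequence follows; the MDisk and pool names are placeholders for your environment:

detectmdisk                                   (rescan the Fibre Channel network for new or changed MDisks)
lsmdisk -filtervalue mode=unmanaged           (list discovered MDisks that are not yet in a pool)
addmdisk -mdisk mdisk10:mdisk11 Pool_DS8870   (add the MDisks to an existing storage pool)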

In general, configure disk subsystems as though no SVC exists. However, we suggest the
following specific guidelines:
򐂰 Disk drives:
– Exercise caution with large disk drives so that you do not have too few spindles to
handle the load.
– RAID 5 is suggested for most workloads.


򐂰 Array sizes:
– An array size of 8+P or 4+P is suggested for the IBM DS4000® and DS5000™
families, if possible.
– Use the DS4000 segment size of 128 KB or larger to help the sequential performance.
– Upgrade to EXP810 drawers, if possible.
– Create LUN sizes that are equal to the RAID array and rank size. If the array size is
greater than 2 TB and the disk subsystem does not support MDisks that are larger than
2 TB, create the minimum number of LUNs of equal size.
– An array size of 7+P is suggested for the V3700, V5000, and V7000 Storwize families.
– When you are adding more disks to a subsystem, consider adding the new MDisks to
existing storage pools versus creating more small storage pools.
– Auto balancing was introduced in version 7.3 to restripe volume extents evenly across
all MDisks in the storage pools.
– A maximum of 1,024 worldwide node names (WWNNs) are available per cluster:
• EMC DMX/SYMM, all HDS, and SUN/HP HDS clones use one WWNN per port.
Each WWNN appears as a separate controller to the SVC.
• IBM, EMC CLARiiON, and HP use one WWNN per subsystem. Each WWNN
appears as a single controller with multiple ports/worldwide port names (WWPNs),
for a maximum of 16 ports/WWPNs per WWNN.
򐂰 DS8000 that uses four or eight of the four-port HA cards:
– Use ports 1 and 3 or ports 2 and 4 on each card. (It does not matter for 8 Gb cards.)
This setup provides eight or 16 ports for the SVC use.
– Use eight ports minimum, up to 40 ranks.
– Use 16 ports for 40 or more ranks. Sixteen is the maximum number of ports.
򐂰 DS4000/DS5000 (EMC CLARiiON/CX):
– Both systems have the preferred controller architecture, and the SVC supports this
configuration.
– Use a minimum of four ports, and preferably eight or more ports, up to a maximum of
16 ports, so that more ports equate to more concurrent I/O that is driven by the SVC.
– Support is available for mapping controller A ports to Fabric A and controller B ports to
Fabric B or cross-connecting ports to both fabrics from both controllers. The
cross-connecting approach is preferred to avoid Automatic Volume Transfer
(AVT)/Trespass from occurring if a fabric fails or all paths to a fabric fail.
򐂰 DS3400 subsystems: Use a minimum of four ports.
򐂰 Storwize family: Use a minimum of four ports, and preferably eight ports.
򐂰 IBM XIV requirements and restrictions:
– The use of XIV extended functions, including snaps, thin provisioning, synchronous
replication (native copy services), and LUN expansion of LUNs that are presented to
the SVC, is not supported.
– A maximum of 511 LUNs from one XIV system can be mapped to an SVC clustered
system.
򐂰 Full 15 module XIV recommendations (161 TB usable):
– Use two interface host ports from each of the six interface modules.


– Use ports 1 and 3 from each interface module and zone these 12 ports with all SVC
node ports.
– Create 48 LUNs of equal size, each of which is a multiple of 17 GB. This configuration
creates approximately 1632 GB if you are using the entire full frame XIV with the SVC.
– Map LUNs to the SVC as 48 MDisks and add all of them to the single XIV storage pool
so that the SVC drives the I/O to four MDisks and LUNs for each of the 12 XIV FC
ports. This design provides a good queue depth on the SVC to drive XIV adequately.
򐂰 Six module XIV recommendations (55 TB usable):
– Use two interface host ports from each of the two active interface modules.
– Use ports 1 and 3 from interface modules 4 and 5. (Interface module 6 is inactive.)
Also, zone these four ports with all SVC node ports.
– Create 16 LUNs of equal size, each of which is a multiple of 17 GB. This configuration
creates approximately 1632 GB if you are using the entire XIV with the SVC.
– Map the LUNs to the SVC as 16 MDisks and add all of them to the single XIV storage
pool so that the SVC drives I/O to four MDisks and LUNs per each of the four XIV FC
ports. This design provides a good queue depth on the SVC to drive the XIV
adequately.
򐂰 Nine module XIV recommendations (87 TB usable):
– Use two interface host ports from each of the four active interface modules.
– Use ports 1 and 3 from interface modules 4, 5, 7, and 8. (Interface modules 6 and 9 are
inactive.) Also, zone these eight ports with all of the SVC node ports.
– Create 26 LUNs of equal size, each of which is a multiple of 17 GB. This design
creates approximately 1632 GB if you are using the entire XIV with the SVC.
– Map the LUNs to the SVC as 26 MDisks and map all of them to the single XIV storage
pool so that the SVC drives I/O to three MDisks and LUNs on each of the six ports and
four MDisks and LUNs on the other two XIV FC ports. This design provides a useful
queue depth on the SVC to drive the XIV adequately.
򐂰 Configure XIV host connectivity for the SVC clustered system:
– Create one host definition on the XIV, and include all SVC node WWPNs.
– You can create clustered system host definitions (one per I/O Group), but the
preceding method is easier to configure.
– Map all LUNs to all SVC node WWPNs.

3.3.6 SAN Volume Controller clustered system configuration


To ensure high availability in SVC installations, consider the following guidelines when you
design a SAN with the SVC:
򐂰 All nodes in a clustered system must be in the same LAN segment because the nodes in
the clustered system must assume the same clustered system or service IP address.
Ensure that the network configuration allows any of the nodes to use these IP addresses.
If you plan to use the second Ethernet port on each node, two LAN segments can be
used. However, port 1 of every node must be in one LAN segment, and port 2 of every
node must be in the other LAN segment.
򐂰 To maintain application uptime in the unlikely event of an individual SVC node failing, SVC
nodes are always deployed in pairs (I/O Groups). If a node fails or is removed from the
configuration, the remaining node operates in a degraded mode, but it is still a valid


configuration. The remaining node operates in write-through mode, which means that the data is written directly to the disk subsystem (the cache is disabled for write I/O).
򐂰 The uninterruptible power supply (for CF8 and CG8) unit must be in the same rack as the
node to which it provides power, and each uninterruptible power supply unit can have only
one connected node.
򐂰 The FC SAN connections between the SVC node and the switches are optical fiber. These
connections can run at 2 Gbps, 4 Gbps, 8 Gbps, or 16 Gbps (DH8), depending on your
SVC and switch hardware. The 2145-CG8, 2145-CF8 and 2145-DH8 SVC nodes
auto-negotiate the connection speed with the switch.
򐂰 The SVC node ports must be connected to the FC fabric only. Direct connections between
the SVC and the host, or the disk subsystem, are unsupported.
򐂰 Two SVC clustered systems cannot have access to the same LUNs within a disk
subsystem. Configuring zoning so that two SVC clustered systems have access to the
same LUNs (MDisks) will likely result in data corruption.
򐂰 The two nodes within an I/O Group can be co-located (within the same set of racks) or can
be in separate racks and separate rooms. For more information, see 3.3.7, “Stretched
cluster configuration” on page 111.
򐂰 The SVC uses three MDisks as quorum disks for the clustered system. A preferred
practice for redundancy is to have each quorum disk in a separate storage subsystem,
where possible. The current locations of the quorum disks can be displayed by using the
lsquorum command and relocated by using the chquorum command.
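For example, the following CLI sketch lists the current quorum disk assignments and moves quorum index 2 to another MDisk (the MDisk name and quorum index are placeholders for your environment; choose MDisks on separate storage subsystems where possible):

lsquorum
chquorum -mdisk mdisk10 2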

3.3.7 Stretched cluster configuration


You can implement a stretched cluster configuration (historically referred to as a split I/O Group) as a high-availability option.

Software version 7.6 supports the following stretched cluster configurations:


򐂰 No ISL configuration:
– Passive wave division multiplexing (WDM) devices can be used between both sites.
– No ISLs can be located between the SVC nodes (which is similar to SVC
5.1-supported configurations).
– The supported distance is up to 40 km (24.8 miles).
Figure 3-19 on page 112 shows an example of a stretched cluster configuration with no
ISL configuration.


Figure 3-19 Stretched cluster with no ISL configuration

򐂰 ISL configuration:
– ISLs are located between the SVC nodes.
– Maximum distance is similar to Metro Mirror distances.
– Physical requirements are similar to Metro Mirror requirements.
– ISL distance extension with active and passive WDM devices is supported.
Figure 3-20 shows an example of a stretched cluster with ISL configuration.


Figure 3-20 Stretched cluster with ISL configuration

Use the stretched cluster configuration with the volume mirroring option to realize an
availability benefit. After volume mirroring is configured, use the
lscontrollerdependentvdisks command to validate that the volume mirrors are on separate
storage controllers. Having the volume mirrors on separate storage controllers ensures that
access to the volumes is maintained if a storage controller is lost.

When you are implementing a stretched cluster configuration, two of the three quorum disks
can be co-located in the same room where the SVC nodes are located. However, the active
quorum disk must be in a separate room. This configuration ensures that a quorum disk is
always available, even after a single-site failure.

For stretched cluster configuration, configure the SVC in the following manner:
򐂰 Site 1: Half of the SVC clustered system nodes and one quorum disk candidate
򐂰 Site 2: Half of the SVC clustered system nodes and one quorum disk candidate
򐂰 Site 3: Active quorum disk


When a stretched cluster configuration is used with volume mirroring, this configuration
provides a high-availability solution that is tolerant of a failure at a single site. If the primary or
secondary site fails, the remaining sites can continue performing I/O operations.

For more information about stretched cluster configurations, see Appendix C, “Stretched
Cluster” on page 939.

For more information, see IBM SAN Volume Controller Enhanced Stretched Cluster with
VMware, SG24-8211:
http://www.redbooks.ibm.com/abstracts/sg248211.html

3.3.8 Storage pool configuration


The storage pool is at the center of the many-to-many relationship between the MDisks and
the volumes. It acts as a container from which managed disks contribute chunks of physical
disk capacity that are known as extents and from which volumes are created.

MDisks in the SVC are LUNs that are assigned from the underlying disk subsystems to the
SVC and can be managed or unmanaged. A managed MDisk is an MDisk that is assigned to
a storage pool. Consider the following points:
򐂰 A storage pool is a collection of MDisks. An MDisk can be contained only within a single
storage pool.
򐂰 Since software version 7.5, the SVC supports up to 1,024 storage pools.
򐂰 The number of volumes that can be allocated from a storage pool is unlimited; however, an I/O Group is limited to 2,048 volumes, and the clustered system limit is 8,192 volumes.
򐂰 Volumes are associated with a single storage pool, except in cases where a volume is
being migrated or mirrored between storage pools.

The SVC supports extent sizes of 16 MiB, 32 MiB, 64 MiB, 128 MiB, 256 MiB, 512 MiB, 1024
MiB, 2048 MiB, 4096 MiB, and 8192 MiB. Support for extent sizes 4096 MiB and 8192 MiB
was added in SVC 6.1. The extent size is a property of the storage pool and is set when the
storage pool is created. All MDisks in the storage pool have the same extent size, and all
volumes that are allocated from the storage pool have the same extent size. The extent size
of a storage pool cannot be changed. If you want another extent size, the storage pool must
be deleted and a new storage pool configured.
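Because the extent size is fixed at creation time, specify it explicitly when the storage pool is created. The following CLI sketch creates a pool with a 256 MiB extent size (the pool and MDisk names are placeholders for your environment):

mkmdiskgrp -name Pool_DS8K_1 -ext 256 -mdisk mdisk0:mdisk1:mdisk2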

Table 3-2 on page 115 lists all of the available extent sizes in an SVC.


Table 3-2   Extent size and total storage capacities per system
Extent size (MiB)    Total storage capacity manageable per system
16                   64 TiB
32                   128 TiB
64                   256 TiB
128                  512 TiB
256                  1 PiB
512                  2 PiB
1024                 4 PiB
2048                 8 PiB
4096                 16 PiB
8192                 32 PiB

Consider the following information about storage pools:


򐂰 Maximum clustered system capacity is related to the extent size:
– 16 MiB extent = 64 TiB, and the capacity doubles for each increment in extent size; for example, 32 MiB = 128 TiB. We strongly advise 128 MiB - 256 MiB. The Storage Performance Council (SPC) benchmarks used a 256 MiB extent.
– Pick the extent size and then use that size for all storage pools.
– You cannot migrate volumes between storage pools with separate extent sizes.
However, you can use volume mirroring to create copies between storage pools with
separate extent sizes.
򐂰 Storage pool reliability, availability, and serviceability (RAS) considerations:
– It might make sense to create multiple storage pools; however, ensure that a host receives volumes that are built from only one storage pool. If that storage pool goes offline, it then affects only a subset of the hosts that are attached to the SVC. However, this approach can result in a large number of storage pools, which can approach the SVC limits.
– If you do not isolate hosts to storage pools, create one large storage pool. Creating one
large storage pool assumes that the physical disks are all the same size, speed, and
RAID level.
– The storage pool goes offline if an MDisk is unavailable, even if the MDisk has no data
on it. Do not put MDisks into a storage pool until they are needed.
– Create at least one separate storage pool for all the image mode volumes.
– Ensure that all host persistent reserves are removed from LUNs that are given to the SVC.
򐂰 Storage pool performance considerations
It might make sense to create multiple storage pools if you are attempting to isolate workloads to separate disk spindles. Storage pools with too few MDisks can overload individual MDisks, so a higher spindle count in a storage pool is better for meeting workload requirements.


򐂰 The storage pool and SVC cache relationship


The SVC employs cache partitioning to limit the potentially negative effect that a poorly
performing storage controller can have on the clustered system. The partition allocation
size is based on the number of configured storage pools. This design protects against
individual controller overloading and failures from using write cache and degrading the
performance of the other storage pools in the clustered system. For more information, see
2.11.3, “Cache” on page 57.
Table 3-3 shows the limit of the write-cache data.
Table 3-3   Limit of the cache data
Number of storage pools    Upper limit
1                          100%
2                          66%
3                          40%
4                          30%
5 or more                  25%

Consider the rule that no single partition can occupy more than its upper limit of cache
capacity with write data. These limits are upper limits, and they are the points at which the
SVC cache starts to limit incoming I/O rates for volumes that are created from the storage
pool. If a particular partition reaches this upper limit, the net result is the same as a global
cache resource that is full. That is, the host writes are serviced on a one-out-one-in basis
because the cache destages writes to the back-end disks.
However, only writes that are targeted at the full partition are limited. All I/O that is
destined for other (non-limited) storage pools continues as normal. The read I/O requests
for the limited partition also continue normally. However, because the SVC is destaging
write data at a rate that is greater than the controller can sustain (otherwise, the partition
does not reach the upper limit), read response times are also likely affected.

3.3.9 Volume configuration


An individual volume is a member of one storage pool and one I/O Group. When a volume is created, you first identify the performance, availability, and cost requirements for that volume, and then select the storage pool accordingly.

The storage pool defines which MDisks that are provided by the disk subsystem make up the
volume. The I/O Group, which is made up of two nodes, defines which SVC nodes provide I/O
access to the volume.

Important: No fixed relationship exists between I/O Groups and storage pools.

Perform volume allocation that is based on the following considerations:


򐂰 Optimize performance between the hosts and the SVC by attempting to distribute volumes
evenly across available I/O Groups and nodes within the clustered system.
򐂰 Reach the level of performance, reliability, and capacity that you require by using the
storage pool that corresponds to your needs. (You can access any storage pool from any
node.) That is, choose the storage pool that fulfills the demands for your volumes
regarding performance, reliability, and capacity.


򐂰 I/O Group considerations:


– When you create a volume, it is associated with one node of an I/O Group. By default, the volume is assigned to the next node by using a round-robin algorithm. Instead of relying on the round-robin algorithm, you can specify a preferred access node, which is the node through which I/O is sent to the volume. A volume is defined for an I/O Group.
– Even if you have eight paths for each volume, all I/O traffic flows only toward one node (the preferred node). Therefore, only four paths are used by the IBM Subsystem Device Driver (SDD). The other four paths are used only in the event of a failure of the preferred node or when a concurrent code upgrade is running.
򐂰 Creating image mode volumes:
– Use image mode volumes when an MDisk already has data on it from a non-virtualized
disk subsystem. When an image mode volume is created, it directly corresponds to the
MDisk from which it is created. Therefore, volume logical block address (LBA) x =
MDisk LBA x. The capacity of image mode volumes defaults to the capacity of the
supplied MDisk.
– When you create an image mode volume, the MDisk must be in unmanaged mode, which means that it does not belong to any storage pool. (A capacity of 0 is not allowed.) Image mode volumes can be created in sizes with a minimum granularity of 512 bytes, and they must be at least one block (512 bytes) in size.
򐂰 Creating managed mode volumes with sequential or striped policy
When a managed mode volume with sequential or striped policy is created, you must use
a number of MDisks that contain extents that are free and equal to or greater than the size
of the volume that you want to create. Sufficient extents might be available on the MDisk,
but a contiguous block that is large enough to satisfy the request might not be available.
򐂰 Thin-provisioned volume considerations:
– When the thin-provisioned volume is created, you must understand the utilization patterns of the applications or group users that access this volume. You must also consider the actual size of the data, the rate of data creation, and the rate at which existing data is modified or deleted.
– The following operating modes for thin-provisioned volumes are available:
• Autoexpand volumes allocate storage from a storage pool on demand with minimal
required user intervention. However, a misbehaving application can cause a volume
to expand until it uses all of the storage in a storage pool.
• Non-autoexpand volumes have a fixed amount of assigned storage. In this case, the
user must monitor the volume and assign more capacity when required. A
misbehaving application can cause only the volume that it uses to fill up.
– Depending on the initial size for the real capacity, the grain size and a warning level can
be set. If a volume goes offline through a lack of available physical storage for
autoexpand or because a volume that is marked as non-autoexpand was not expanded
in time, a danger exists of data being left in the cache until storage is made available.
This situation is not a data integrity or data loss issue, but you must not rely on the SVC
cache as a backup storage mechanism.


Important: Keep a warning level on the used capacity so that it provides adequate
time to respond and provision more physical capacity.

Warnings must not be ignored by an administrator.

Use the autoexpand feature of the thin-provisioned volumes.

– When you create a thin-provisioned volume, you can choose the grain size for
allocating space in 32 KiB, 64 KiB, 128 KiB, or 256 KiB chunks. The grain size that you
select affects the maximum virtual capacity for the thin-provisioned volume. The default
grain size is 256 KiB, which is the recommended option. If you select 32 KiB for the
grain size, the volume size cannot exceed 260,000 GiB.
The grain size cannot be changed after the thin-provisioned volume is created.
Generally, smaller grain sizes save space but require more metadata access, which
can adversely affect performance. If you are not going to use the thin-provisioned
volume as a FlashCopy source or target volume, use 256 KiB to maximize
performance. If you are going to use the thin-provisioned volume as a FlashCopy
source or target volume, specify the same grain size for the volume and for the
FlashCopy function.
– Thin-provisioned volumes require more I/Os because of directory accesses. For truly
random workloads with 70% read and 30% write, a thin-provisioned volume requires
approximately one directory I/O for every user I/O.
– The directory is two-way write-back-cached (as with the SVC fastwrite cache), so
certain applications perform better.
– Thin-provisioned volumes require more processor processing, so the performance per
I/O Group can also be reduced.
– A thin-provisioned volume feature that is called zero detect provides clients with the ability to reclaim unused allocated disk space (zeros) when they are converting a fully allocated volume to a thin-provisioned volume by using volume mirroring. An illustrative command-line sketch of creating a thin-provisioned volume follows this list.
򐂰 Volume mirroring guidelines:
– Create or identify two separate storage pools to allocate space for your mirrored
volume.
– Allocate the storage pools that contain the mirrors from separate storage controllers.
– If possible, use a storage pool with MDisks that share characteristics. Otherwise, the
volume performance can be affected by the poorer performing MDisk.
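The following CLI sketch illustrates the thin-provisioning options that are described in this list. It creates a 100 GiB thin-provisioned volume with 2% initial real capacity, autoexpand enabled, a 256 KiB grain size, and an 80% warning threshold (the pool name, I/O Group, and sizes are placeholders; verify the parameters against your code level):

mkvdisk -mdiskgrp Pool_DS8K_1 -iogrp 0 -size 100 -unit gb -rsize 2% -autoexpand -grainsize 256 -warning 80%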

3.3.10 Host mapping (LUN masking)


For the host and application servers, the following guidelines apply:
򐂰 Each SVC node presents a volume to the SAN through four ports. Because two nodes are
used in normal operations to provide redundant paths to the same storage, a host with two
HBAs can see multiple paths to each LUN that is presented by the SVC. Use zoning to
limit the pathing from a minimum of two paths to the available maximum of eight paths,
depending on the kind of high availability and performance that you want in your
configuration.


It is best to use zoning to limit the pathing to four paths. The hosts must run a multipathing
device driver to limit the pathing back to a single device. The multipathing driver that is
supported and delivered by SVC is the IBM Subsystem Device Driver (SDD). Native
multipath I/O (MPIO) drivers on selected hosts are supported. For more operating
system-specific information about MPIO support, see this website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1001707
The current version of the Subsystem Device Driver Device Specific Module (SDDDSM) for IBM products is available at this website:
http://www.ibm.com/support/docview.wss?uid=ssg1S4000350
򐂰 The number of paths to a volume from a host to the nodes in the I/O Group that owns the
volume must not exceed eight, even if eight is not the maximum number of paths that are
supported by the multipath driver (SDD supports up to 32). To restrict the number of paths
to a host volume, the fabrics must be zoned so that each host FC port is zoned to no more
than two ports from each SVC node in the I/O Group that owns the volume.

Multipathing: We suggest the following number of paths per volume (n+1 redundancy):
򐂰 With two HBA ports, zone the HBA ports to the SVC ports 1 - 2 for a total of four
paths.
򐂰 With four HBA ports, zone the HBA ports to the SVC ports 1 - 1 for a total of four
paths.
򐂰 Optional (n+2 redundancy): With four HBA ports, zone the HBA ports to the SVC
ports 1 - 2 for a total of eight paths.

We use the term HBA port to describe the SCSI Initiator. We use the term SAN Volume
Controller port to describe the SCSI target.
The maximum number of host paths per volume must not exceed eight.

򐂰 If a host has multiple HBA ports, each port must be zoned to a separate set of SVC ports
to maximize high availability and performance.
򐂰 To configure more than 256 hosts, you must configure the host to I/O Group mappings on the SVC. Each I/O Group can contain a maximum of 512 hosts, so it is possible to create 2,048 host objects on an eight-node SVC clustered system. Volumes can be mapped only to a host that is associated with the I/O Group to which the volume belongs.
򐂰 Port masking
You can use a port mask to control the node target ports that a host can access, which
satisfies the following requirements:
– As part of a security policy to limit the set of WWPNs that can obtain access to any
volumes through an SVC port
– As part of a scheme to limit the number of logins with mapped volumes visible to a host
multipathing driver, such as SDD, and therefore limit the number of host objects that
are configured without resorting to switch zoning
򐂰 The port mask is an optional parameter of the mkhost and chhost commands. The port mask is four binary bits. Valid mask values range from 0000 (no ports enabled) to 1111 (all ports enabled). For example, a mask of 0011 enables port 1 and port 2. The default value is 1111 (all ports enabled). An illustrative command sketch follows this list.
򐂰 The SVC supports connection to the Cisco MDS family and Brocade family. For more
information, see this website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1001707
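For example, the following CLI sketch creates a host object with a port mask that enables only SVC node ports 1 and 2, and later opens all four ports (the host name and WWPN are placeholders for your environment):

mkhost -name AIX_host1 -hbawwpn 10000000C9609A20 -mask 0011
chhost -mask 1111 AIX_host1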


3.3.11 Advanced Copy Services


The SVC offers the following Advanced Copy Services:
򐂰 FlashCopy
򐂰 Metro Mirror
򐂰 Global Mirror

Layers: Software version 6.3 introduced a property that is called layer for the clustered system. This property is used when a copy services partnership exists between an SVC and an IBM Storwize V7000. There are two layers: replication and storage. All SVC clustered systems are in the replication layer, and this setting cannot be changed. By default, the IBM Storwize V7000 is in the storage layer, which must be changed by using the chsystem CLI command before you use the system in any copy services partnership with the SVC.

Figure 3-21 shows an example of the replication and storage layers.

Figure 3-21 Replication and storage layer
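For example, on the Storwize V7000 side of the partnership, the layer can be changed with a command similar to the following sketch. Run it before any partnership with the SVC is created (the exact behavior depends on the installed code level):

chsystem -layer replication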

Apply the guidelines that are described next when you use the SVC Advanced Copy Services.

FlashCopy guidelines
Consider the following FlashCopy guidelines:
򐂰 Identify each application that must have a FlashCopy function implemented for its volume.
򐂰 FlashCopy is a relationship between volumes. Those volumes can belong to separate
storage pools and separate storage subsystems.
򐂰 You can use FlashCopy for backup purposes by interacting with the Tivoli Storage
Manager Agent, or for cloning a particular environment.
򐂰 Define which FlashCopy best fits your requirements: No copy, Full copy, Thin-Provisioned,
or Incremental.
򐂰 Define which FlashCopy rate best fits your requirement in terms of the performance and
the amount of time to complete the FlashCopy. Table 3-4 shows the relationship of the
background copy rate value to the attempted number of grains to be split per second.


򐂰 Define the grain size that you want to use. A grain is the unit of data that is represented by
a single bit in the FlashCopy bitmap table. Larger grain sizes can cause a longer
FlashCopy elapsed time and a higher space usage in the FlashCopy target volume.
Smaller grain sizes can have the opposite effect. The data structure and the source data
location can modify those effects.
In an actual environment, check the results of your FlashCopy procedure in terms of the data that is copied at every run and the elapsed time, and compare them with the expected SVC FlashCopy results. If necessary, adapt the grains per second and the copy rate parameter to fit your environment’s requirements. See Table 3-4. An illustrative command sketch follows the table.

Table 3-4   Grain splits per second
User percentage    Data copied per second    256 KiB grain per second    64 KiB grain per second
1 - 10             128 KiB                   0.5                         2
11 - 20            256 KiB                   1                           4
21 - 30            512 KiB                   2                           8
31 - 40            1 MiB                     4                           16
41 - 50            2 MiB                     8                           32
51 - 60            4 MiB                     16                          64
61 - 70            8 MiB                     32                          128
71 - 80            16 MiB                    64                          256
81 - 90            32 MiB                    128                         512
91 - 100           64 MiB                    256                         1024
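The following CLI sketch creates and starts a FlashCopy mapping with a background copy rate of 50 (2 MiB per second in Table 3-4) and a 64 KiB grain size (the volume and mapping names are placeholders; incremental or thin-provisioned setups use additional parameters):

mkfcmap -name DB_fcmap01 -source DB_vol01 -target DB_vol01_fc -copyrate 50 -grainsize 64
startfcmap -prep DB_fcmap01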

Metro Mirror and Global Mirror guidelines


SVC supports intracluster and intercluster Metro Mirror and Global Mirror. From the
intracluster point of view, any single clustered system is a reasonable candidate for a Metro
Mirror or Global Mirror operation. However, the intercluster operation needs at least two
clustered systems that are separated by a number of moderately high-bandwidth links.

Figure 3-22 on page 122 shows a schematic of Metro Mirror connections.


Figure 3-22 Metro Mirror connections

Figure 3-22 contains two redundant fabrics. Part of each fabric exists at the local clustered
system and at the remote clustered system. No direct connection exists between the two
fabrics.

Technologies for extending the distance between two SVC clustered systems can be broadly
divided into the following categories:
򐂰 FC extenders
򐂰 SAN multiprotocol routers

Because of the more complex interactions that are involved, IBM explicitly tests products of
this class for interoperability with the SVC. For more information about the current list of
supported SAN routers in the supported hardware list, see this website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1001707

IBM has tested a number of FC extenders and SAN router technologies with the SVC. You
must plan, install, and test FC extenders and SAN router technologies with the SVC so that
the following requirements are met:
򐂰 The round-trip latency between sites must not exceed 80 ms, or 250 ms for certain hardware and software combinations (see Table 3-5). For Global Mirror, the 250 ms limit allows a distance between the primary and secondary sites of up to 25,000 km (15,534 miles).
Table 3-5 shows the maximum supported round-trip latency for different hardware types and installed software versions.

Table 3-5   Maximum supported round-trip latency between sites
Software version      System node hardware                                      Max. round-trip latency
7.3.0 and earlier     All                                                       80 ms
7.4.0 and later       SAN Volume Controller 2145-CG8 with second four-port      250 ms
                      Fibre Channel adapter installed;
                      SAN Volume Controller 2145-DH8
7.4.0 and later       All other models                                          80 ms

򐂰 If you use remote mirroring between systems with 80-250 ms round-trip latency, you must
meet the following additional requirements:
– All nodes used for replication must be of a supported model (see Table 3-5 on
page 122).
– There must be a Fibre Channel partnership between systems, not an IP partnership.
– All systems in the partnership must have a minimum software level of 7.4.
– The rcbuffersize setting must be set to 512 MB on each system in the partnership.
This can be accomplished by running the chsystem -rcbuffersize 512 command on
each system (note that changing this setting is disruptive to Metro Mirror and Global
Mirror operations. Use this command only before partnerships have been created
between systems or when all partnerships with the system have been stopped.).
– Two Fibre Channel ports on each node that will be used for replication must be
dedicated for replication traffic. This can be achieved using SAN zoning and port
masking.
– SAN zoning should be applied to provide separate intra-system zones for each
local-remote IO group pair that will be used for replication.
򐂰 The latency of long-distance links depends on the technology that is used to implement
them. A point-to-point dark fiber-based link often provides a round-trip latency of 1 ms per
100 km (62.13 miles) or better. Other technologies provide longer round-trip latencies,
which affect the maximum supported distance.
򐂰 The configuration must be tested with the expected peak workloads.
򐂰 When Metro Mirror or Global Mirror is used, a certain amount of bandwidth is required for
the IBM SVC intercluster heartbeat traffic. The amount of traffic depends on how many
nodes are in each of the two clustered systems.
Table 3-6 shows the amount of heartbeat traffic, in megabits per second, that is generated
by various sizes of clustered systems.

Table 3-6   Intersystem heartbeat traffic in Mbps
SVC System 1    SVC System 2
                2 nodes    4 nodes    6 nodes    8 nodes
2 nodes         5          6          6          6
4 nodes         6          10         11         12
6 nodes         6          11         16         17
8 nodes         6          12         17         21

򐂰 These numbers represent the total traffic between the two clustered systems when no I/O
is taking place to mirrored volumes. Half of the data is sent by one clustered system, and
half of the data is sent by the other clustered system. The traffic is divided evenly over all
available intercluster links. Therefore, if you have two redundant links, half of this traffic is
sent over each link during fault-free operation.


򐂰 The bandwidth between sites must, at the least, be sized to meet the peak workload
requirements, in addition to maintaining the maximum latency that was specified
previously. You must evaluate the peak workload requirement by considering the average
write workload over a period of 1 minute or less, plus the required synchronization copy
bandwidth.
With no active synchronization copies and no write I/O disks in Metro Mirror or Global
Mirror relationships, the SVC protocols operate with the bandwidth that is indicated in
Table 3-6 on page 123. However, you can determine the true bandwidth that is required for
the link only by considering the peak write bandwidth to volumes that are participating in
Metro Mirror or Global Mirror relationships and adding it to the peak synchronization copy
bandwidth.
򐂰 If the link between the sites is configured with redundancy so that it can tolerate single
failures, you must size the link so that the bandwidth and latency statements continue to
be true, even during single failure conditions.
򐂰 The configuration is tested to simulate the failure of the primary site (to test the recovery
capabilities and procedures), including eventual failback to the primary site from the
secondary.
򐂰 The configuration must be tested to confirm that any failover mechanisms in the
intercluster links interoperate satisfactorily with the SVC.
򐂰 The FC extender must be treated as a normal link.
򐂰 The bandwidth and latency measurements must be made by, or on behalf of, the client.
They are not part of the standard installation of the SVC by IBM. Make these
measurements during installation and record the measurements. Testing must be
repeated following any significant changes to the equipment that provides the intercluster
link.

Global Mirror guidelines


Consider the following guidelines:
򐂰 When SVC Global Mirror is used, all components in the SAN must sustain the workload
that is generated by application hosts and the Global Mirror background copy workload.
Otherwise, Global Mirror can automatically stop your relationships to protect your
application hosts from increased response times. Therefore, it is important to configure
each component correctly.
򐂰 Use a SAN performance monitoring tool, such as IBM Tivoli Storage Productivity Center,
which allows you to continuously monitor the SAN components for error conditions and
performance problems. This tool helps you detect potential issues before they affect your
disaster recovery solution.
򐂰 The long-distance link between the two clustered systems must be provisioned to allow for
the peak application write workload to the Global Mirror source volumes and the
client-defined level of background copy.
򐂰 The peak application write workload ideally must be determined by analyzing the SVC
performance statistics.
򐂰 Statistics must be gathered over a typical application I/O workload cycle, which might be
days, weeks, or months, depending on the environment on which the SVC is used. These
statistics must be used to find the peak write workload that the link must support.
򐂰 Characteristics of the link can change with use. For example, latency can increase as the
link is used to carry an increased bandwidth. The user must be aware of the link’s behavior
in such situations and ensure that the link remains within the specified limits. If the
characteristics are not known, testing must be performed to gain confidence of the link’s
suitability.


򐂰 Users of Global Mirror must consider how to optimize the performance of the
long-distance link, which depends on the technology that is used to implement the link. For
example, when you are transmitting FC traffic over an IP link, you might want to enable
jumbo frames to improve efficiency.
򐂰 The use of Global Mirror and Metro Mirror between the same two clustered systems is
supported.
򐂰 The use of Global Mirror and Metro Mirror between the SVC clustered system and IBM
Storwize systems with a minimum code level of 6.3 is supported.
򐂰 Support exists for cache-disabled volumes to participate in a Global Mirror relationship;
however, this design is not a preferred practice.
򐂰 The gmlinktolerance parameter of the remote copy partnership must be set to an
appropriate value. The default value is 300 seconds (5 minutes), which is appropriate for
most clients.
򐂰 During SAN maintenance, the user must choose to reduce the application I/O workload
during maintenance (so that the degraded SAN components can manage the new
workload); disable the gmlinktolerance feature; increase the gmlinktolerance value
(which means that application hosts might see extended response times from Global
Mirror volumes); or stop the Global Mirror relationships.
If the gmlinktolerance value is increased for maintenance lasting x minutes, it must be reset to the normal value only x minutes after the end of the maintenance activity.
If gmlinktolerance is disabled during maintenance, it must be re-enabled after the
maintenance is complete.
򐂰 Starting with software version 7.6 you can use the chsystem command to set the maximum
replication delay for the system. This value ensures that the single slow write operation
does not affect the entire primary site.
You can configure this delay for all relationships or consistency groups that exist on the
system by using the maxreplicationdelay parameter on the chsystem command. This
value indicates the amount of time (in seconds) that a host write operation can be
outstanding before replication is stopped for a relationship on the system. If the system
detects a delay in replication on a particular relationship or consistency group, only that
relationship or consistency group is stopped. In systems with a large number of relationships, a single slow relationship can cause delays for the remaining relationships on the system. This setting isolates the relationship with potential delays so that you can investigate the cause of these issues. When the maximum replication delay is reached, the system generates an error message that indicates the relationship that exceeded the maximum replication delay. An illustrative command sketch follows this list.
򐂰 Global Mirror volumes must have their preferred nodes evenly distributed between the
nodes of the clustered systems. Each volume within an I/O Group has a preferred node
property that can be used to balance the I/O load between nodes in that group.
Figure 3-23 on page 126 shows the correct relationship between volumes in a Metro
Mirror or Global Mirror solution.


Figure 3-23 Correct volume relationship

򐂰 The capabilities of the storage controllers at the secondary clustered system must be provisioned to allow for the peak application workload to the Global Mirror volumes, plus the client-defined level of background copy, plus any other I/O being performed at the secondary site. This provisioning is required to maximize the amount of I/O that applications can perform to Global Mirror volumes; otherwise, the performance of applications at the primary clustered system can be limited by the performance of the back-end storage controllers at the secondary clustered system.
򐂰 A complete review must be performed before Serial Advanced Technology Attachment
(SATA) for Metro Mirror or Global Mirror secondary volumes is used. The use of a slower
disk subsystem for the secondary volumes for high-performance primary volumes can
mean that the SVC cache might not be able to buffer all the writes, and flushing cache
writes to SATA might slow I/O at the production site.
򐂰 Storage controllers must be configured to support the Global Mirror workload that is
required of them. You can dedicate storage controllers to only Global Mirror volumes,
configure the controller to ensure sufficient quality of service (QoS) for the disks that are
used by Global Mirror, or ensure that physical disks are not shared between Global Mirror
volumes and other I/O, for example, by not splitting an individual RAID array.
򐂰 MDisks within a Global Mirror storage pool must be similar in their characteristics, for
example, RAID level, physical disk count, and disk speed. This requirement is true of all
storage pools, but maintaining performance is important when Global Mirror is used.
򐂰 When a consistent relationship is stopped, for example, by a persistent I/O error on the
intercluster link, the relationship enters the consistent_stopped state. I/O at the primary
site continues, but the updates are not mirrored to the secondary site. Restarting the
relationship begins the process of synchronizing new data to the secondary disk. While
this synchronization is in progress, the relationship is in the inconsistent_copying state.
Therefore, the Global Mirror secondary volume is not in a usable state until the copy
completes and the relationship returns to a Consistent state. For this reason, it is highly
advisable to create a FlashCopy of the secondary volume before the relationship is
restarted. When started, the FlashCopy provides a consistent copy of the data, even while
the Global Mirror relationship is copying. If the Global Mirror relationship does not reach
the Synchronized state (for example, if the intercluster link experiences further persistent
I/O errors), the FlashCopy target can be used at the secondary site for disaster recovery
purposes.


򐂰 If you plan to use a Fibre Channel over IP (FCIP) intercluster link, it is important to design
and size the pipe correctly.
Example 3-2 shows a best-guess bandwidth sizing formula.

Example 3-2   Wide area network (WAN) link calculation example

Amount of write data within 24 hours times 4 to allow for peaks
Translate into MB/s to determine WAN link needed
Example:
250 GB a day
250 GB * 4 = 1 TB
24 hours * 3600 secs/hr = 86,400 secs
1,000,000,000,000 / 86,400 = approximately 12 MB/s,
which means that an OC3 link or higher is needed (155 Mbps or higher)

򐂰 If compression is available on routers or WAN communication devices, smaller pipelines might be adequate. Workload is probably not evenly spread across 24 hours. If extended periods of high data change rates exist, consider suspending Global Mirror during that time frame.
򐂰 If the network bandwidth is too small to handle the traffic, the application write I/O
response times might be elongated. For the SVC, Global Mirror must support short-term
“Peak Write” bandwidth requirements. SVC Global Mirror is much more sensitive to a lack
of bandwidth than the DS8000.
򐂰 You must also consider the initial sync and resync workload. The Global Mirror
partnership’s background copy rate must be set to a value that is appropriate to the link
and secondary back-end storage. The more bandwidth that you give to the sync and
resync operation, the less workload can be delivered by the SVC for the regular data
traffic.
򐂰 Do not propose Global Mirror if the data change rate exceeds the communication bandwidth or if the round-trip latency exceeds the supported maximum (80 ms or 250 ms, depending on the configuration; see Table 3-5 on page 122). A round-trip latency that is greater than 250 ms requires SCORE/RPQ submission.
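As noted earlier in this list, the maximum replication delay is set at the system level. The following CLI sketch sets a 30-second limit (the value is an example only; choose a value that is appropriate for your environment):

chsystem -maxreplicationdelay 30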

3.3.12 SAN boot support


The IBM SVC supports SAN boot or start-up for AIX, Microsoft Windows Server, and other
operating systems. Because SAN boot support can change, check the following websites
regularly:
򐂰 http://www.ibm.com/support/docview.wss?uid=ssg1S1001707
򐂰 http://www.ibm.com/systems/support/storage/ssic/interoperability.wss

3.3.13 Data migration from a non-virtualized storage subsystem


Data migration is an important part of an SVC implementation. Therefore, you must
accurately prepare a data migration plan. You might need to migrate your data for one of the
following reasons:
򐂰 To redistribute workload within a clustered system across the disk subsystem
򐂰 To move workload onto newly installed storage
򐂰 To move workload off old or failing storage, ahead of decommissioning it
򐂰 To move workload to rebalance a changed workload
򐂰 To migrate data from an older disk subsystem to SVC-managed storage
򐂰 To migrate data from one disk subsystem to another disk subsystem


Because multiple data migration methods are available, choose the method that best fits your
environment, operating system platform, type of data, and application’s service-level
agreement (SLA).

We can define data migration as belonging to the following groups:


򐂰 Based on operating system Logical Volume Manager (LVM) or commands
򐂰 Based on special data migration software
򐂰 Based on the SVC data migration feature
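As one example of the third approach, the SVC data migration feature can move a volume, together with its data, to another storage pool with a command similar to the following sketch (the volume and pool names are placeholders, and both pools must use the same extent size):

migratevdisk -vdisk Legacy_vol01 -mdiskgrp Pool_DS8K_1 -threads 4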

With data migration, apply the following guidelines:


򐂰 Choose which data migration method best fits your operating system platform, type of
data, and SLA.
򐂰 Check the following interoperability matrix for the storage subsystem to which your data is
being migrated:
http://www.ibm.com/support/docview.wss?uid=ssg1S1001707
򐂰 Choose where you want to place your data after migration in terms of the storage pools
that relate to a specific storage subsystem tier.
򐂰 Check whether enough free space or extents are available in the target storage pool.
򐂰 Decide whether your data is critical and must be protected by a volume mirroring option or
whether it must be replicated in a remote site for disaster recovery.
򐂰 Prepare offline all of the zone and LUN masking and host mappings that you might need
to minimize downtime during the migration.
򐂰 Prepare a detailed operation plan so that you do not overlook anything at data migration
time.
򐂰 Run a data backup before you start any data migration. Data backup must be part of the
regular data management process.
򐂰 You might want to use the SVC as a data mover to migrate data from a non-virtualized
storage subsystem to another non-virtualized storage subsystem. In this case, you might
have to add checks that relate to the specific storage subsystem that you want to migrate.
Be careful when you are using slower disk subsystems for the secondary volumes for
high-performance primary volumes because the SVC cache might not be able to buffer all
the writes and flushing cache writes to SATA might slow I/O at the production site.

3.3.14 SVC configuration backup procedure


Save the configuration externally when changes, such as adding new nodes and disk
subsystems, are performed on the clustered system. Saving the configuration is a crucial part
of SVC management, and various methods can be applied to back up your SVC
configuration. The preferred practice is to implement an automatic configuration backup by
applying the configuration backup command. For more information about this command for
the CLI, see Chapter 10, “Operations using the CLI” on page 565. For more information about
the GUI operation, see Chapter 11, “Operations using the GUI” on page 715.
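As an illustration, the following sketch runs a configuration backup from the CLI and then copies the resulting files off the configuration node with pscp (the IP address and output directory are placeholders; verify the dump file location for your code level):

svcconfig backup
pscp -unsafe superuser@<cluster_ip>:/dumps/svc.config.backup.* C:\configbackup\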


3.4 Performance considerations


Although storage virtualization with the SVC improves flexibility and provides simpler
management of a storage infrastructure, it can also provide a substantial performance
advantage for various workloads. The SVC caching capability and its ability to stripe volumes
across multiple disk arrays are the reasons why performance improvement is significant when
it is implemented with midrange disk subsystems. This technology is often provided only with
high-end enterprise disk subsystems.

Tip: Technically, almost all storage controllers provide both striping (RAID 5 or RAID 10)
and a form of caching. The real benefit is the degree to which you can stripe the data
across all MDisks in a storage pool and therefore, have the maximum number of active
spindles at one time. The caching is secondary. The SVC provides additional caching to
the caching that midrange controllers provide (usually a couple of GB). Enterprise systems
have much larger caches.

To ensure the performance that you want and capacity of your storage infrastructure,
undertake a performance and capacity analysis to reveal the business requirements of your
storage environment. When this analysis is done, you can use the guidelines in this chapter to
design a solution that meets the business requirements.

When you are considering performance for a system, always identify the bottleneck and,
therefore, the limiting factor of a specific system. You must also consider the component for
whose workload you identify a limiting factor. The component might not be the same
component that is identified as the limiting factor for other workloads.

When you are designing a storage infrastructure with the SVC or implementing an SVC in an
existing storage infrastructure, you must consider the performance and capacity of the SAN,
disk subsystems, SVC, and the known or expected workload.

3.4.1 SAN
The SVC now has the following models:
򐂰 2145-CF8
򐂰 2145-CG8
򐂰 2145-DH8

All of these models can connect to 2 Gbps, 4 Gbps, 8 Gbps, and 16 Gbps switches. From a
performance point of view, connecting the SVC to 8 Gbps or 16 Gbps switches is better.

Correct zoning on the SAN switch brings together security and performance. Implement a
dual HBA approach at the host to access the SVC.

3.4.2 Disk subsystems


From a performance perspective, the following guidelines relate to connecting to an SVC:
򐂰 Connect all storage ports to the switch up to a maximum of 16, and zone them to all of the
SVC ports.
򐂰 Zone all ports on the disk back-end storage to all ports on the SVC nodes in a clustered
system.


򐂰 Also, ensure that you configure the storage subsystem LUN-masking settings to map all
LUNs that are used by the SVC to all the SVC WWPNs in the clustered system.

The SVC is designed to handle large quantities of multiple paths from the back-end storage.

In most cases, the SVC can improve performance, especially on mid-sized to low-end disk
subsystems, older disk subsystems with slow controllers, or uncached disk systems, for the
following reasons:
򐂰 The SVC can stripe across disk arrays, and it can stripe across the entire set of supported
physical disk resources.
򐂰 The SVC 2145-CF8 and 2145-CG8 have a 24 GB (48 GB with the optional processor
card, 2145-CG8 only) cache. The SVC 2145-DH8 has 32 GB of cache (64 GB of cache
with a second CPU used for hardware-assisted compression acceleration for Real-time
Compression workloads).
򐂰 The SVC can provide automated performance optimization of hot spots by using flash
drives and Easy Tier.

The SVC large cache and advanced cache management algorithms also allow it to improve
on the performance of many types of underlying disk technologies. The SVC capability to
manage (in the background) the destaging operations that are incurred by writes (in addition
to still supporting full data integrity) has the potential to be important in achieving good
database performance.

Depending on the size, age, and technology level of the disk storage system, the total cache
that is available in the SVC can be larger, smaller, or about the same as the cache that is
associated with the disk storage. Because hits to the cache can occur in the upper (SVC) or
the lower (disk controller) level of the overall system, the system as a whole can use the
larger amount of cache wherever it is located. Therefore, if the storage control level of the
cache has the greater capacity, expect hits to this cache to occur, in addition to hits in the
SVC cache.

Also, regardless of their relative capacities, both levels of cache tend to play an important role
in allowing sequentially organized data to flow smoothly through the system. The SVC cannot
increase the throughput potential of the underlying disks in all cases because this increase
depends on the underlying storage technology and the degree to which the workload exhibits
hotspots or sensitivity to cache size or cache algorithms.

For more information about the SVC cache partitioning capability, see IBM SAN Volume
Controller 4.2.1 Cache Partitioning, REDP-4426, which is available at this website:
http://www.redbooks.ibm.com/abstracts/redp4426.html

3.4.3 SAN Volume Controller


The SVC clustered system is scalable up to eight nodes, and the performance is nearly linear
when more nodes are added into an SVC clustered system until it becomes limited by other
components in the storage infrastructure. Although virtualization with the SVC provides a
great deal of flexibility, it does not diminish the necessity to have a SAN and disk subsystems
that can deliver the performance that you want. Essentially, SVC performance improvements
are gained by having as many MDisks as possible, which creates a greater level of concurrent
I/O to the back end without overloading a single disk or array.

Assuming that no bottlenecks exist in the SAN or on the disk subsystem, you must follow
specific guidelines when you perform the following tasks:
򐂰 Creating a storage pool


򐂰 Creating volumes
򐂰 Connecting to or configuring hosts that must receive disk space from an SVC clustered
system

For more information about performance and preferred practices for the SVC, see SAN
Volume Controller Best Practices and Performance Guidelines, SG24-7521, which is
available at this website:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open

3.4.4 Real-Time Compression


IBM Real-Time Compression (RtC) technology in storage systems is based on the Random Access Compression Engine (RACE) technology. It is implemented in the IBM SAN Volume Controller, the IBM Storwize family, IBM FlashSystem V840 and IBM FlashSystem V9000 systems, and IBM XIV (IBM Spectrum Accelerate™). This technology plays a key role in storage capacity savings and investment protection. Although the technology is easy to implement and manage, it is helpful to understand the internal processes and I/O workflow to ensure a successful implementation of any storage solution. Understanding the internal basics of the Real-time Compression technology is essential, especially with flash-based back ends, because the performance expectations for such systems are high. The following suggestions can also be used with FlashSystem V840 and FlashSystem V9000 because they are based on IBM SAN Volume Controller technology, and with the SVC when it virtualizes any third-party flash storage.

To learn more, see the following resources:


򐂰 IBM Real-time Compression Redbook:
http://www.redbooks.ibm.com/abstracts/redp4859.html
򐂰 Chapter 8, “Advanced features for storage efficiency” on page 425

General recommendations:
򐂰 Best results can be achieved if the data compression ratio stays at 25% or above. Volumes can be scanned with the built-in Comprestimator so that an appropriate decision can be made. An illustrative command sketch follows this list.
򐂰 More concurrency within the workload gives better results than single-threaded sequential I/O streams.
򐂰 I/O is de-staged to RACE from the upper cache in 64 KiB pieces, and best results will be
achieved if the I/O size does not exceed this size.
򐂰 Volumes used for only one purpose usually have the same work patterns. Mixing database, virtualization, and general-purpose data within the same volume can make the workload inconsistent. Such volumes might have no stable I/O size, no specific work pattern, and a below-average compression ratio, which makes them hard to investigate in a case of performance degradation. The Real-time Compression development team recommends not mixing data types within the same volume whenever possible.
򐂰 It is best not to recompress data that is already compressed, so volumes that contain pre-compressed data are better left as uncompressed volumes.
򐂰 Volumes with encrypted data have a very low compression ratio and are not good candidates for compression.
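For example, the built-in Comprestimator that was introduced with version 7.6 can be driven from the CLI with commands similar to the following sketch (the volume ID is a placeholder; verify the command names against your code level):

analyzevdisk 0
lsvdiskanalysis 0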

3.4.5 Performance monitoring


Performance monitoring must be a part of the overall IT environment. For the SVC and other
IBM storage subsystems, the official IBM tool to collect performance statistics and provide a
performance report is IBM Tivoli Storage Productivity Center.


For more information about using IBM Tivoli Storage Productivity Center to monitor your
storage subsystem, see SAN Storage Performance Management Using Tivoli Storage
Productivity Center, SG24-7364, which is available at this website:
http://www.redbooks.ibm.com/abstracts/sg247364.html


Chapter 4. Initial configuration


This chapter explains the initial configuration required for the IBM SAN Volume Controller
(SVC) and includes the following topics:
򐂰 Managing the cluster
򐂰 Setting up the SAN Volume Controller cluster
򐂰 Configuring the GUI
򐂰 Secure Shell overview
򐂰 Using IPv6


4.1 Managing the cluster


You can manage the SVC in many ways. The following methods are the most common ones:
򐂰 By using the SVC Web Management GUI
򐂰 By using a PuTTY-based SVC command-line interface (CLI)
򐂰 By using IBM Tivoli Storage Productivity Center (TPC)

Figure 4-1 shows the options to manage an SVC cluster.

Figure 4-1 SVC cluster management options

You have full management control of the SVC regardless of which method you use. IBM Tivoli
Storage Productivity Center is a robust software product with various functions (including
performance and capacity features) that must be purchased separately.

If you have a previously installed SVC cluster in your environment, you might be using the SVC Console, which is also known as the Hardware Management Console (HMC), or the retail product that is called IBM System Storage Productivity Center (SSPC), which is no longer offered. You can log in to your SVC from only one of them at a time.

If you decide to manage your SVC cluster with the SVC CLI, it does not matter whether you are using the SVC Console or the IBM Tivoli Storage Productivity Center server because the SVC CLI runs on the cluster itself and is accessed through Secure Shell (SSH). An SSH client can be installed anywhere.
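For example, after SSH access is configured, the CLI can be reached from any SSH client with a session similar to the following sketch (the management IP address is a placeholder):

ssh superuser@<system_management_ip>
lssystem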

To access the SVC management GUI, direct a web browser to the system management IP
address.

134 Implementing the IBM System Storage SAN Volume Controller with IBM Spectrum Virtualize V7.6
Draft Document for Review February 4, 2016 8:01 am 7933 04 Initial Configuration Tarik.fm

Note: Ensure that your web browser is supported and has the appropriate settings
enabled. For details, see the IBM SAN Volume Controller Knowledge Center at the following link:

https://ibm.biz/BdHnKF

4.1.1 Network requirements for SAN Volume Controller


To plan your installation, consider the TCP/IP address requirements of the SVC cluster and
the requirements for the SVC cluster to access other services. You must also plan the
address allocation, the Ethernet router, gateway, and firewall configuration to provide the
required access and proper network security.

Figure 4-2 shows the TCP/IP ports and services that are used by the SVC.

Figure 4-2 TCP/IP ports

Table 4-1 shows the list of ports and services that are used by the SVC.

Table 4-1   TCP/IP ports and services listing

Service                                                                  | Traffic direction | Protocol | Port  | Service type
Email (SMTP) notification and inventory reporting                       | Outbound          | TCP      | 25    | optional
SNMP event notification                                                  | Outbound          | UDP      | 162   | optional
Syslog event notification                                                | Outbound          | UDP      | 514   | optional
IPv4 DHCP (Node service address)                                         | Outbound          | UDP      | 68    | optional
IPv6 DHCP (Node service address)                                         | Outbound          | UDP      | 547   | optional
Network time server (NTP)                                                | Outbound          | UDP      | 123   | optional
SSH for command-line interface (CLI) access                              | Inbound           | TCP      | 22    | mandatory
HTTPS for GUI access                                                     | Inbound           | TCP      | 443   | mandatory
CIMOM (HTTPS)                                                            | Inbound           | TCP      | 5989  | optional
CIMOM SLPD                                                               | Inbound           | UDP      | 427   | optional
Remote user authentication service - HTTP                                | Outbound          | TCP      | 16310 | optional
Remote user authentication service - HTTPS                               | Outbound          | TCP      | 16311 | optional
Remote user authentication service - Lightweight Directory Access Protocol (LDAP) | Outbound | TCP     | 389   | optional
iSCSI                                                                    | Inbound           | TCP      | 3260  | optional
iSCSI iSNS                                                               | Outbound          | TCP      | 3260  | optional
IP Partnership management IP communication                               | Inbound           | TCP      | 3260  | optional
IP Partnership management IP communication                               | Outbound          | TCP      | 3260  | optional
IP Partnership data path connections                                     | Inbound           | TCP      | 3265  | optional
IP Partnership data path connections                                     | Outbound          | TCP      | 3265  | optional

For more information about TCP/IP prerequisites, see Chapter 3, “Planning and
configuration” on page 83.
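After the system management IP address is configured (see 4.2, “Setting up the SAN Volume
Controller cluster” on page 137), you can quickly confirm from the administration workstation
that the mandatory inbound ports are reachable through any intervening firewalls. The following
is a minimal sketch that assumes a Linux or UNIX workstation with the nc (netcat) utility; the
address is a placeholder for your cluster IP address:

# Verify that the mandatory management ports listed in Table 4-1 are reachable
nc -zv 192.168.70.120 22     # SSH for CLI access (mandatory)
nc -zv 192.168.70.120 443    # HTTPS for GUI access (mandatory)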

Ensure that the SAN Volume Controller nodes are physically installed and Ethernet and Fibre
Channel (FC) connectivity has been correctly configured.

Before configuring the cluster, ensure that the following information is available:
򐂰 License
The license indicates whether the client is permitted to use FlashCopy, Metro Mirror,
Encryption, and the Real-time Compression features. It also indicates how much capacity
the client is licensed to virtualize.
򐂰 For IPv4 addressing:
– IPv4 addresses: These addresses include one address for the cluster and one address
for each node of the cluster to be used as the service address.
– IPv4 subnet mask.
– Gateway IPv4 address.
򐂰 For IPv6 addressing:
– IPv6 addresses: These addresses include one address for the cluster and one address
for each node of the cluster to be used as the service address.


– IPv6 prefix.
– Gateway IPv6 address.

To assist you in starting an SVC initial configuration, Figure 4-3 shows a common flowchart
that covers all of the types of management.

Figure 4-3 SVC initial configuration flow

In the next sections, we describe each of the steps shown in Figure 4-3.

4.2 Setting up the SAN Volume Controller cluster


This section provides the step-by-step instructions needed to create the SVC cluster. You
must create a cluster to use SVC virtualized storage. The first phase to create a cluster
consists of a physical interaction with SVC hardware, which is performed by using the
Technician Port on 2145-DH8 models or from the front panel on 2145-CF8 and 2145-CG8
models (for more information, see 4.2.3, “Initiating the cluster from the front panel” on
page 154). The second phase of the configuration process is performed from a web browser
by accessing the management GUI IP address.

4.2.1 Initiating the cluster on 2145-DH8 SVC models


The first step to initiate a cluster on 2145-DH8 models is to connect a PC or notebook to the
Technician Port on the rear of the SVC node (Figure 4-4 on page 138 shows the Technician
Port slot). The SVC Technician Port provides an IPv4 address through Dynamic Host
Configuration Protocol (DHCP), so ensure that your PC or notebook Ethernet card is configured
for DHCP. The default IP address for a new node is 192.168.0.1. You can, however, also use a
static IP address, which should be set to 192.168.0.2 on your PC or notebook Ethernet card.

Note: The 2145-DH8 does not provide IPv6 IP addresses for the Technician Port.


Figure 4-4 Rear view of 2145-DH8 SVC nodes

Notes: During the initial configuration, you will see certificate warnings because the 2145
certificates are self-signed. These warnings are not harmful, and you can accept them.

Follow these steps for the initial configuration phase:


1. After you connect your PC or notebook to the Technician Port, validate that you have a
valid IPv4 DHCP address, for example, 192.168.0.12 (the first IP address that the SVC
node assigns).
2. Open a supported browser and browse to the address http://install. The browser is
automatically redirected to the initialization tool. You can also use the IP address
192.168.0.1 if the browser does not automatically redirect you to the initial configuration
process for the cluster.

Note: If the system cannot be initialized, you are directed to the service assistant
interface. Refer to error codes in the service console status to troubleshoot the problem.

3. Figure 4-5 on page 139 shows the Welcome panel, which starts the wizard that allows you to
configure a new system. Click Next to start the wizard.


Figure 4-5 Welcome panel

4. This chapter focuses on setting up a new system, so we select the first option and then
click Next as shown in Figure 4-6.

Figure 4-6 Configuring first node in a new system

Important: If you are adding 2145-DH8 nodes to an existing system, ensure that the
existing systems are running software version 7.3 or higher. The 2145-DH8 only
supports software version 7.3 or higher.

5. The next panel prompts you to set an IP address for the cluster. You can choose between
an IPv4 or IPv6 address. In Figure 4-7, we set an IPv4 address.


Figure 4-7 Setting the cluster IP address

6. When you click Next, the Restarting Web Server panel opens, as shown in Figure 4-8.
Wait until the timer completes, and then click Next.

Figure 4-8 System initialization

7. When the system initialization completes, follow the on-screen instructions (Figure 4-9):
Disconnect the Ethernet cable from the Technician port and from your PC or notebook, and
connect that PC or notebook to the same network as the system. When you click Finish,
you are redirected to the GUI to complete the system setup. You can connect to the system
IP address from any management console that is connected to the same network as the
system.


Figure 4-9 Initialization complete

8. Whether you are redirected from your personal computer or notebook or connecting to the
Management IP address of the system, the License Agreement panel opens as shown in
Figure 4-10.

Figure 4-10 SAN Volume Controller License Agreement panel

9. Read the license agreement and then click the Accept arrow. The initial password setup
panel for superuser opens (Figure 4-11 on page 142). Type a new password and type it
again to confirm it. The password length is 6 - 63 characters. The password cannot begin
or end with a space. After you type the password twice, click the Log in arrow.

Figure 4-11 Initial password setup for superuser

Note: The default password for the superuser account is passw0rd (with the number zero, not the letter o).

10.The Welcome to System Setup panel opens (Figure 4-12). Click Next to continue the
installation process.


Figure 4-12 Welcome to System Setup panel

11.Click Next. You can choose to give the cluster a new name. We used ITSO_SVC_DH8, as
shown in Figure 4-13 on page 144.


Figure 4-13 System name panel

12.Click Apply and Next after you type the name of the cluster. The Licensed Functions
panel opens, as shown in Figure 4-14 on page 145. Enter the total purchased capacity
for your system, as authorized by your license agreement (Figure 4-14 on page 145 shows
only an example).


Figure 4-14 Licensing functions window

13.The next step is to set the time and date, as shown in Figure 4-15. In this case, date and
time were set manually using browser settings. At this time, you cannot choose to use the
24-hour clock. You can change to the 24-hour clock after you complete the initial
configuration. We recommend that you use a Network Time Protocol (NTP) server so that
all of your SAN and storage devices have a common time stamp for troubleshooting.


Figure 4-15 Setting the date and time

14.Click Apply and Next. The Encryption panel opens, where you can select whether
encryption is enabled, as shown in Figure 4-16 on page 147 (for this system configuration,
we do not enable the encryption feature in the initial setup). Click Next to continue the
initialization process.


Figure 4-16 Encryption feature panel

15.The next step is to configure the Call Home settings. In the first panel, set the System
Location information (Figure 4-17 on page 148). All fields are required in this step.


Figure 4-17 System Location settings

16.The next required information is the Contact Information, as shown in Figure 4-18 on
page 149. All fields are required, except for the Alternate phone.


Figure 4-18 Contact information

17.Setting up an email server is optional, but we recommend that you do so. The next
panel shows how to set up the email server. Enter the IP address of the email server, as
shown in Figure 4-19 on page 150.

Important: A valid Simple Mail Transfer Protocol (SMTP) server IP address must be
available to complete this step.


Figure 4-19 Email server IP address and Port

18.You can click Ping to verify whether network access exists to the email server (SMTP
server).

Note: Notification alerts and warnings are configured after the system initialization.
Refer to Chapter 11, “Operations using the GUI” on page 715.

19.Click Apply and Next. The Summary panel opens (Figure 4-20 on page 151).


Figure 4-20 Summary panel

20.Click Finish to complete the initial configuration, and click Close in the message that
pops up (Figure 4-21 on page 152). You are automatically redirected to the System
Overview panel.


Figure 4-21 Setup completion panel

21.The System Overview window appears, showing that your system configuration is complete
and that you can start configuring pools and hosts, as shown in Figure 4-22.

Figure 4-22 System overview window

4.2.2 SVC 2145-CF8 and 2145-CG8 service panels


This section provides an overview of the service panels that are available, depending on your
SVC nodes.

Figure 4-23 on page 153 shows the SVC node 2145-CF8 front panel.


Figure 4-23 SVC CF8 front and operator panel

Figure 4-24 shows the SVC node 2145-CG8 model.

Figure 4-24 SVC CG8 node front and operator panel

Note: Software version 6.1 and later code levels introduce a new method for performing
service tasks. In addition to performing service tasks from the front panel, you can service
a node through an Ethernet connection by using a web browser or the CLI. A service IP
address for each node canister is required.


4.2.3 Initiating the cluster from the front panel


After the hardware is installed in the racks, you need to configure (initiate) the cluster through
the physical service panels. In this book, we cover the procedure for the following models:
򐂰 2145-CF8
򐂰 2145-CG8

For more information, see 4.2.2, “SVC 2145-CF8 and 2145-CG8 service panels” on page 152.

Follow these steps to perform the cluster configuration:


1. Choose any node that is a member of the cluster that is being created.

Note: To create a system, do not repeat these instructions on more than one node.
After you complete the steps for initiating system creation from the front panel, use the
management GUI to create the system and add additional nodes to complete system
configuration.

When you create the system, you must specify either an IPv4 or an IPv6 system address
for port 1. After the system is created, you can specify additional IP addresses for port 1
and port 2 until both ports have an IPv4 address and an IPv6 address.
2. Press and release the Up or Down button until Actions is displayed.

Important: During these steps, if a timeout occurs while you are entering the input for
the fields, you must begin again from step 2. All of the changes are lost, so ensure that
you have all of the information available before you begin again.

3. Press and release the Select button.


4. Depending on whether you are creating a cluster with an IPv4 address or an IPv6
address, press and release the Up or Down button until New Cluster IPv4? or New
Cluster IPv6? is displayed.
Figure 4-25 shows the options for the cluster creation.

Figure 4-25 Cluster options flow on the front panel display

If the New Cluster IPv4? or New Cluster IPv6? action is displayed, move to step 5.


If the New Cluster IPv4? or New Cluster IPv6? action is not displayed, this node is
already a member of a cluster. Complete the following steps:
a. Press and release the Up or Down button until Actions is displayed.
b. Press and release the Select button to return to the Main Options menu.
c. Press and release the Up or Down button until Cluster: is displayed. The name of the
cluster to which the node belongs is displayed on line two of the panel.
In this case, you have two options:
Your first option is to delete this node from the cluster by completing the following
steps:
i. Press and release the Up or Down button until Actions is displayed.
ii. Press and release the Select button.
iii. Press and release the Up or Down button until Remove Cluster? is displayed.
iv. Press and hold the Up button.
v. Press and release the Select button.
vi. Press and release the Up or Down button until Confirm remove? is displayed.
vii. Press and release the Select button.
viii.Release the Up button, which deletes the cluster information from the node.
ix. Return to step 1 on page 154 and start again.
Your second option (if you do not want to remove this node from an existing cluster) is
to review the situation to determine the correct nodes to include in the new cluster.
5. Press and release the Select button to create the cluster.
6. Press and release the Select button again to modify the IP address.
7. Use the Up or Down navigation button to change the value of the first field of the
IP address to the value that was chosen.

IPv4 and IPv6: Consider the following points:


򐂰 For IPv4, pressing and holding the Up or Down buttons increments or decreases the
IP address field by units of 10. The field value rotates 0 - 255 with the Down button,
and 255 - 0 with the Up button.
򐂰 For IPv6, the address and the gateway address consist of eight 4-digit hexadecimal
values. Enter the full address by working across a series of four panels to update
each of the four-digit hexadecimal values that make up the IPv6 addresses. The
panels consist of eight fields, where each field is a four-digit hexadecimal value.

8. Use the Right navigation button to move to the next field. Use the Up or Down navigation
button to change the value of this field.
9. Repeat step 7 for each of the remaining fields of the IP address.
10.When the last field of the IP address is changed, press the Select button.
11.Press the Right arrow button:
– For IPv4, IPv4 Subnet: is displayed.
– For IPv6, IPv6 Prefix: is displayed.
12.Press the Select button to enter edit mode.
13.Change the fields for IPv4 Subnet in the same way that the IPv4 IP address fields were
changed. There is only a single field for IPv6 Prefix.


14.When the last field of IPv4 Subnet/IPv6 Mask is changed, press the Select button.
15.Press the Right navigation button:
– For IPv4, IPv4 Gateway: is displayed.
– For IPv6, IPv6 Gateway: is displayed.
16.Press the Select button.
17.Change the fields for the appropriate gateway in the same way that the IPv4/IPv6 address
fields were changed.
18.When the changes to all of the Gateway fields are made, press the Select button.
19.To review the settings before the cluster is created, use the Right and Left buttons and
make any necessary changes. Then, use the Right and Left buttons until Confirm Created?
is displayed, and press the Select button.
20.After you complete this task, the following information is displayed on the service display
panel:
– Cluster: is displayed on line one.
– A temporary, system-assigned cluster name that is based on the IP address is
displayed on line two.
If the cluster is not created, Create Failed: is displayed on line one of the service display,
and line two contains an error code. For more information about the error codes, the
reason why the cluster creation failed, and the corrective action to take, see the product
support website:
http://www.ibm.com/storage/support/2145

When you create the cluster from the front panel with the correct IP address format, you can
finish the cluster configuration by accessing the management GUI, completing the Create
Cluster wizard, and adding other nodes to the cluster.

Important: At this time, do not repeat this procedure to add other nodes to the cluster.

To add nodes to the cluster, follow the steps that are described in Chapter 10, “Operations
using the CLI” on page 565 and Chapter 11, “Operations using the GUI” on page 715.

4.3 Configuring the GUI


After you complete the tasks described in 4.2, “Setting up the SAN Volume Controller cluster”
on page 137, complete the cluster setup by using the SVC Console. Follow the steps that are
described in 4.3.1, “Completing the Create Cluster wizard” on page 157 to create the cluster
and complete the configuration.

Note: Ensure that the SVC cluster IP address (svcclusterip) can be reached successfully
by using a ping command from the network.


4.3.1 Completing the Create Cluster wizard


You can access the management GUI by opening any supported web browser. Complete the
following steps:
1. Open the web GUI from a supported web browser on any workstation that can
communicate with the cluster.
2. Point to the IP address that you entered in step 7 on page 155:
http://svcclusteripaddress/
(You are redirected to https://svcclusteripaddress/, which is the default address for
access to the SVC cluster.)
Follow the steps that are presented to finish the cluster configuration; we do not show the
panels again in this book.

4.3.2 Post-requisites
The following steps are optional for completing the SVC cluster configuration, but we strongly
recommend that you complete them at some point during the SVC implementation phase.
Steps 4 through 8 are illustrated with a CLI sketch after this list.
1. Configure the SSH keys for the command-line user, as shown in 4.4, “Secure Shell
overview” on page 157.
2. Configure user authentication and authorization as shown in Chapter 11, “Operations
using the GUI” on page 715.
3. Set up event notifications and inventory reporting as shown in Chapter 11, “Operations
using the GUI” on page 715.
4. Create the storage pools.
5. Add MDisks to the storage pool.
6. Identify and create volumes.
7. Create host objects.
8. Map volumes to hosts.
9. Identify and configure the FlashCopy mappings and Metro Mirror relationship.
10.Back up configuration data as shown in Chapter 10, “Operations using the CLI” on
page 565.
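
The storage provisioning steps 4 through 8 can also be performed from the CLI after SSH
access is configured. The following is a hedged sketch only; the object names, extent size,
volume size, and WWPN are placeholders, and the exact parameters should be verified in the
command-line reference for your code level:

svctask mkmdiskgrp -name Pool0 -ext 1024                    # step 4: create a storage pool
svctask addmdisk -mdisk mdisk1 Pool0                        # step 5: add an MDisk to the pool
svctask mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -name VOL_APP01   # step 6: create a volume
svctask mkhost -name HOST_APP01 -fcwwpn 2100000000000001    # step 7: create a host object
svctask mkvdiskhostmap -host HOST_APP01 VOL_APP01           # step 8: map the volume to the host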

4.4 Secure Shell overview


The system (SVC) acts as the SSH server in this relationship. The SSH client provides a
secure environment in which to connect to a remote machine. Authentication is completed
by using a user name and password. If you require command-line access without entering a
password, SSH uses the principles of public and private keys for authentication.

When SSH client (A) attempts to connect to SSH server (B), the connection is authenticated
by the SSH password or, if you require command-line access without entering a password,
by the key pair. The key pair consists of two halves: a public key and a private key. The SSH
client public key is put onto SSH server (B) by some means outside of the SSH session. When
SSH client (A) tries to connect, the private key on SSH client (A) authenticates against its
public half on SSH server (B).

The system supports up to 32 simultaneous interactive SSH sessions on the management IP
address.

Note: An SSH interactive session times out after a fixed period of one hour, which means
that the SSH session is automatically closed. This session timeout limit is not configurable.

You can choose between password or SSH key authentication, or you can choose both
password and SSH key authentication for the SVC CLI. We describe SSH in the following
sections.

Tip: If you choose not to create an SSH key pair, you can still access the SVC cluster by
using the SVC CLI, if you have a user password. You are authenticated through the user
name and password.

The connection is secured by using a private key and a public key pair. Securing the
connection includes the following steps:
1. A public key and a private key are generated together as a pair.
2. A public key is uploaded to the SSH server (SVC cluster).
3. A private key identifies the client. The private key is checked against the public key during
the connection. The private key must be protected.
4. Also, the SSH server must identify itself with a specific host key.
5. If the client does not have that host key yet, it is added to a list of known hosts.

The SSH client provides a secure environment from which to connect to a remote machine. It
uses the principles of public and private keys for authentication.

SSH keys are generated by the SSH client software. The SSH keys include a public key,
which is uploaded and maintained by the cluster, and a private key that is kept private to the
workstation that is running the SSH client. These keys authorize specific users to access the
administrative and service functions on the cluster. Each key pair is associated with a
user-defined ID string that can consist of up to 40 characters. Up to 100 keys can be stored
on the cluster. New IDs and keys can be added, and unwanted IDs and keys can be deleted.

To use the CLI, an SSH client must be installed on that system, the SSH key pair must be
generated on the client system, and the client’s SSH public key must be stored on the SVC
clusters.

You must install an SSH client program on the machine that you intend to use to manage SVC
clusters. For this book, we use PuTTY, which is a free SSH client that provides all the functions
that are needed for an SSH connection. This software provides the SSH client function for
users who are logged in to the SVC Console and who want to start the CLI to manage the SVC
cluster.

4.4.1 Generating public and private SSH key pairs by using PuTTY
Complete the following steps to generate SSH keys on the SSH client system:
1. Start the PuTTY Key Generator to generate public and private SSH keys. From the client
desktop, select Start → Programs → PuTTY → PuTTYgen.


Tip: You can find the PuTTYgen application under the installation directory of PuTTY if
it is not showing in the Windows Programs menu.

2. In the PuTTY Key Generator GUI window (Figure 4-26), complete the following steps to
generate the keys:
a. Select SSH-2 RSA.
b. Leave the number of bits in a generated key value at 2048.
c. Click Generate.

Figure 4-26 PuTTY Key Generator GUI

3. Move the cursor onto the blank area to generate the keys as shown in Figure 4-27.

Figure 4-27 Generating PuTTY Key


To generate keys: The blank area is the large blank rectangle on the GUI inside the
section of the GUI labeled Key. Continue to move the mouse pointer over the blank area
until the progress bar reaches the far right. This action generates random characters to
create a unique key pair.

4. After the keys are generated, save them for later use by completing the following steps:
a. Click Save public key, as shown in Figure 4-28.

Figure 4-28 Saving the public key

b. You are prompted for a name, for example, pubkey, and a location for the public key, for
example, C:\Support Utils\PuTTY. Click Save.
If another name and location are chosen, ensure that you maintain a record of the
name and location. You must specify the name and location of this SSH public key in
the steps that are described in 4.4.2, “Uploading the SSH public key to the SAN
Volume Controller cluster” on page 161.

Tip: The PuTTY Key Generator saves the public key with no extension, by default.
Use the string pub in naming the public key, for example, pubkey, to differentiate the
SSH public key from the SSH private key easily.

c. In the PuTTY Key Generator window, click Save private key.


d. You are prompted with a warning message, as shown in Figure 4-29. Click Yes to save
the private key without a passphrase.

Figure 4-29 Saving the private key without a passphrase

e. When prompted, enter a name, for example, icat, and a location for the private key, for
example, C:\Support Utils\PuTTY. Click Save.
We suggest that you use the default name icat.ppk because this key was used for icat
application authentication and must have this default name in SVC clusters that are
running on versions before SVC 5.1.

Private key extension: The PuTTY Key Generator saves the private key with the
PPK extension.

5. Close the PuTTY Key Generator GUI.


6. Browse to the directory, for example, C:\Support Utils\PuTTY, where the private key was
saved.

4.4.2 Uploading the SSH public key to the SAN Volume Controller cluster
After you create your SSH key pair, you must upload your SSH public key onto the SVC
cluster. Complete the following steps:
1. From your browser, enter https://svcclusteripaddress/.
In the GUI interface, go to the Access Management interface and select Users as shown
in Figure 4-30.

Figure 4-30 Selecting Users Management in SVC Dashboard


2. In the next window, as shown in Figure 4-31, select Create User to open a new window for
user creation.

Figure 4-31 Creating user window

3. From the window to create a user, as shown in Figure 4-32, provide the following
information:
a. Select the Authentication Mode as Local (for Remote user configuration, see 2.12, “User
authentication” on page 59).
b. Enter the name (user ID) that you want to create.
c. Select the access level that you want to assign to the user. The Security Administrator
(SecurityAdmin) is the maximum access level.
d. Type the password twice.
e. Select the location from which you want to upload the SSH Public Key file that you
created for this user. Click Create.


Figure 4-32 Creating the user and password window

You completed the user creation process and uploaded the user’s SSH public key, which is
paired later with the user’s private .ppk key, as described in 4.4.3, “Configuring the PuTTY
session for the CLI” on page 163. Figure 4-35 on page 165 shows the successful upload of
the SSH admin key.

The requirements for the SVC cluster setup by using the SVC cluster web interface are
complete.

4.4.3 Configuring the PuTTY session for the CLI


Before the CLI can be used, you must configure the PuTTY session by using the SSH keys
generated in 4.4.1, “Generating public and private SSH key pairs by using PuTTY” on
page 158, or by user name if you configured the user without an SSH key.

Complete the following steps to configure the PuTTY session on the SSH client system:
1. From the management workstation you want to connect to SVC, select Start →
Programs → PuTTY → PuTTY to open the PuTTY Configuration GUI window.
2. From the Category pane on the left in the PuTTY Configuration window (Figure 4-33), click
Session if it is not selected.

Tip: The items that you select in the Category pane affect the content that appears in
the right pane.


Figure 4-33 PuTTY Session configuration window

3. Under the “Specify the destination you want to connect to” section in the right pane, select
SSH. Under the “Close window on exit” section, select Only on clean exit, which ensures
that if any connection errors occur, they are displayed in the user’s window.
4. From the Category pane on the left, select Connection → SSH to display the PuTTY SSH
connection configuration window, as shown in Figure 4-34.

Figure 4-34 PuTTY SSH connection configuration window


5. In the right pane, for the Preferred SSH protocol version, select 2.
6. From the Category pane on the left side of the PuTTY Configuration window, select
Connection → SSH → Auth.
7. As shown in Figure 4-35, in the “Private key file for authentication:” field under the
Authentication parameters section in the right pane, browse to or enter the fully qualified
directory path and file name of the SSH client private key file (for example, C:\Support
Utils\Putty\icat.ppk) that was created earlier.
You can skip the Connection → SSH → Auth part of the process if you created the user
only with password authentication and no SSH key.

Figure 4-35 PuTTY Configuration: Private key file location for authentication

8. From the Category pane on the left side of the PuTTY Configuration window, click
Session.
9. In the right pane, complete the following steps, as shown in Figure 4-36 on page 166:
a. Under the “Load, save, or delete a stored session” section, select Default Settings,
and then click Save.
b. For the Host name (or IP address) field, enter the IP address of the SVC cluster.
c. In the Saved Sessions field, enter a name (for example, SVC) to associate with this
session.
d. Click Save again.


Figure 4-36 PuTTY Configuration: Saving a session

You can now close the PuTTY Configuration window or leave it open to continue.

Tips: Consider the following points:


򐂰 When you enter the Host name or IP address in PuTTY, enter your SVC user name
followed by an At sign (@) followed by your host name or IP address. This way, you do
not need to enter your user name each time that you want to access your SVC cluster.
If you did not create an SSH key, you are prompted for the password that you set for the
user.
򐂰 Normally, output that comes from the SVC is wider than the default PuTTY window size.
Change your PuTTY window appearance to use a font with a character size of 8.
To change, click the Appearance item in the Category tree, as shown in Figure 4-36,
and then, click Font. Choose a font with a character size of 8.

4.4.4 Starting the PuTTY CLI session


The PuTTY application is required for all CLI tasks. If it was closed for any reason, restart the
session by completing the following steps:
1. From the SVC management workstation where you previously installed PuTTY, open the
PuTTY application by selecting Start → Programs → PuTTY.
2. In the PuTTY Configuration window (Figure 4-37 on page 167), select the session that
was saved earlier (in our example, ITSO SVC) and click Load.
3. Click Open.


Figure 4-37 Open PuTTY command-line session

4. If this is the first time that you connect to the cluster with the PuTTY application since you
generated and uploaded the SSH key pair, a PuTTY Security Alert window opens that
warns that the server’s host key is not yet cached on the workstation, as shown in
Figure 4-38. Click Yes to trust the host key. The CLI starts.

Figure 4-38 PuTTY Security Alert

5. As shown in Example 4-1, the private key that is used in this PuTTY session is now
authenticated against the public key that was uploaded to the SVC cluster.

Example 4-1 Authenticating with SSH key


Using username "admin".
Authenticating with public key "rsa-key-20151013"
IBM_2145:ITSO_SVC_DH8:admin>


You completed the required tasks to configure the CLI for SVC administration from the SVC
Console. You can close the PuTTY session.
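
If you want to run individual CLI commands from scripts rather than from an interactive
window, plink (the command-line connection tool that is installed with PuTTY) can reuse the
same private key. The following is a minimal sketch; the user name, key path, and the
svcclusteripaddress placeholder must be replaced with your own values:

plink -i "C:\Support Utils\PuTTY\icat.ppk" admin@svcclusteripaddress svcinfo lssystem
plink -i "C:\Support Utils\PuTTY\icat.ppk" admin@svcclusteripaddress svcinfo lsnode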

4.4.5 Configuring SSH for IBM AIX clients


To configure SSH for AIX clients, complete the following steps:

Note: You must reach the SVC cluster IP address successfully by using the ping command
from the AIX workstation from which cluster access is wanted.

1. OpenSSL must be installed for OpenSSH to work. Complete the following steps to install
OpenSSH on the AIX client:
a. You can obtain the installation images from the following websites:
• https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?source=aixbp
• http://sourceforge.net/projects/openssh-aix
b. Follow the instructions carefully because OpenSSL must be installed before SSH is
used.
2. Complete the following steps to generate an SSH key pair:
a. Run the cd command to browse to the /.ssh directory.
b. Run the ssh-keygen -t rsa command. The following message is displayed:
Generating public/private rsa key pair. Enter file in which to save the key
(//.ssh/id_rsa)

Note: This process generates two files with the name that you specify. If you select the
name key, the files are named key and key.pub, where key is the private key and
key.pub is the public key.

c. Pressing Enter uses the default file that is shown in parentheses. Otherwise, enter a
file name (for example, aixkey), and then press Enter. The following prompt is
displayed:
Enter a passphrase (empty for no passphrase)
d. When you use the CLI interactively, enter a passphrase because no other
authentication exists when you are connecting through the CLI. After you enter the
passphrase, press Enter. The following prompt is displayed:
Enter same passphrase again:
Enter the passphrase again. Press Enter.
e. A message is displayed indicating that the key pair was created. The private key file
has the name that was entered previously, for example, aixkey. The public key file has
the name that was entered previously with an extension of .pub, for example,
aixkey.pub.

The use of a passphrase: If you are generating an SSH key pair so that you can use
the CLI interactively, use a passphrase so that you must authenticate whenever you
connect to the cluster. You can have a passphrase-protected key for scripted usage, but
you must use the expect command or a similar command to have the passphrase
parsed into the ssh command.


The user configuration steps for uploading the public key to the SVC are the same as shown
in 4.4.2, “Uploading the SSH public key to the SAN Volume Controller cluster” on page 161.
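
Putting the AIX steps together, the complete OpenSSH sequence looks similar to the following
sketch; the key file name, user name, and the svcclusteripaddress placeholder are examples
that you must replace with your own values:

cd ~/.ssh
ssh-keygen -t rsa -f aixkey          # creates aixkey (private key) and aixkey.pub (public key)
# Upload aixkey.pub to the SVC user through the GUI, as described in 4.4.2
ssh -i ~/.ssh/aixkey admin@svcclusteripaddress    # connect to the cluster CLI with the private key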

4.5 Using IPv6


You can use IPv4 or IPv6, or both in a dual-stack configuration. Migrating the cluster
management addresses to (or from) IPv6 is possible, and the migration is nondisruptive.

Using IPv6: To remotely access the SVC clusters that are running IPv6, you are required
to run a supported web browser and have IPv6 configured on your local workstation.

4.5.1 Migrating a cluster from IPv4 to IPv6


As a prerequisite, enable and configure IPv6 on your local workstation. In our case, we
configured an interface with IPv4 and IPv6 addresses on the management workstation, as
shown in Example 4-2.

Example 4-2 Output of ipconfig on the management workstation


C:\Documents and Settings\Administrator>ipconfig
Windows IP Configuration
Ethernet adapter Local Area Connection:
Connection-specific DNS Suffix . :
Link-local IPv6 Address . . . . . : fe80::8dab:ed74:af80:752%11
IPv4 Address. . . . . . . . . . . : 10.18.228.172
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . : 10.18.228.1

To update the management IP address to IPv6, complete the following steps:


1. Select Settings → Network, as shown in Figure 4-39.


Figure 4-39 Network configuration window

2. Select Management IP Addresses and click on port 1 of one of the nodes, as shown in
Figure 4-40.

Figure 4-40 Management port IP address configuration


3. In the window that is shown in Figure 4-41, complete the following steps:
a. Select Show IPv6.
b. Enter an IPv6 address in the IP Address field.
c. Enter an IPv6 prefix in the Subnet Mask/Prefix field. The Prefix field can have a value
of 0 - 127.
d. Enter an IPv6 gateway in the Gateway field.
e. Click OK.

Figure 4-41 Modifying the IP addresses: Adding IPv6 addresses

4. A confirmation window opens, as shown in Figure 4-42. Click Apply Changes.

Figure 4-42 Confirming the IPv6 changes

5. The Change Management task is started on the server, as shown in Figure 4-43 on
page 172. Click Close when the task completes.


Figure 4-43 Change management Task window

6. Test the IPv6 connectivity to the cluster by using a compatible IPv6 and SVC web browser
on your local workstation.
7. Remove the IPv4 address in the SVC GUI by accessing the same window, as shown in
Figure 4-41 on page 171. Validate this change by clicking OK. (A CLI alternative for this
migration is sketched after this list.)
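
As an alternative to the GUI, the management IP address can also be changed from the CLI.
The following is a hedged sketch only; the addresses are placeholders, and the chsystemip
parameter names for IPv6 (-clusterip_6, -gw_6, and -prefix_6) are assumptions that should be
verified against the command-line reference for your code level:

svctask chsystemip -clusterip_6 2001:db8::1234 -gw_6 2001:db8::1 -prefix_6 64 -port 1
svcinfo lssystemip       # verify that the new IPv6 management address is configured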

4.5.2 Migrating a cluster from IPv6 to IPv4


The process of migrating a cluster from IPv6 to IPv4 is identical to the process that we
described in 4.5.1, “Migrating a cluster from IPv4 to IPv6” on page 169, except that you add
IPv4 addresses and remove the IPv6 addresses.


Chapter 5. Host configuration


In this chapter, we describe the host configuration procedures that are required to attach
supported hosts to the IBM SAN Volume Controller (SVC).

This chapter includes the following topics:


򐂰 Host attachment overview
򐂰 IBM SAN Volume Controller setup
򐂰 iSCSI
򐂰 Microsoft Windows information
򐂰 Using SAN Volume Controller CLI from a Windows host
򐂰 Microsoft Volume Shadow Copy
򐂰 Specific Linux (on x86/x86_64) information
򐂰 VMware configuration information
򐂰 Using the SDDDSM, SDDPCM, and SDD web interface
򐂰 More information


5.1 Host attachment overview


The SVC supports a wide range of host types (both IBM and non-IBM), which makes it
possible to consolidate storage in an open systems environment into a common pool of
storage. Then, you can use and manage the storage pool more efficiently as a single entity
from a central point on the SAN.

The ability to consolidate storage for attached open systems hosts provides the following
benefits:
򐂰 Unified, easier storage management.
򐂰 Increased utilization rate of the installed storage capacity.
򐂰 Advanced Copy Services functions offered across storage systems from separate
vendors.
򐂰 Only one kind of multipath driver to consider for attached hosts.

5.2 IBM SAN Volume Controller setup


In most SVC environments, where high performance and high availability requirements exist,
hosts are attached through a storage area network (SAN) by using the Fibre Channel
Protocol (FCP). Even though other supported SAN configurations are available, for example,
single fabric design, a SAN that consists of two independent fabrics is a preferred practice
and a commonly used setup. This design provides redundant paths and prevents unwanted
interference between fabrics if an incident affects one of the fabrics.

Starting with SVC V6.4, Fibre Channel over Ethernet (FCoE) is supported on models
2145-CG8 and newer. Only 10 GbE lossless Ethernet or faster is supported.

Redundant paths to volumes can be provided for both SAN-attached and iSCSI-attached
hosts. Figure 5-1 on page 174 shows the types of attachments that are supported by SVC
release 7.4.

Figure 5-1 SVC host attachment overview


5.2.1 Fibre Channel and SAN setup overview


Host attachment to the SVC through FC must be made through a SAN fabric because direct
host attachment to SVC nodes is not supported. For SVC configurations, using two redundant
SAN fabrics is a preferred practice. Therefore, we advise that you have each host equipped
with a minimum of two host bus adapters (HBAs) or at least a dual-port HBA with each HBA
connected to a SAN switch in either fabric.

The SVC imposes no particular limit on the actual distance between the SVC nodes and host
servers. Therefore, a server can be attached to an edge switch in a core-edge configuration
and the SVC cluster is at the core of the fabric.

For host attachment, the SVC supports up to three inter-switch link (ISL) hops in the fabric,
which means that the server to the SVC can be separated by up to five FC links, four of which
can be 10 km long (6.2 miles) if longwave small form-factor pluggables (SFPs) are used.

The zoning capabilities of the SAN switch are used to create three distinct zones. SVC V7.6
supports 2 Gbps, 4 Gbps, 8 Gbps, or 16 Gbps FC fabric, depending on the hardware
platform and on the switch where the SVC is connected. In an environment where you have a
fabric with multiple-speed switches, the preferred practice is to connect the SVC and the disk
storage system to the switch that is operating at the highest speed.

The SVC nodes contain shortwave SFPs; therefore, they must be within the allowed distance
depending of the speed of the switch to which they attach. Therefore, the configuration that is
shown in Figure 5-2 on page 175 is supported.

Figure 5-2 Example of host connectivity

Table 5-1 shows the fabric type that can be used for communicating between hosts, nodes,
and RAID storage systems. These fabric types can be used at the same time.

Table 5-1   SVC communication options

Communication type                                     | Host to SVC | SVC to storage | SVC to SVC
Fibre Channel (FC) SAN                                 | Yes         | Yes            | Yes
iSCSI (1 Gbps or 10 Gbps Ethernet)                     | Yes         | No             | No
Fibre Channel over Ethernet (FCoE) (10 Gbps Ethernet)  | Yes         | Yes            | Yes

In Figure 5-2, the optical distance between SVC Node 1 and Host 2 is slightly over 40 km
(24.85 miles).

To avoid latencies that lead to degraded performance, we suggest that you avoid ISL hops
whenever possible. That is, in an optimal setup, the servers connect to the same SAN switch
as the SVC nodes.

Remember the following limits when you are connecting host servers to an SVC:
򐂰 Up to 512 hosts per I/O Group are supported, which results in a total of 2048 hosts per
cluster.
If the same host is connected to multiple I/O Groups of a cluster, it counts as a host in
each of these groups.
򐂰 A total of 2048 distinct, configured host worldwide port names (WWPNs) are supported
per I/O Group.
This limit is the sum of the FC host ports and the host iSCSI names (an internal WWPN is
generated for each iSCSI name) that are associated with all of the hosts that are
associated with a single I/O Group.

Access from a server to an SVC cluster through the SAN fabric is defined by using switch
zoning.

Consider the following rules for zoning hosts with the SVC:
򐂰 Homogeneous HBA port zones
Switch zones that contain HBAs must contain HBAs from similar host types and similar
HBAs in the same host. For example, AIX and Microsoft Windows hosts must be in
separate zones, and QLogic and Emulex adapters must also be in separate zones.

Important: A configuration that breaches this rule is unsupported because it can


introduce instability to the environment.

򐂰 HBA to SVC port zones


Place each host’s HBA in a separate zone with one or two SVC ports. If two ports exist,
use one from each node in the I/O Group. Do not place more than two SVC ports in a zone
with an HBA because this design results in more than the recommended number of paths,
as seen from the host multipath driver.

Number of paths: For n + 1 redundancy, use the following number of paths:


򐂰 With two HBA ports, zone HBA ports to SVC ports 1:2 for a total of four paths.
򐂰 With four HBA ports, zone HBA ports to SVC ports 1:1 for a total of four paths.

Optional (n+2 redundancy): With four HBA ports, zone HBA ports to SVC ports 1:2 for a
total of eight paths.

Here, we use the term HBA port to describe the SCSI initiator and SVC port to describe
the SCSI target.


򐂰 Maximum host paths per logical unit (LU)


For any volume, the number of paths through the SAN from the SVC nodes to a host must
not exceed eight. For most configurations, four paths to an I/O Group (four paths to each
volume that is provided by this I/O Group) are sufficient.

Important: The maximum number of host paths per LUN must not exceed eight.

򐂰 Balanced host load across HBA ports


To obtain the best performance from a host with multiple ports, ensure that each host port
is zoned with a separate group of SVC ports.
򐂰 Balanced host load across SVC ports
To obtain the best overall performance of the subsystem and to prevent overloading, the
workload to each SVC port must be equal. You can achieve this balance by zoning
approximately the same number of host ports to each SVC port.

Figure 5-3 shows an overview of a configuration where servers contain two single-port HBAs
each and the configuration includes the following characteristics:
򐂰 Distribute the attached hosts equally between two logical sets per I/O Group, if possible.
Connect hosts from each set to the same group of SVC ports. This “port group” includes
exactly one port from each SVC node in the I/O Group. The zoning defines the correct
connections.
򐂰 The port groups are defined in the following manner:
– Hosts in host set one of an I/O Group are always zoned to the P1 and P4 ports on both
nodes, for example, N1/N2 of I/O Group zero.
– Hosts in host set two of an I/O Group are always zoned to the P2 and P3 ports on both
nodes of an I/O Group.
򐂰 You can create aliases for these port groups (per I/O Group):
– Fabric A: IOGRP0_PG1 → N1_P1;N2_P1,IOGRP0_PG2 → N1_P3;N2_P3
– Fabric B: IOGRP0_PG1 → N1_P4;N2_P4,IOGRP0_PG2 → N1_P2;N2_P2
򐂰 Create host zones by always using the host port WWPN and the PG1 alias for hosts in the
first host set. Always use the host port WWPN and the PG2 alias for hosts from the
second host set. If a host must be zoned to multiple I/O Groups, add the PG1 or PG2
aliases from the specific I/O Groups to the host zone.

The use of this schema provides four paths to one I/O Group for each host and helps to
maintain an equal distribution of host connections on SVC ports.
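
To illustrate how the port-group aliases and host zones can be implemented on the switches,
the following is a hedged sketch of Brocade FOS-style zoning commands for fabric A. The
WWPNs, zone names, and configuration name are placeholders, and the equivalent commands
differ on other switch vendors:

alicreate "IOGRP0_PG1", "50:05:07:68:01:40:aa:01; 50:05:07:68:01:40:aa:02"    # N1_P1 and N2_P1
alicreate "IOGRP0_PG2", "50:05:07:68:01:40:aa:03; 50:05:07:68:01:40:aa:04"    # N1_P3 and N2_P3
zonecreate "z_HOST01_IOGRP0_PG1", "10:00:00:00:c9:aa:bb:01; IOGRP0_PG1"       # host HBA WWPN plus the PG1 alias
cfgadd "PROD_CFG_A", "z_HOST01_IOGRP0_PG1"
cfgsave
cfgenable "PROD_CFG_A"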


Figure 5-3 Overview of four-path host zoning

When possible, use the minimum number of paths that are necessary to achieve a sufficient
level of redundancy. For the SVC environment, no more than four paths per I/O Group are
required to accomplish this layout.

All paths must be managed by the multipath driver on the host side. If we assume that a
server is connected through four ports to the SVC, each volume is seen through eight paths.
With 125 volumes mapped to this server, the multipath driver must support handling up to
1,000 active paths (8 x 125).

For more configuration and operational information about the IBM Subsystem Device Driver
(SDD), see the Multipath Subsystem Device Driver User’s Guide, S7000303, which is
available at this web site:
http://ibm.com/support/docview.wss?uid=ssg1S7000303

For hosts that use four HBAs/ports with eight connections to an I/O Group, use the zoning
schema that is shown in Figure 5-4. You can combine this schema with the previous four-path
zoning schema.


Figure 5-4 Overview of eight-path host zoning

Port designation recommendations


The port to local node communication is used for mirroring write cache and for metadata
exchange between nodes. The port to local node communication is critical to the stable
operation of the cluster. The 2145-DH8 nodes with their 8-port, 12-port or 16-port
configurations provide an opportunity to isolate the port to local node traffic from other cluster
traffic on dedicated ports, therefore providing a level of protection against misbehaving
devices and workloads that can compromise the performance of the shared ports.

Additionally, isolating remote replication traffic on dedicated ports is beneficial and ensures
that problems that affect the cluster-to-cluster interconnection do not adversely affect the
ports on the primary cluster and therefore affect the performance of workloads running on the
primary cluster.

We recommend the following port designations for isolating both port to local and port to
remote node traffic, as shown in Table 5-2 on page 180.

Important: It is not recommended to zone host or storage ports to ports that are designated
for inter-node use or replication use in the 8/12/16-port configurations, and in no case should
inter-node and replication traffic use the same ports. This is to minimize buffer-to-buffer
(B2B) credit exhaustion situations, for example, long-distance latencies that are introduced
by replication tying up buffers that are needed by host, storage, or inter-node
communications.


Table 5-2   Port designation recommendations for isolating traffic

Slot/Port | Port # | SAN | 4-port nodes                   | 8-port nodes                    | 12-port nodes                   | 16-port nodes
S1P1      | 1      | A   | Host / Storage / Inter-node    | Host / Storage                  | Host / Storage                  | Host / Storage
S1P2      | 2      | B   | Host / Storage / Inter-node    | Host / Storage                  | Host / Storage                  | Host / Storage
S1P3      | 3      | A   | Host / Storage / Replication (Inter-node if no replication planned) | Host / Storage | Host / Storage | Host / Storage
S1P4      | 4      | B   | Host / Storage / Replication (Inter-node if no replication planned) | Host / Storage | Host / Storage | Host / Storage
S2P1      | 5      | A   | -                              | Inter-node*                     | Inter-node*                     | Inter-node*
S2P2      | 6      | B   | -                              | Inter-node*                     | Inter-node*                     | Inter-node*
S2P3      | 7      | A   | -                              | Host / Storage or Replication** | Host / Storage or Replication** | Host / Storage or Replication**
S2P4      | 8      | B   | -                              | Host / Storage or Replication** | Host / Storage                  | Host / Storage
S3P1      | 9      | A   | -                              | -                               | Host / Storage                  | Host / Storage
S3P2      | 10     | B   | -                              | -                               | Host / Storage or Replication** | Host / Storage or Replication**
S3P3      | 11     | A   | -                              | -                               | Host / Storage or Inter-node*   | Host / Storage or Inter-node*
S3P4      | 12     | B   | -                              | -                               | Host / Storage or Inter-node*   | Host / Storage or Inter-node*
S5P1      | 13     | A   | -                              | -                               | -                               | Host / Storage
S5P2      | 14     | B   | -                              | -                               | -                               | Host / Storage
S5P3      | 15     | A   | -                              | -                               | -                               | Host / Storage
S5P4      | 16     | B   | -                              | -                               | -                               | Host / Storage

*localfcportmask   | 11 (or 1111 if no replication) | 110000   | 110000110000 | 110000110000
**remotefcportmask | 1100                           | 11000000 | 1001000000   | 1001000000

Assumption: The SAN column assumes an odd/even SAN port configuration. Modifications
will have to be made if other SAN connection schemes are used.
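
The port masks in the last two rows of the table are applied at the system level through the
CLI. The following is a hedged sketch that uses the 12-port values from the table; the
parameter names -localfcportmask and -partnerfcportmask (the latter corresponding to the
remotefcportmask row) are assumptions that should be verified against the command-line
reference for your code level:

svctask chsystem -localfcportmask 110000110000    # inter-node traffic on ports 5, 6, 11, and 12
svctask chsystem -partnerfcportmask 1001000000    # replication traffic on ports 7 and 10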


Note: These recommendations represent optimal configurations based on assigning


specific ports to specific uses and aligning the internal allocation of hardware CPU cores
and software I/O threads to those ports. Therefore, varying from these recommendations
may result in unexpected consequences. In addition, configuring as recommended above
will ensure the ability to replace nodes non-disruptively in the future. Although supported,
dedicating certain ports specifically for hosts and other ports for storage is not
recommended as it negates the full duplex capability of the FC HBA ports.

It is recommended that four ports be dedicated to inter-node use for nodes containing 12 or more ports under the following conditions:
򐂰 When the write data rate is expected to exceed 3 GBps per I/O Group for nodes containing 8 Gbps FC ports, or 6 GBps for nodes containing 16 Gbps FC ports.
򐂰 When Real-time Compression (RtC) is enabled and the write data rate is expected to exceed 1.5 GBps per I/O Group for nodes containing 8 Gbps FC ports, or 3 GBps for nodes containing 16 Gbps FC ports.

This recommendation provides the traffic isolation that you want and also simplifies migration
from existing configurations with only four ports, or even later migrations from 8-port, 12-port
or 16-port configurations to configurations with additional ports. More complicated port
mapping configurations that spread the port traffic across the adapters are supported and can
be considered. However, these approaches do not appreciably increase availability of the
solution because the mean time between failures (MTBF) of the adapter is not significantly
less than that of the non-redundant node components.

Although alternate port mappings that spread traffic across HBAs can allow adapters to come
back online following a failure, they will not prevent a node from going offline temporarily to
reboot and attempt to isolate the failed adapter and then rejoin the cluster. Our
recommendation takes all these considerations into account with a view that the greater
complexity might lead to migration challenges in the future and the simpler approach is best.

5.3 iSCSI
The iSCSI protocol is a block-level protocol that encapsulates SCSI commands into TCP/IP
packets and therefore, uses an existing IP network instead of requiring the FC HBAs and
SAN fabric infrastructure. The iSCSI standard is defined by RFC 3720. The iSCSI
connectivity is a software feature that is provided by the SVC code.

The iSCSI-attached hosts can use a single network connection or multiple network
connections.

Restriction: Only hosts can attach to the SVC through iSCSI. The SVC back-end storage must be attached through the SAN.

Each SVC node is equipped with up to three onboard Ethernet network interface cards (NICs), which can operate at a link speed of 10 Mbps, 100 Mbps, or 1000 Mbps. All NICs can be used to carry iSCSI traffic. Each node's NIC number 1 is used as the primary SVC cluster management port. For optimal performance, we advise that you use a 1 Gb Ethernet connection between the SVC and the iSCSI-attached hosts when the SVC node's onboard NICs are used.

Starting with the SVC 2145-CG8, an optional 10 Gbps 2-port Ethernet adapter (Feature Code 5700) is available. The required 10 Gbps shortwave SFPs are available as Feature Code 5711. If the 10 GbE option is installed, you cannot install any internal solid-state drives (SSDs). The 10 GbE option is used solely for iSCSI traffic.

Starting with the SVC 2145-DH8, an optional 10 Gbps 4-port Ethernet adapter (Feature Code
AH12) is available. This feature provides one I/O adapter card with four 10 Gb Ethernet ports
and SFP+ transceivers. It is used to add 10 Gb iSCSI/FCoE connectivity to the SVC Storage
Engine.

5.3.1 Initiators and targets


An iSCSI client, which is known as an (iSCSI) initiator, sends SCSI commands over an IP
network to an iSCSI target. We refer to a single iSCSI initiator or iSCSI target as an iSCSI
node.

You can use the following types of iSCSI initiators in host systems:
򐂰 Software initiator: Available for most operating systems, for example, AIX, Linux, and
Windows
򐂰 Hardware initiator: Implemented as a network adapter with an integrated iSCSI processing
unit, which is also known as an iSCSI HBA.

For more information about the supported operating systems for iSCSI host attachment and
the supported iSCSI HBAs, see the following websites:
򐂰 IBM SAN Volume Controller v7.6 Support Matrix:
http://www.ibm.com/support/docview.wss?uid=ssg1S1003658
򐂰 IBM SAN Volume Controller Knowledge Center:
http://www.ibm.com/support/knowledgecenter/api/redirect/svc/ic/index.jsp

An iSCSI target refers to a storage resource that is on an iSCSI server. It also refers to one of
potentially many instances of iSCSI nodes that are running on that server.

5.3.2 iSCSI nodes


One or more iSCSI nodes exist within a network entity. The iSCSI node is accessible through
one or more network portals. A network portal is a component of a network entity that has a
TCP/IP network address and can be used by an iSCSI node.

An iSCSI node is identified by its unique iSCSI name, which is referred to as an iSCSI qualified name (IQN). The purpose of this name is the identification of the node only, not the node's address. In iSCSI, the name is separated from the addresses. This separation allows multiple iSCSI nodes to use the same addresses, or, as implemented in the SVC, the same iSCSI node to use multiple addresses.

5.3.3 iSCSI qualified name


An SVC cluster can provide up to eight iSCSI targets, one per node. Each SVC node has its
own IQN, which, by default, is in the following form:
iqn.1986-03.com.ibm:2145.<clustername>.<nodename>

An iSCSI host in the SVC is defined by specifying its iSCSI initiator names. The following
example shows an IQN of a Windows server’s iSCSI software initiator:
iqn.1991-05.com.microsoft:itsoserver01


During the configuration of an iSCSI host in the SVC, you must specify the host’s initiator
IQNs.

An alias string can also be associated with an iSCSI node. The alias allows an organization to
associate a string with the iSCSI name. However, the alias string is not a substitute for the
iSCSI name.

Figure 5-5 shows an overview of the iSCSI implementation in the SVC.

Figure 5-5 SVC iSCSI overview

A host that is accessing SVC volumes through iSCSI connectivity uses one or more Ethernet
adapters or iSCSI HBAs to connect to the Ethernet network.

Both onboard Ethernet ports of an SVC node can be configured for iSCSI. If iSCSI is used for
host attachment, we advise that you dedicate Ethernet port one for the SVC management
and port two for iSCSI use. This way, port two can be connected to a separate network
segment or virtual LAN (VLAN) for iSCSI because the SVC does not support the use of VLAN
tagging to separate management and iSCSI traffic.

Note: Ethernet link aggregation (port trunking) or “channel bonding” for the SVC nodes’
Ethernet ports is not supported for the 1 Gbps ports.

For each SVC node, that is, for each instance of an iSCSI target node in the SVC node, you
can define two IPv4 and two IPv6 addresses or iSCSI network portals.

5.3.4 iSCSI setup of the SAN Volume Controller and host server
You must perform the following procedure when you are setting up a host server for use as an
iSCSI initiator with the SVC volumes. The specific steps vary depending on the particular host
type and operating system that you use.


To configure a host, first select a software-based iSCSI initiator or a hardware-based iSCSI initiator. For example, the software-based iSCSI initiator can be a Linux or Windows iSCSI software initiator. The hardware-based iSCSI initiator can be an iSCSI HBA inside the host server.

To set up your host server for use as an iSCSI software-based initiator with the SVC volumes, complete the following steps. (The CLI is used in this example; a sample command sequence is shown after the steps.)
1. Complete the following steps to set up your SVC cluster for iSCSI:
a. Select a set of IPv4 or IPv6 addresses for the Ethernet ports on the nodes that are in
the I/O Groups that use the iSCSI volumes.
b. Configure the node Ethernet ports on each SVC node in the clustered system by
running the cfgportip command.
c. Verify that you configured the node and the clustered system’s Ethernet ports correctly
by reviewing the output of the lsportip command and lssystemip command.
d. Use the mkvdisk command to create volumes on the SVC clustered system.
e. Use the mkhost command to create a host object on the SVC. It defines the host’s
iSCSI initiator to which the volumes are to be mapped.
f. Use the mkvdiskhostmap command to map the volume to the host object in the SVC.
2. Complete the following steps to set up your host server:
a. Ensure that you configured your IP interfaces on the server.
b. Ensure that your iSCSI HBA is ready to use, or install the software for the iSCSI
software-based initiator on the server, if needed.
c. On the host server, run the configuration methods for iSCSI so that the host server
iSCSI initiator logs in to the SVC clustered system and discovers the SVC volumes.
The host then creates host devices for the volumes.

After the host devices are created, you can use them with your host applications.
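The following lines are a minimal sketch of the SVC-side commands in step 1. The pool name, volume name, host name, IP addresses, and port and node numbers are hypothetical placeholders that must match your own configuration; the host IQN shown is the Windows initiator example from 5.3.3:

IBM_2145:ITSO_SVC1:admin>cfgportip -node 1 -ip 10.10.20.11 -mask 255.255.255.0 -gw 10.10.20.1 2
IBM_2145:ITSO_SVC1:admin>cfgportip -node 2 -ip 10.10.20.12 -mask 255.255.255.0 -gw 10.10.20.1 2
IBM_2145:ITSO_SVC1:admin>lsportip 2
IBM_2145:ITSO_SVC1:admin>mkvdisk -mdiskgrp Pool0_Site1 -iogrp 0 -size 10 -unit gb -name iscsi_vol01
IBM_2145:ITSO_SVC1:admin>mkhost -name iscsihost01 -iscsiname iqn.1991-05.com.microsoft:itsoserver01
IBM_2145:ITSO_SVC1:admin>mkvdiskhostmap -host iscsihost01 iscsi_vol01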

5.3.5 Volume discovery


Hosts can discover volumes through one of the following three mechanisms:
򐂰 Internet Storage Name Service (iSNS)
The SVC can register with an iSNS name server; the IP address of this server is set by
using the chsystem command. A host can then query the iSNS server for available iSCSI
targets.
򐂰 Service Location Protocol (SLP)
The SVC node runs an SLP daemon, which responds to host requests. This daemon reports the available services on the node. One service is the CIM object manager (CIMOM), which runs on the configuration node; the iSCSI I/O service can now also be reported.
򐂰 SCSI Send Target request
The host can also send a Send Target request by using the iSCSI protocol to the iSCSI TCP/IP port (port 3260). You must define the network portal IP addresses of the iSCSI targets before a discovery can be started (see the example after this list).
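As an illustration of the Send Target method, the following lines are a hedged sketch for a Linux host that uses the open-iscsi software initiator; the portal address 10.10.20.11 is a placeholder, and the exact client syntax depends on the initiator in use:

iscsiadm -m discovery -t sendtargets -p 10.10.20.11:3260
iscsiadm -m node -p 10.10.20.11:3260 --login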


5.3.6 Authentication
The authentication of hosts is optional; by default, it is disabled. The user can choose to enable Challenge Handshake Authentication Protocol (CHAP) authentication, which involves sharing a CHAP secret between the cluster and the host. If the correct key is not provided by the host, the SVC does not allow it to perform I/O to volumes. Also, you can assign a CHAP secret to the cluster.
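As a hedged example (the secret values and the host name iscsihost01 are placeholders), a per-host CHAP secret and a cluster-wide CHAP secret can be set from the CLI as follows:

IBM_2145:ITSO_SVC1:admin>chhost -chapsecret mysecret01 iscsihost01
IBM_2145:ITSO_SVC1:admin>chsystem -chapsecret clustersecret01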

5.3.7 Target failover


A new feature with iSCSI is the option to move iSCSI target IP addresses between the SVC nodes in an I/O Group. IP addresses are moved only from one node to its partner node if a node goes through a planned or unplanned restart. If the Ethernet link to the SVC clustered system fails due to a cause outside of the SVC (such as the cable being disconnected or the Ethernet router failing), the SVC makes no attempt to fail over an IP address to restore IP access to the cluster. To allow validation of the Ethernet access to the nodes, each node responds to ping at the standard one-per-second rate without frame loss.

A concept that is used for handling the iSCSI IP address failover is called a clustered Ethernet
port. A clustered Ethernet port consists of one physical Ethernet port on each node in the
cluster. The clustered Ethernet port contains configuration settings that are shared by all of
these ports.

Figure 5-6 on page 186 shows an example of an iSCSI target node failover. This example
provides a simplified overview of what happens during a planned or unplanned node restart in
an SVC I/O Group. The example refers to the SVC nodes with no optional 10 GbE iSCSI
adapter installed.

The following numbered comments relate to the numbers in Figure 5-6:


1. During normal operation, one iSCSI target node instance is running on each SVC node. All of the IP addresses (IPv4/IPv6) that belong to this iSCSI target (including the management addresses if the node acts as the configuration node) are presented on the two ports (P1/P2) of a node.
2. During a restart of an SVC node (N1), the iSCSI target node, including all of its network portal (IPv4/IPv6) IP addresses that are defined on Port1/Port2 and the management (IPv4/IPv6) IP addresses (if N1 acted as the configuration node), fails over to Port1/Port2 of the partner node within the I/O Group, node N2. An iSCSI initiator that is running on a server reconnects to its iSCSI target, that is, the same IP addresses that are now presented by a new node of the SVC cluster.
3. When the node (N1) finishes its restart, the iSCSI target node (including its IP addresses)
that is running on N2 fails back to N1. Again, the iSCSI initiator that is running on a server
runs a reconnect to its iSCSI target. The management addresses do not fail back. N2
remains in the role of the configuration node for this cluster.


Figure 5-6 iSCSI node failover scenario

5.3.8 Host failover


From a host perspective, a multipathing driver (MPIO) is not required to handle an SVC node
failover. In an SVC node restart, the host reconnects to the IP addresses of the iSCSI target
node that reappear after several seconds on the ports of the partner node.

A host multipathing driver for iSCSI is required in the following situations:


򐂰 To protect a host from network link failures, including port failures on the SVC nodes.
򐂰 To protect a host from an HBA failure (if two HBAs are in use).
򐂰 To protect a host from network failures, if the host is connected through two HBAs to two
separate networks.
򐂰 To provide load balancing on the server’s HBA and the network links.

The commands for the configuration of the iSCSI IP addresses were separated from the
configuration of the cluster IP addresses.

The following commands are new commands that are used for managing iSCSI IP addresses:
򐂰 The lsportip command lists the iSCSI IP addresses that are assigned for each port on
each node in the cluster.
򐂰 The cfgportip command assigns an IP address to each node’s Ethernet port for iSCSI
I/O.

The following commands are used for managing the cluster IP addresses (a brief example follows the list):
򐂰 The lssystemip command returns a list of the cluster management IP addresses that are
configured for each port.
򐂰 The chsystemip command modifies the IP configuration parameters for the cluster.
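As a hedged illustration (the addresses are placeholders), the cluster management IP configuration can be listed and the port 1 address changed as follows:

IBM_2145:ITSO_SVC1:admin>lssystemip
IBM_2145:ITSO_SVC1:admin>chsystemip -clusterip 10.10.10.50 -gw 10.10.10.1 -mask 255.255.255.0 -port 1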


The parameters for remote services (SSH and web services) remain associated with the
cluster object. During an SVC code upgrade, the configuration settings for the clustered
system are applied to the node Ethernet port 1.

For iSCSI-based access, the use of redundant network connections and separating iSCSI
traffic by using a dedicated network or virtual LAN (VLAN) prevents any NIC, switch, or target
port failure from compromising the host server’s access to the volumes.

Because both onboard Ethernet ports of an SVC node can be configured for iSCSI, we
advise that you dedicate Ethernet port 1 for SVC management and port 2 for iSCSI usage. By
using this approach, port 2 can be connected to a dedicated network segment or VLAN for
iSCSI. Because the SVC does not support the use of VLAN tagging to separate management
and iSCSI traffic, you can assign the correct LAN switch port to a dedicated VLAN to separate
SVC management and iSCSI traffic.

5.4 Microsoft Windows information


In the following sections, we describe specific information about the connection of hosts that
are based on Windows to the SVC environment.

5.4.1 Configuring Windows Server 2008 and 2012 hosts


This section provides an overview of the requirements for attaching an SVC to a host that is running Windows Server 2008 R2, Windows Server 2012, or Windows Server 2012 R2. You must install the IBM Subsystem Device Driver Device Specific Module (SDDDSM) multipath driver to make the Windows server capable of handling volumes that are presented by the SVC.

Important: With Windows 2012, you can use native Microsoft device drivers, but we strongly advise that you install the IBM SDDDSM drivers. More information is available at this website:

http://www.ibm.com/support/docview.wss?uid=ssg1S4000350

Before you attach the SVC to your host, ensure that all of the following requirements are
fulfilled:
򐂰 Check all prerequisites that are provided in section 2.0 of the SDDDSM readme file.
򐂰 Check the LUN limitations for your host system. Ensure that enough FC adapters are
installed in the server to handle the total number of LUNs that you want to attach.

5.4.2 Configuring Windows


To configure the Windows hosts, complete the following steps:
1. Ensure that the latest OS service pack and hot fixes are applied to your Windows server
system.
2. Use the latest supported firmware and driver levels on your host system.
3. Install the HBA or HBAs on the Windows server, as described in 5.4.4, “Installing and
configuring the host adapter” on page 188.


4. Connect the Windows Server FC host adapters to the switches.


5. Configure the switches (zoning).
6. Install the FC host adapter driver, as described in 5.4.3, “Hardware lists, device driver,
HBAs, and firmware levels” on page 188.
7. Configure the HBA for hosts that are running Windows, as described in 5.4.4, “Installing
and configuring the host adapter” on page 188.
8. Check the HBA driver readme file for the required Windows registry settings, as described
in 5.4.3, “Hardware lists, device driver, HBAs, and firmware levels” on page 188.
9. Check the disk timeout on Windows Server, as described in 5.4.5, “Changing the disk
timeout on Windows Server” on page 188.
10.Install and configure SDDDSM.
11.Restart the Windows Server host system.
12.Configure the host, volumes, and host mapping in the SVC.
13.Use Rescan disk in Computer Management of the Windows Server to discover the
volumes that were created on the SVC.

5.4.3 Hardware lists, device driver, HBAs, and firmware levels


For more information about the supported hardware, device driver, and firmware, see this
website:
http://ibm.com/systems/storage/software/virtualization/svc/interop.html

On this page, browse to section V7.6.x, select Supported Hardware, Device Driver,
Firmware and Recommended Software Levels, and then search for Windows.

At this website, you also can find the hardware list for supported HBAs and the driver levels
for Windows. Check the supported firmware and driver level for your HBA and follow the
manufacturer’s instructions to upgrade the firmware and driver levels for each type of HBA.
Most manufacturers’ driver readme files list the instructions for the Windows registry
parameters that must be set for the HBA driver.

5.4.4 Installing and configuring the host adapter


Install the host adapters in your system. See the manufacturer’s instructions for the
installation and configuration of the HBAs.

Also, check the documentation that is provided for the server system for the installation
guidelines of FC HBAs regarding the installation in certain PCI(e) slots, and so on.

The detailed configuration settings that you must make for the various vendors’ FC HBAs are
available in the SVC Information Center by selecting Installing → Host attachment → Fibre
Channel host attachments → Hosts running the Microsoft Windows Server operating
system.

5.4.5 Changing the disk timeout on Windows Server


This section describes how to change the disk I/O timeout value on Windows Server 2008
R2, and Windows Server 2012 systems.


On your Windows Server hosts, complete the following steps to change the disk I/O timeout
value to 60 in the Windows registry:
1. In Windows, click Start, and then select Run.
2. In the dialog text box, enter regedit and press Enter.
3. In the registry browsing tool, locate the
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Disk\TimeOutValue key.
4. Confirm that the value for the key is 60 (decimal value), and, if necessary, change the
value to 60, as shown in Figure 5-7 on page 189.

Figure 5-7 Regedit
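Alternatively (a hedged sketch that assumes the same decimal value of 60), the timeout value can be set from an elevated command prompt instead of through the registry editor:

reg add HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeOutValue /t REG_DWORD /d 60 /f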

5.4.6 Installing the SDDDSM multipath driver on Windows


This section describes how to install the SDDDSM driver on a Windows Server 2008 R2 host
and Windows Server 2012.

Windows Server 2012, Windows Server 2008 (R2), and MPIO


Microsoft Multipath I/O (MPIO) is a generic multipath driver that is provided by Microsoft,
which does not form a complete solution. It works with device-specific modules (DSMs),
which usually are provided by the vendor of the storage subsystem. This design allows the
parallel operation of multiple vendors’ storage systems on the same host without interfering
with each other because the MPIO instance interacts only with that storage system for which
the DSM is provided.

MPIO is not installed with the Windows operating system, by default. Instead, storage
vendors must pack the MPIO drivers with their own DSMs. IBM SDDDSM is the IBM multipath
I/O solution that is based on Microsoft MPIO technology. It is a device-specific module that is
designed specifically to support IBM storage devices on Windows Server 2008 (R2) and
Windows 2012 servers.

The intention of MPIO is to achieve better integration of multipath storage with the operating
system. It also allows the use of multipathing in the SAN infrastructure during the boot
process for SAN boot hosts.

SDDDSM for IBM SAN Volume Controller


The SDDDSM installation is a package for the SVC device for the Windows Server 2008 (R2)
and Windows Server 2012 operating systems. Together with MPIO, SDDDSM is designed to
support the multipath configuration environments in the SVC. SDDDSM is in a host system
with the native disk device driver and provides the following functions:
򐂰 Enhanced data availability
򐂰 Dynamic I/O load-balancing across multiple paths
򐂰 Automatic path failover protection


򐂰 Enabled concurrent firmware upgrade for the storage system


򐂰 Path-selection policies for the host system

No SDDDSM support exists for Windows Server 2000 because SDDDSM requires the
STORPORT version of the HBA device drivers. Table 5-3 on page 190 lists the SDDDSM
driver levels that are supported at the time of this writing.

Table 5-3 Currently supported SDDDSM driver levels


Windows operating system SDD level

Windows Server 2012 (x64) 2.4.5.1

Windows Server 2008 R2 (x64) 2.4.5.1

Windows Server 2008 (32-bit)/Windows Server 2008 (x64) 2.4.5.1

For more information about the levels that are available, see this website:
http://ibm.com/support/docview.wss?uid=ssg1S7001350#WindowsSDDDSM

To download SDDDSM, see this website:


http://ibm.com/support/docview.wss?uid=ssg1S4000350#SVC

After you download the appropriate archive (.zip file) from this URL, extract it to your local
hard disk and start setup.exe to install SDDDSM. A command prompt window opens, as
shown in Figure 5-8. Confirm the installation by entering Y.

Figure 5-8 SDDDSM installation

After the setup completes, enter Y again to confirm the reboot request, as shown in
Figure 5-9.


Figure 5-9 Reboot system after installation

After the reboot, the SDDDSM installation is complete. You can verify the installation
completion in Device Manager because the SDDDSM device appears (as shown in
Figure 5-10) and the SDDDSM tools are installed, as shown in Figure 5-11.

Figure 5-10 SDDDSM installation

The SDDDSM tools are installed, as shown in Figure 5-11.


Figure 5-11 SDDDSM installation
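As an additional check (a hedged example; the reported level varies with the installed package), you can confirm the installed SDDDSM level from the SDDDSM command prompt:

C:\Program Files\IBM\SDDDSM>datapath query version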

5.4.7 Attaching SVC volumes to Microsoft Windows Server 2008 R2 and to Windows Server 2012

Create the volumes on the SVC and map them to the Windows Server 2008 R2 or 2012 host.

In this example, we mapped three SVC disks to the Windows Server 2008 R2 host that is
named Diomede, as shown in Example 5-1.

Example 5-1 SVC host mapping to host Diomede

Complete the following steps to use the devices on your Windows Server 2008 R2 host:
1. Click Start → Run.
2. Run the diskmgmt.msc command, and then click OK. The Disk Management window
opens.
3. Select Action → Rescan Disks, as shown in Figure 5-12.


Figure 5-12 Windows Server 2008 R2: Rescan disks

4. The SVC disks now appear in the Disk Management window, as shown in Figure 5-13 on
page 193.

Figure 5-13 Windows Server 2008 R2 Disk Management window

After you assign the SVC disks, they are also available in Device Manager. The three
assigned drives are represented by SDDDSM/MPIO as IBM-2145 Multipath disk devices
in the Device Manager, as shown in Figure 5-14.


Figure 5-14 Windows Server 2008 R2 Device Manager

5. To check that the disks are available, select Start → All Programs → Subsystem Device
Driver DSM, and then click Subsystem Device Driver DSM, as shown in Figure 5-15 on
page 194. The SDDDSM command-line utility appears.

Figure 5-15 Windows Server 2008 R2 Subsystem Device Driver DSM utility

6. Run the datapath query device command and press Enter. This command displays all of
the disks and the available paths, including their states, as shown in Example 5-2.

Example 5-2 Windows Server 2008 R2 SDDDSM command-line utility


C:\Program Files\IBM\SDDDSM>datapath query device

Total Devices : 3

DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 6005076801A180E9080000000000002B Reserved: No LUN SIZE: 12.0GB
HOST INTERFACE: FC
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 0 0
1 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 1429 0


2 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 1456 0


3 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 0 0

DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 6005076801A180E9080000000000002C Reserved: No LUN SIZE: 12.0GB
HOST INTERFACE: FC
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 1520 0
1 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 0 0
2 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 0 0
3 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 1517 0

DEV#: 2 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 6005076801A180E9080000000000002D Reserved: No LUN SIZE: 12.0GB
HOST INTERFACE: FC
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 27 0
1 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 1396 0
2 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 1459 0
3 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 0 0

C:\Program Files\IBM\SDDDSM>

SAN zoning: When the SAN zoning guidance is followed, we see this result, which
uses one volume and a host with two HBAs: (number of volumes) x (number of paths
per I/O Group per HBA) x (number of HBAs) = 1 x 2 x 2 = four paths.

7. Right-click the disk in Disk Management and then select Online to place the disk online,
as shown in Figure 5-16.

Figure 5-16 Windows Server 2008 R2: Place disk online

8. Repeat step 7 for all of your attached SVC disks.


9. Right-click one disk again and select Initialize Disk, as shown in Figure 5-17.


Figure 5-17 Windows Server 2008 R2: Initialize Disk

10.Mark all of the disks that you want to initialize and then click OK, as shown in Figure 5-18
on page 196.

Figure 5-18 Windows Server 2008 R2: Initialize Disk

11.Right-click the unallocated disk space and then select New Simple Volume, as shown in Figure 5-19.

Figure 5-19 Windows Server 2008 R2: New Simple Volume

12.The New Simple Volume Wizard opens. Click Next.


13.Enter a disk size and then click Next, as shown in Figure 5-20.


Figure 5-20 Windows Server 2008 R2: New Simple Volume

14.Assign a drive letter and then click Next, as shown in Figure 5-21.

Figure 5-21 Windows Server 2008 R2: New Simple Volume

15.Enter a volume label and then click Next, as shown in Figure 5-22.


Figure 5-22 Windows Server 2008 R2: New Simple Volume

16.Click Finish. Repeat steps 9 - 16 for every SVC disk on your host system (Figure 5-23 on
page 198).

Figure 5-23 Windows Server 2008 R2: Disk Management

5.4.8 Extending a volume


You can expand a volume in the SVC cluster, even if it is mapped to a host. Certain operating
systems, such as Windows Server, can handle the volumes that are expanded even if the
host has applications running.

A volume that is defined in a FlashCopy, Metro Mirror, or Global Mirror mapping on the SVC cannot be expanded unless that mapping is removed. Therefore, the FlashCopy, Metro Mirror, or Global Mirror relationship on that volume must be stopped before the volume can be expanded.


If the volume is part of a Microsoft Cluster (MSCS), Microsoft recommends that you shut
down all but one MSCS cluster node. Also, you must stop the applications in the resource that
access the volume to be expanded before the volume is expanded. Applications that are
running in other resources can continue to run. After the volume is expanded, start the
applications and the resource, and then restart the other nodes in the MSCS.

To expand a volume in use on a Windows Server host, you use the Windows DiskPart utility.

To start DiskPart, select Start → Run, and enter DiskPart.

DiskPart was developed by Microsoft to ease the administration of storage on Windows hosts.
DiskPart is a command-line interface (CLI) that you can use to manage disks, partitions, and
volumes by using scripts or direct input on the command line. You can list disks and volumes,
select them, and after selecting them, get more detailed information, create partitions, extend
volumes, and so on. For more information about DiskPart, see this web site:
http://www.microsoft.com
For more information about expanding the partitions of a cluster-shared disk, see this web
site:
http://support.microsoft.com/kb/304736

Next, we show an example of how to expand a volume from the SVC on a Windows Server
2008 R2 host.

To list a volume size, use the lsvdisk <VDisk_name> command. This command provides the volume size information for the Senegal_bas0001 volume before the volume is expanded. Here, we can see that the capacity is 10 GB, and we can see the value of the vdisk_UID. To see which vpath this volume is on the Windows Server 2008 R2 host, we use the datapath query device SDD command on the Windows host (Figure 5-24).

We can see that the serial 6005076801A180E9080000000000000F of Disk1 on the Windows


host (Figure 5-24) matches the volume ID of Senegal_bas0001.

To see the size of the volume on the Windows host, we use Disk Management, as shown in
Figure 5-24.

Figure 5-24 Windows Server 2008: Disk Management


This window shows that the volume size is 10 GB. To expand the volume on the SVC, we use
the svctask expandvdisksize command to increase the capacity on the volume. In this
example, we expand the volume by 1 GB, as shown in Example 5-3 on page 200.

Example 5-3 svctask expandvdisksize command


IBM_2145:ITSO_SVC DH8:admin>expandvdisksize -size 1 -unit gb Senegal_bas0001
IBM_2145:ITSO SVC DH8:admin>lsvdisk Senegal_bas0001
id 12
name Senegal_bas0001
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name Pool0_Site1
capacity 11.00GB
type striped
formatted yes
formatting no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801FF0084080000000000000D
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0
filesystem
mirror_write_priority latency
RC_change no
compressed_copy_count 0
access_IO_group_count 1
last_access_time
parent_mdisk_grp_id 0
parent_mdisk_grp_name Pool0_Site1
owner_type none
owner_id
owner_name
encrypt no
volume_id 12
volume_name Senegal_bas0001
function
copy_id 0
status online
sync yes
auto_delete no
primary yes
mdisk_grp_id 0
mdisk_grp_name Pool0_Site1
type striped
mdisk_id
mdisk_name
fast_write_state empty


used_capacity 11.00GB
real_capacity 11.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status balanced
tier ssd
tier_capacity 0.00MB
tier enterprise
tier_capacity 11.00GB
tier nearline
tier_capacity 0.00MB
compressed_copy no
uncompressed_used_capacity 11.00GB
parent_mdisk_grp_id 0
parent_mdisk_grp_name Pool0_Site1
encrypt no

To check that the volume was expanded, we use the svcinfo lsvdisk command. In
Example 5-3, we can see that the Senegal_bas0001 volume capacity was expanded to
11 GB.

After a disk rescan in Windows is performed, you can see the new unallocated space in Windows Disk Management, as shown in Figure 5-25 on page 201.

Figure 5-25 Expanded volume in Disk Management

This window shows that Disk1 now has 1 GB of new unallocated capacity. To make this capacity available for the file system, use the following commands, as shown in Example 5-4:
򐂰 diskpart: Starts DiskPart in a DOS prompt
򐂰 list volume: Shows you all available volumes
򐂰 select volume: Selects the volume to expand


򐂰 detail volume: Displays details for the selected volume, including the unallocated (free) capacity
򐂰 extend: Extends the volume to the available deallocated space

Example 5-4 Using the diskpart command


C:\>diskpart
Microsoft DiskPart version 6.3.6900
Copyright (C) 1999-2013 Microsoft Corporation.
On computer: SENEGAL

DISKPART> list volume

Volume ### Ltr Label Fs Type Size Status Info


---------- --- ----------- ----- ---------- ------- --------- --------
Volume 0 C NTFS Partition 75 GB Healthy Boot
Volume 1 S SVC_Senegal NTFS Partition 10 GB Healthy
Volume 2 D DVD-ROM 0 B Healthy

DISKPART> select volume 1


Volume 1 is the selected volume.

DISKPART> detail volume

Disk ### Status Size Free Dyn Gpt


-------- ---------- ------- ------- --- ---
* Disk 1 Online 11 GB 1020 MB

Readonly : No
Hidden : No
No Default Drive Letter: No
Shadow Copy : No
Offline : No
Bitlocker Encrypted : No
Installable : No
Volume Capacity : 11 GB
Volume Free Space : 1024 MB

DISKPART> extend

DiskPart successfully extended the volume.

DISKPART> detail volume

Disk ### Status Size Free Dyn Gpt


-------- ---------- ------- ------- --- ---
* Disk 1 Online 11 GB 0 B

Read-only : No
Hidden : No
No Default Drive Letter: No
Shadow Copy : No
Offline : No
Bitlocker Encrypted : No
Installable : yes


After the volume is extended, the detail volume command shows no free capacity on the
volume anymore. The list volume command shows the file system size. The Disk
Management window also shows the new disk size, as shown in Figure 5-26.

Figure 5-26 Disk Management after extending the disk

The preceding example uses a Windows Basic Disk. Dynamic disks can be expanded by expanding the underlying SVC volume. The new space appears as unallocated space at the end of the disk.

In this case, you do not need to use the DiskPart tool. Instead, you can use the Windows Disk Management functions to allocate the new space. Expansion works irrespective of the volume type (simple, spanned, mirrored, and so on) on the disk. Dynamic disks can be expanded without stopping I/O, in most cases.

Important: Never try to upgrade your Basic Disk to Dynamic Disk or vice versa without
backing up your data. This operation is disruptive for the data because of a change in the
position of the logical block address (LBA) on the disks.

5.4.9 Removing a disk on Windows


To remove a disk from Windows, when the disk is an SVC volume, follow the standard
Windows procedure to ensure that no data that we want to preserve is on the disk, that no
applications are using the disk, and that no I/O is going to the disk. After you complete this
procedure, remove the host mapping on the SVC. Ensure that you are removing the correct
volume. To confirm, use SDD to locate the serial number of the disk. On the SVC, run the
lshostvdiskmap command to find the volume’s name and number. Also, check that the SDD
serial number on the host matches the unique identifier (UID) on the SVC for the volume.

When the host mapping is removed, perform a rescan for the disk. Disk Management on the server removes the disk, and the vpath goes into the CLOSE state on the server. You can verify these actions by running the datapath query device SDD command, but the closed vpath is removed only after the server is rebooted.


In the following examples, we show how to remove an SVC volume from a Windows server.
We show this example on a Windows Server 2008 operating system, but the steps also apply
to Windows Server 2008 R2 and Windows Server 2012.

Figure 5-24 on page 199 shows the Disk Management before removing the disk.

We now remove Disk 1. To find the correct volume information, we find the Serial/UID number
by using SDD, as shown in Example 5-5.

Example 5-5 Removing the SVC disk from the Windows server
C:\Program Files\IBM\SDDDSM>datapath query device

Total Devices : 3

DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 6005076801A180E9080000000000000F Reserved: No LUN SIZE: 11.0GB
HOST INTERFACE: FC
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 1471 0
1 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 0 0
2 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 0 0
3 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 1324 0

DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 6005076801A180E90800000000000010 Reserved: No LUN SIZE: 10.0GB
HOST INTERFACE: FC
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 20 0
1 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 94 0
2 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 55 0
3 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 0 0

DEV#: 2 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 6005076801A180E90800000000000011 Reserved: No LUN SIZE: 10.0GB
HOST INTERFACE: FC
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 100 0
1 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 0 0
2 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 0 0
3 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 0 0

Knowing the Serial/UID of the volume and that the host name is Senegal, we identify the host
mapping to remove by running the lshostvdiskmap command on the SVC. Then, we remove
the actual host mapping, as shown in Example 5-6.

Example 5-6 Finding and removing the host mapping


IBM_2145:ITSO_SVC1:admin>lshostvdiskmap Senegal
id name SCSI_id vdisk_id vdisk_name vdisk_UID IO_group_id
IO_group_name


1 Senegal 0 7 Senegal_bas0001 6005076801A180E9080000000000000F 0


io_grp0
1 Senegal 1 8 Senegal_bas0002 6005076801A180E90800000000000010 0
io_grp0
1 Senegal 2 9 Senegal_bas0003 6005076801A180E90800000000000011 0
io_grp0

IBM_2145:ITSO_SVC1:admin>rmvdiskhostmap -host Senegal Senegal_bas0001

IBM_2145:ITSO_SVC1:admin>svcinfo lshostvdiskmap Senegal


id name SCSI_id vdisk_id vdisk_name vdisk_UID IO_group_id
IO_group_name

1 Senegal 1 8 Senegal_bas0002 6005076801A180E90800000000000010 0


io_grp0
1 Senegal 2 9 Senegal_bas0003 6005076801A180E90800000000000011 0
io_grp0

Here, we can see that the volume is removed from the server. On the server, we then perform
a disk rescan in Disk Management, and we now see that the correct disk (Disk1) was
removed, as shown in Figure 5-27.

Figure 5-27 Disk Management: Disk is removed

SDDDSM also shows us that the status for all paths to Disk1 changed to CLOSE because the
disk is not available, as shown in Example 5-7 on page 206.


Example 5-7 SDD: Closed path


C:\Program Files\IBM\SDDDSM>datapath query device

Total Devices : 3

DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 6005076801A180E9080000000000000F Reserved: No LUN SIZE: 11.0GB
HOST INTERFACE: FC
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk1 Part0 CLOSE NORMAL 1471 0
1 Scsi Port2 Bus0/Disk1 Part0 CLOSE NORMAL 0 0
2 Scsi Port3 Bus0/Disk1 Part0 CLOSE NORMAL 0 0
3 Scsi Port3 Bus0/Disk1 Part0 CLOSE NORMAL 1324 0

DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 6005076801A180E90800000000000010 Reserved: No LUN SIZE: 10.0GB
HOST INTERFACE: FC
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 20 0
1 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 94 0
2 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 55 0
3 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 0 0

DEV#: 2 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 6005076801A180E90800000000000011 Reserved: No LUN SIZE: 10.0GB
HOST INTERFACE: FC
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 100 0
1 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 0 0
2 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 0 0
3 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 0 0

The disk (Disk1) is now removed from the server. However, to remove the SDDDSM
information about the disk, you must reboot the server at a convenient time.

5.5 Using SAN Volume Controller CLI from a Windows host


To run CLI commands, we must install and prepare an SSH client on the Windows host system.

We can install the PuTTY SSH client software on a Windows host by using the PuTTY
installation program. You can download PuTTY from this website:
http://www.chiark.greenend.org.uk/~sgtatham/putty/

SSH client alternatives for Windows are available at this website:


http://www.openssh.com/windows.html


Cygwin software features an option to install an OpenSSH client. You can download Cygwin
from this website:
http://www.cygwin.com/
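As a minimal sketch (assuming that an SSH key pair was created with PuTTYgen and that the public key was associated with an SVC user; the key path and the cluster management IP address 10.10.10.50 are placeholders), a CLI command can be run non-interactively from a Windows command prompt by using the PuTTY plink utility:

C:\>plink -i C:\keys\svc_admin.ppk admin@10.10.10.50 lssystem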

5.6 Microsoft Volume Shadow Copy


The SVC provides support for Microsoft Volume Shadow Copy Service (VSS). Microsoft VSS
can provide a point-in-time (shadow) copy of a Windows host volume while the volume is
mounted and the files are in use.

In this section, we describe how to install VSS. The following operating system versions are
supported:
򐂰 Windows Server 2008 with SP2 (x86 and x86_64)
򐂰 Windows Server 2008 R2 with SP1
򐂰 Windows Server 2012

The following components are used to support the service:


򐂰 The SVC
򐂰 IBM System Storage hardware provider, which is known as the IBM System Storage
Support for Microsoft VSS
򐂰 Microsoft Volume Shadow Copy Service

IBM System Storage Support for Microsoft VSS (IBM VSS) is installed on the Windows host.

To provide the point-in-time shadow copy, complete the following process:


1. A backup application on the Windows host starts a snapshot backup.
2. VSS notifies IBM VSS that a copy is needed.
3. The SVC prepares the volume for a snapshot.
4. VSS quiesces the software applications that are writing data on the host and flushes file
system buffers to prepare for a copy.
5. The SVC creates the shadow copy by using the FlashCopy service.
6. VSS notifies the writing applications that I/O operations can resume and notifies the
backup application that the backup was successful.

VSS maintains a free pool of volumes for use as a FlashCopy target and a reserved pool of
volumes. These pools are implemented as virtual host systems on the SVC.

5.6.1 Installation overview


The steps for implementing IBM VSS must be completed in the correct sequence. Before you
begin, you must have experience with, or knowledge of, administering a Windows operating
system. You also must have experience with, or knowledge of, administering an SVC.

You must complete the following tasks:


򐂰 Verify that the system requirements are met.
򐂰 Install IBM VSS.
򐂰 Verify the installation.
򐂰 Create a free pool of volumes and a reserved pool of volumes on the SVC.


5.6.2 System requirements for the IBM System Storage hardware provider
Ensure that your system satisfies the following requirements before you install IBM VSS and
Virtual Disk Service software on the Windows operating system:
򐂰 SVC with FlashCopy enabled
򐂰 IBM System Storage Support for Microsoft VSS and Virtual Disk Service (VDS) software

5.6.3 Installing the IBM System Storage hardware provider


This section describes the steps to install the IBM System Storage hardware provider on a
Windows server. You must satisfy all of the system requirements before you start the
installation.

During the installation, you are prompted to enter information about the SVC Master Console,
including the location of the truststore file. The truststore file is generated during the
installation of the Master Console. You must copy this file to a location that is accessible to the
IBM System Storage hardware provider on the Windows server.

When the installation is complete, the installation program might prompt you to restart the
system. Complete the following steps to install the IBM System Storage hardware provider on
the Windows server:
1. Download the installation archive from the following IBM web site and extract it to a
directory on the Windows server where you want to install IBM System Storage Support
for VSS:
http://ibm.com/support/docview.wss?uid=ssg1S4000833
2. Log in to the Windows server as an administrator and browse to the directory where the
installation files were downloaded.
3. Run the installation program by double-clicking IBMVSSVDS_xx_xx_xx.exe.
4. The Welcome window opens, as shown in Figure 5-28. Click Next to continue with the
installation.


Figure 5-28 IBM System Storage Support for VSS and VDS installation: Welcome


5. Accept the license agreement in the next window. The Choose Destination Location
window opens, as shown in Figure 5-29. Click Next to accept the default directory where
the setup program installs the files, or click Change to select another directory.

Figure 5-29 Choose Destination Location


6. Click Install to begin the installation, as shown in Figure 5-30.

Figure 5-30 IBM System Storage Support for VSS and VDS installation

7. The Enter CIM Server Details window opens. Enter the following information in the fields
(Figure 5-31 on page 212):
a. The CIM Server Address field is propagated with the URL according to the CIM server
address.
b. In the CIM User field, enter the user name that the IBM VSS software uses to access the SVC.
c. In the CIM Password field, enter the password for the SVC user name. Click Next.


Figure 5-31 Enter CIM Server Details

8. In the next window, click Finish. If necessary, the InstallShield Wizard prompts you to
restart the system, as shown in Figure 5-32 on page 213.


Figure 5-32 Installation complete

Additional information: If these settings change after installation, you can use the
ibmvcfg.exe tool to update the Microsoft Volume Shadow Copy and Virtual Disk Services
software with the new settings.

If you do not have the CIM Agent server, port, or user information, contact your CIM Agent
administrator.

5.6.4 Verifying the installation


Complete the following steps to verify the installation:
1. From the Windows server start menu, select Start → All Programs → Administrative
Tools → Services.
2. Ensure that the service that is named “IBM System Storage Support for Microsoft Volume
Shadow Copy Service and Virtual Disk Service” software appears and that the Status is
set to Started and that the Startup Type is set to Automatic.
3. Open a command prompt window and run the following command:
vssadmin list providers
4. This command ensures that the service that is named IBM System Storage Support for
Microsoft Volume Shadow Copy Service and Virtual Disk Service software is listed as a
provider, as shown in Example 5-8.

Example 5-8 Microsoft Software Shadow copy provider


PS C:\Users\Administrator> vssadmin list providers


vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool


(C) Copyright 2001-2013 Microsoft Corp.

Provider name: 'Microsoft File Share Shadow Copy provider'


Provider type: Fileshare
Provider Id: {89300202-3cec-4981-9171-19f59559e0f2}
Version: 1.0.0.1

Provider name: 'Microsoft Software Shadow Copy provider 1.0'


Provider type: System
Provider Id: {b5946137-7b9f-4925-af80-51abd60b20d5}
Version: 1.0.0.7

Provider name: 'IBM Storage Volume Shadow Copy Service Hardware Provider'
Provider type: Hardware
Provider Id: {d90dd826-87cf-42ce-a88d-b32caa82025b}
Version: 4.10.0.1

If you can successfully perform all of these verification tasks, the IBM System Storage
Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software was
successfully installed on the Windows server.

5.6.5 Creating free and reserved pools of volumes


The IBM System Storage hardware provider maintains a free pool of volumes and a reserved
pool of volumes. Because these objects do not exist on the SVC, the free pool of volumes and
the reserved pool of volumes are implemented as virtual host systems. You must define these
two virtual host systems on the SVC.

When a shadow copy is created, the IBM System Storage hardware provider selects a
volume in the free pool, assigns it to the reserved pool, and then removes it from the free
pool. This process protects the volume from being overwritten by other Volume Shadow Copy
Service users.

To successfully perform a Volume Shadow Copy Service operation, enough volumes must be
available that are mapped to the free pool. The volumes must be the same size as the source
volumes.

Use the SVC GUI or SVC CLI to complete the following steps:
1. Create a host for the free pool of volumes. You can use the default name VSS_FREE or
specify another name. Associate the host with the worldwide port name (WWPN)
5000000000000000 (15 zeros), as shown in Example 5-9.

Example 5-9 Creating a mkhost for the free pool


IBM_2145:ITSO SVC 1:admin>mkhost -name VSS_FREE -hbawwpn 5000000000000000 -force
Host, id [2], successfully created

2. Create a virtual host for the reserved pool of volumes. You can use the default name
VSS_RESERVED or specify another name. Associate the host with the WWPN
5000000000000001 (14 zeros), as shown in Example 5-10.


Example 5-10 Creating a mkhost for the reserved pool


IBM_2145:ITSO SVC 1:admin>mkhost -name VSS_RESERVED -hbawwpn 5000000000000001 -force
Host, id [3], successfully created

3. Map the logical units (volumes) to the free pool of volumes. The volumes cannot be
mapped to any other hosts. If you have volumes that are created for the free pool of
volumes, you must assign the volumes to the free pool.
4. Create host mappings between the volumes that were selected in step 3 and the
VSS_FREE host to add the volumes to the free pool. Alternatively, you can run the
ibmvcfg add command to add volumes to the free pool, as shown in Example 5-11.

Example 5-11 Host mappings


IBM_2145:ITSO SVC 1:admin>mkvdiskhostmap -host VSS_FREE msvc0001
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO_SVC1:admin>mkvdiskhostmap -host VSS_FREE msvc0002
Virtual Disk to Host map, id [1], successfully created

5. Verify that the volumes were mapped. If you do not use the default WWPNs
5000000000000000 and 5000000000000001, you must configure the IBM System
Storage hardware provider with the WWPNs, as shown in Example 5-12.

Example 5-12 Verify hosts


IBM_2145:ITSO SVC 3:admin>lshostvdiskmap VSS_FREE
id name SCSI_id vdisk_id vdisk_name vdisk_UID
IO_group_id IO_group_name
2 VSS_FREE 0 0 msvc0001 60050768018282E33800000000000019 0
io_grp0
2 VSS_FREE 1 5 msvc0002 60050768018282E3380000000000001A 0
io_grp0

5.6.6 Changing the configuration parameters


You can change the parameters that you defined when you installed the IBM System Storage
Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software.
To do so, use the ibmvcfg.exe utility. This command-line utility is in the C:\Program
Files\IBM\Hardware Provider for VSS-VDS directory, as shown in Example 5-13.

Example 5-13 Using ibmvcfg.exe utility help


PS C:\Program Files\IBM\Hardware Provider for VSS-VDS> .\ibmvcfg.exe

IBM VSSVDS Hardware Provider Configuration Tool Commands


-----------------------------------------------------------
C:\Program Files\IBM\Hardware Provider for VSS-VDS\ibmvcfg.exe <command> <command
arguments>

Commands:
/h | /help | -? | /?
showcfg
list <all|free|reserved|assigned|unassigned|infc|pool|iogroup> <-l> (verbose)
add <volume serial number list> (separated by spaces if add more than one)
rem <volume serial number list> (separated by spaces if remove more than one)


del <enter the target volume serial number or UUID for SVC in order to delete
incremental flashcopy relationship> (separated by spaces)
clear <one of the configuration settings>

cleanupDependentMaps
testsnapshot <Drive letter and mount point list >
Configuration:
set user <CIMOM user name>
set password <CIMOM password>
set trace [0-7]
set usingSSL <YES | NO>
set vssFreeInitiator <WWPN>
set vssReservedInitiator <WWPN>
set cimomPort <PORTNUM>
set cimomHost <Hostname>
set targetSVC
set backgroundCopy <0-100>
set incrementalFC <YES | NO>
set cimomTimeout <second, zero for unlimit >
set rescanOnceArr <sec> [CAUTION] Default is 0, time length [0-300].
set rescanOnceRem <sec> [CAUTION] Default is 0, time length [0-300].
set rescanRemMin <sec> [CAUTION] Default is 0, time length [0-300].
set rescanRemMax <sec> [CAUTION] Default is 45, time length [0-300].
set storageProtocol <auto, fc or iscsi>
set storagePool <storage pool name>
set allocateOption <option for dynamically allocating target volumes, standard(0)
or se(1)>
set ioGroup <io group(SVC Only) for dynamically allocated target volumes>
set vmhost <vmware web service address>
set vmusername <vmware web service user name>
set vmpassword <vmware web service login password>
set vmcredential <vmware web service session credential location>
set vmtimeout <vmware web service connection timeout>


Table 5-4 lists the available commands.

Table 5-4 Available ibmvcfg.exe utility commands

ibmvcfg showcfg
   Description: This command lists the current settings.
   Example: ibmvcfg showcfg

ibmvcfg set username <username>
   Description: This command sets the user name to access the SVC Console.
   Example: ibmvcfg set username Dan

ibmvcfg set password <password>
   Description: This command sets the password of the user name that accesses the SVC Console.
   Example: ibmvcfg set password mypassword

ibmvcfg set targetSVC <ipaddress>
   Description: This command specifies the IP address of the SVC on which the volumes are located
   when volumes are moved to and from the free pool with the ibmvcfg add and ibmvcfg rem commands.
   The IP address is overridden if you use the -s flag with the ibmvcfg add and ibmvcfg rem commands.
   Example: set targetSVC 10.43.86.120

set backgroundCopy
   Description: This command sets the background copy rate for FlashCopy.
   Example: set backgroundCopy 80

ibmvcfg set usingSSL
   Description: This command specifies whether to use the Secure Sockets Layer (SSL) protocol to
   connect to the SVC Console.
   Example: ibmvcfg set usingSSL yes

ibmvcfg set cimomPort <portnum>
   Description: This command specifies the SVC Console port number. The default value is 5999.
   Example: ibmvcfg set cimomPort 5999

ibmvcfg set cimomHost <server name>
   Description: This command sets the name of the server where the SVC Console is installed.
   Example: ibmvcfg set cimomHost cimomserver

ibmvcfg set namespace <namespace>
   Description: This command specifies the namespace value that the Master Console uses. The
   default value is \root\ibm.
   Example: ibmvcfg set namespace \root\ibm

ibmvcfg set vssFreeInitiator <WWPN>
   Description: This command specifies the WWPN of the host. The default value is 5000000000000000.
   Modify this value only if a host exists in your environment with a WWPN of 5000000000000000.
   Example: ibmvcfg set vssFreeInitiator 5000000000000000

ibmvcfg set vssReservedInitiator <WWPN>
   Description: This command specifies the WWPN of the host. The default value is 5000000000000001.
   Modify this value only if a host is already in your environment with a WWPN of 5000000000000001.
   Example: ibmvcfg set vssReservedInitiator 5000000000000001

ibmvcfg listvols
   Description: This command lists all of the volumes, including information about the size,
   location, and host mappings.
   Example: ibmvcfg listvols

ibmvcfg listvols all
   Description: This command lists all of the volumes, including information about the size,
   location, and host mappings.
   Example: ibmvcfg listvols all

ibmvcfg listvols free
   Description: This command lists the volumes that are in the free pool.
   Example: ibmvcfg listvols free

ibmvcfg listvols unassigned
   Description: This command lists the volumes that are currently not mapped to any hosts.
   Example: ibmvcfg listvols unassigned

ibmvcfg add -s ipaddress
   Description: This command adds one or more volumes to the free pool of volumes. Use the -s
   parameter to specify the IP address of the SVC where the volumes are located. The -s parameter
   overrides the default IP address that is set with the ibmvcfg set targetSVC command.
   Example: ibmvcfg add vdisk12
            ibmvcfg add 600507680187000350000000000000BA -s 66.150.210.141

ibmvcfg rem -s ipaddress
   Description: This command removes one or more volumes from the free pool of volumes. Use the -s
   parameter to specify the IP address of the SVC where the volumes are located. The -s parameter
   overrides the default IP address that is set with the ibmvcfg set targetSVC command.
   Example: ibmvcfg rem vdisk12
            ibmvcfg rem 600507680187000350000000000000BA -s 66.150.210.141

5.7 Specific Linux (on x86/x86_64) information


The following sections describe specific information that relates to the connection of Linux on
Intel hosts to the SVC environment.


5.7.1 Configuring the Linux host


Complete the following steps to configure the Linux host:
1. Use the latest firmware levels on your host system.
2. Install the HBA or HBAs on the Linux server, as described in 5.4.4, “Installing and
configuring the host adapter” on page 188.
3. Install the supported HBA driver or firmware and upgrade the kernel, if required.
4. Connect the Linux server FC host adapters to the switches.
5. Configure the switches (zoning), if needed.
6. Install SDD for Linux, as described in 5.7.5, “Multipathing in Linux” on page 220.
7. Configure the host, volumes, and host mapping in the SVC.
8. Rescan for LUNs on the Linux server to discover the volumes that were created on the
SVC.
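
For step 8, most current Linux distributions allow you to rescan the SCSI bus without a reboot. The following commands are a minimal sketch and assume two FC host adapters that are named host0 and host1; the adapter numbers differ between systems:

# Rescan each FC host adapter for newly mapped SVC volumes
echo "- - -" > /sys/class/scsi_host/host0/scan
echo "- - -" > /sys/class/scsi_host/host1/scan
# List the discovered SCSI devices and look for the IBM 2145 entries
cat /proc/scsi/scsi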

5.7.2 Configuration information


The SVC supports hosts that run the following Linux distributions:
򐂰 Red Hat Enterprise Linux
򐂰 SUSE Linux Enterprise Server

For the latest information, see this website:


http://www.ibm.com/storage/support/2145

This website provides the hardware list for supported HBAs and device driver levels for Linux.
Check the supported firmware and driver level for your HBA, and follow the manufacturer’s
instructions to upgrade the firmware and driver levels for each type of HBA.

5.7.3 Disabling automatic Linux system updates


Many Linux distributions give you the ability to configure your systems for automatic system
updates. Red Hat provides this ability in the form of a program that is called up2date. SUSE
Linux provides the YaST Online Update utility. These features periodically query for updates
that are available for each host. You can configure them to automatically install any new
updates that they find.

Often, the automatic update process also upgrades the system to the latest kernel level. Old
hosts that are still running SDD must turn off the automatic update of kernel levels because
certain drivers that are supplied by IBM, such as SDD, depend on a specific kernel and cease
to function on a new kernel. Similarly, HBA drivers must be compiled against specific kernels
to function optimally. By allowing automatic updates of the kernel, you risk affecting your host
systems unexpectedly.
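
As one hedged example of how kernel updates can be excluded on a Red Hat system that still runs SDD (the exact mechanism depends on your distribution and update tool), the yum configuration can be told to skip kernel packages:

# /etc/yum.conf - prevent the updater from replacing the running kernel
[main]
exclude=kernel*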

5.7.4 Setting queue depth with QLogic HBAs


The queue depth is the number of I/O operations that can be run in parallel on a device.

Complete the following steps to set the maximum queue depth:


1. Add the following line to the /etc/modules.conf file for the 2.6 kernel (SUSE Linux
Enterprise Server 9, or later, or Red Hat Enterprise Linux 4, or later):


options qla2xxx ql2xfailover=0 ql2xmaxqdepth=new_queue_depth


2. Rebuild the RAM disk, which is associated with the kernel that is being used, by using one
of the following commands:
– If you are running on a SUSE Linux Enterprise Server operating system, run the
mk_initrd command.
– If you are running on a Red Hat Enterprise Linux operating system, run the mkinitrd
command, and then restart.
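
For example, with an illustrative maximum queue depth of 32 (choose a value that is appropriate for your environment and check the limits that are documented for your HBA), the entry and the RAM disk rebuild on a SUSE system might look like the following sketch:

# /etc/modules.conf - illustrative queue depth of 32
options qla2xxx ql2xfailover=0 ql2xmaxqdepth=32

# Rebuild the RAM disk (SUSE Linux Enterprise Server)
mk_initrd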

5.7.5 Multipathing in Linux


Red Hat Enterprise Linux 5, and later, and SUSE Linux Enterprise Server 10, and later,
provide native multipath support in the operating system. On older systems, it is
necessary to install the IBM SDD multipath driver. Installation and configuration instructions
for SDD are not provided here because SDD is not intended to be deployed on newly installed Linux hosts.

Device Mapper Multipath (DM-MPIO)


Red Hat Enterprise Linux 5 (RHEL5), and later, and SUSE Linux Enterprise Server 10
(SLES10), and later, provide their own multipath support for the operating system. Therefore,
you do not have to install another device driver. Always check whether your operating system
includes one of the supported multipath drivers. This information is available by using the
links that are provided in 5.7.2, “Configuration information” on page 219.

In SLES10, the multipath drivers and tools are installed, by default. However, for RHEL5, the
user must explicitly choose the multipath components during the operating system installation
to install them. Each of the attached SVC LUNs has a special device file in the Linux /dev
directory.

Hosts that use 2.6 kernel Linux operating systems can have as many FC disks as the SVC
allows. The following web site provides the current information about the maximum
configuration for the SVC:
http://www.ibm.com/storage/support/2145

Creating and preparing DM-MPIO volumes for use


First, you must start the MPIO daemon on your system. Run the following SLES commands or
RHEL commands on your host system:
򐂰 Enable MPIO for SLES by running the following commands:
/etc/init.d/boot.multipath {start|stop}
/etc/init.d/multipathd
{start|stop|status|try-restart|restart|force-reload|reload|probe}

Tip: Run insserv boot.multipath multipathd to automatically load the multipath


driver and multipathd daemon during the start.

򐂰 Enable MPIO for RHEL by running the following commands:


modprobe dm-multipath
modprobe dm-round-robin
service multipathd start
chkconfig multipathd on
Example 5-14 shows the commands that are run on a Red Hat Enterprise Linux 6.3
operating system.


Example 5-14 Starting MPIO daemon on Red Hat Enterprise Linux


[root@palau ~]# modprobe dm-round-robin
[root@palau ~]# multipathd start
[root@palau ~]# chkconfig multipathd on
[root@palau ~]#

Complete the following steps to enable multipathing for IBM devices:


1. Open the multipath.conf file and follow the instructions. The file is in the /etc directory.
Example 5-15 shows editing by using vi.

Example 5-15 Editing the multipath.conf file


[root@palau etc]# vi multipath.conf

2. Add the following entry to the multipath.conf file:


device {
vendor "IBM"
product "2145"
path_grouping_policy group_by_prio
prio_callout "/sbin/mpath_prio_alua /dev/%n"
}

Note: You can download example multipath.conf files from the following IBM
Subsystem Device Driver for Linux website:
http://ibm.com/support/docview.wss?uid=ssg1S4000107#DM

3. Restart the multipath daemon, as shown in Example 5-16.

Example 5-16 Stopping and starting the multipath daemon


[root@palau ~]# service multipathd stop
Stop multipathd.service
[root@palau ~]# service multipathd start
Start multipathd.service

4. Run the multipath -dl command to see the MPIO configuration. You see two groups with
two paths each. All paths must have the state [active][ready], and one group shows
[enabled].
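The exact output format depends on your multipath-tools version. As a hypothetical illustration only, an SVC volume might be reported similar to the following, with one path group active on the preferred node and one enabled on the non-preferred node (device names and WWIDs differ on your system):

mpatha (360050768018282e33800000000000019) dm-2 IBM,2145
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 2:0:0:0 sdb 8:16 active ready running
| `- 3:0:0:0 sdd 8:48 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 2:0:1:0 sdc 8:32 active ready running
  `- 3:0:1:0 sde 8:64 active ready running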
5. Run the fdisk command to create a partition on the SVC, as shown in Example 5-17.

Example 5-17 fdisk command


[root@palau scsi]# fdisk -l

Disk /dev/sda: 21.5 GB, 21474836480 bytes, 41943040 sectors


Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
Disk identifier: 0x000b8617
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 1026047 512000 83 Linux
/dev/sda2 1026048 41943039 20458496 8e Linux LVM

Disk /dev/mapper/rhel-swap: 4227 MB, 422785832 bytes, 8257536 sectors


Units = sectors of 1 * 512 = 512 bytes


Sector size (logical/physical): 512 bytes / 512 bytes


I/O size (minimum/optimal):512 bytes / 512 bytes

Disk /dev/mapper/rhel-root: 16.7 GB, 16718495744 bytes, 32653312 sectors


Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal):512 bytes / 512 bytes

Disk /dev/sdb: 10.7 GB, 10737418240 bytes, 20971520 sectors


Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal):512 bytes / 512 bytes

Disk /dev/sdc: 4244 MB, 4244635648 bytes, 8344356 sectors


Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal):512 bytes / 512 bytes

Disk /dev/sdd: 4244 MB, 4244635648 bytes, 8344356 sectors


Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal):512 bytes / 512 bytes

Disk /dev/sde: 4244 MB, 4244635648 bytes, 8344356 sectors


Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal):512 bytes / 512 bytes

[root@palau scsi]# fdisk /dev/sdb


Welcome to fdisk (util-linux 2.22.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0xa56d3d57

Command (m for help): n


Partition type:

p primary (0 primary, 0 extended, 4 free)


e extended
Select (default p)
Using default response p
Partition number (1-4, default 1):
Using default value 1
First sector (2048-20971519, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519):
Using default value 20971519
Partition 1 of type Linux and of size 10 GiB is set
Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.


Syncing disks.


[root@palau scsi]# shutdown -r now

6. Create a file system by running the mkfs command, as shown in Example 5-18.

Example 5-18 mkfs command


[root@palau ~]# mkfs -t ext3 /dev/sdb
mke2fs 1.42.6 (21-Sep-2012)
/dev/sdb is entire device, not just one partition!
Proceed anyway? (y,n) y
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
655360 inodes, 2621440 blocks
51814 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2684354560
80 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done


Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

[root@palau ~]#

7. Create a mount point and mount the drive, as shown in Example 5-19.

Example 5-19 Mount point


[root@palau ~]# mkdir /svcdisk_0
[root@palau ~]# cd /svcdisk_0/
[root@palau svcdisk_0]# mount -t ext3 /dev/sdb /svcdisk_0
[root@palau svcdisk_0]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
73608360 1970000 67838912 3% /
/dev/mapper/rhel-root 16316416 4733960 11582456 30% /
/dev/hda1 101086 15082 80785 16% /boot
tmpfs 967984 0 967984 0% /dev/shm
/dev/sr0 4194384 4194384 0 100%
/run/media/root/RHEL Server 7
/dev/sdb 10321208 154232 9642688 2% /svcdisk_0


5.8 VMware configuration information


This section describes the requirements and other information for attaching the SVC to
various guest host operating systems that are running on the VMware operating system.

5.8.1 Configuring VMware hosts


To configure the VMware hosts, complete the following steps:
1. Install the HBAs in your host system, as described in 5.8.3, “HBAs for hosts that are
running VMware” on page 224.
2. Connect the server FC host adapters to the switches.
3. Configure the switches (zoning), as described in 5.8.4, “VMware storage and zoning
guidance” on page 225.
4. Install the VMware operating system (if not already installed) and check the HBA timeouts,
as described in 5.8.5, “Setting the HBA timeout for failover in VMware” on page 225.
5. Configure the host, volumes, and host mapping in the SVC, as described in 5.8.7,
“Attaching VMware to volumes” on page 226.

5.8.2 Operating system versions and maintenance levels


For more information about VMware support, see this website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1005253#_VMWareFC

At the time of this writing, the following versions are supported:


򐂰 ESXi V6.x
򐂰 ESXi V5.x

5.8.3 HBAs for hosts that are running VMware


Ensure that your hosts that are running on VMware operating systems use the correct HBAs
and firmware levels. Install the host adapters in your system. See the manufacturer’s
instructions for the installation and configuration of the HBAs.

For more information about supported HBAs for older ESX versions, see this website:
http://ibm.com/storage/support/2145

In most cases, the supported HBA device drivers are included in the ESXi server build. However, for
various newer storage adapters, you might be required to load more ESX drivers. Check the
following VMware hardware compatibility list (HCL) if you must load a custom driver for your
adapter:
http://www.vmware.com/resources/compatibility/search.php

After the HBAs are installed, load the default configuration of your FC HBAs. You must use
the same model of HBA with the same firmware in one server. Configuring Emulex and
QLogic HBAs to access the same target in one server is not supported.

SAN boot support


The SAN boot of any guest operating system is supported under VMware. The nature of
VMware means that SAN boot is a requirement on any guest operating system. The guest


operating system must be on a SAN disk and SAN-attached to the SVC. iSCSI SAN boot is
also supported by VMware but will not be covered in this section.

If you are unfamiliar with the VMware environment and the advantages of storing virtual
machines and application data on a SAN, it is useful to get an overview about VMware
products before you continue.

VMware documentation is available at this web site:


http://www.vmware.com/support/pubs/

5.8.4 VMware storage and zoning guidance


The VMware ESXi server can use a Virtual Machine File System (VMFS). VMFS is a file
system that is optimized to run multiple virtual machines as one workload to minimize disk
I/O. It also can handle concurrent access from multiple physical machines because it enforces
the appropriate access controls. Therefore, multiple ESX hosts can share the set of LUNs.

Theoretically, you can run all of your virtual machines on one LUN. However, for performance
reasons in more complex scenarios, it can be better to load balance virtual machines over
separate HBAs, storages, or arrays.

If you run an ESX host with several virtual machines, it can make sense to separate workloads
across arrays of different performance classes. For example, you can use one “slow” array for
Print and Active Directory Services guest operating systems that do not generate high I/O, and
another, faster array for database guest operating systems.

The use of fewer volumes has the following advantages:


򐂰 More flexibility to create virtual machines without having to provision more space on the SVC
򐂰 More possibilities for taking VMware snapshots
򐂰 Fewer volumes to manage

The use of more and smaller volumes has the following advantages:
򐂰 Separate I/O characteristics of the guest operating systems
򐂰 More flexibility (the multipathing policy and disk shares are set per volume)
򐂰 Microsoft Cluster Service requires its own volume for each cluster disk resource

For more information about designing your VMware infrastructure, see the following websites:
򐂰 http://www.vmware.com/vmtn/resources/
򐂰 http://www.vmware.com/resources/techresources/1059

Guidelines: ESX server hosts that use shared storage for virtual machine failover or load
balancing must be in the same zone. You can have only one VMFS datastore per volume.

5.8.5 Setting the HBA timeout for failover in VMware


The timeout for failover for ESXi hosts must be set to 30 seconds. Consider the following
points:
򐂰 For QLogic HBAs, the timeout depends on the PortDownRetryCount parameter. The
timeout value is shown in the following example:
2 x PortDownRetryCount + 5 seconds
Set the qlport_down_retry parameter to 14.


򐂰 For Emulex HBAs, the lpfc_linkdown_tmo and the lpfc_nodev_tmo parameters must be
set to 30 seconds.

To make these changes on your system (Example 5-20), complete the following steps:
1. Back up the /etc/vmware/esx.conf file.
2. Open the /etc/vmware/esx.conf file for editing.
The file includes a section for every installed SCSI device.
3. Locate your SCSI adapters and edit the previously described parameters.
4. Repeat this process for every installed HBA.

Example 5-20 Setting the HBA timeout


[root@Teddy svc]# cp /etc/vmware/esx.conf /etc/vmware/esx.confbackup
[root@Teddy svc]# vi /etc/vmware/esx.conf

5.8.6 Multipathing in ESX


The VMware ESXi server performs native multipathing. You do not need to install another
multipathing driver, such as SDD.

5.8.7 Attaching VMware to volumes


First, ensure that the VMware host is logged in to the SVC. In our examples, we use the
VMware ESXi server V5.5 and the host name of Teddy.

Enter the following command to check the status of the host:


svcinfo lshost <hostname>

Example 5-21 shows that the host Teddy is logged in to the SVC with two HBAs.

Example 5-21 The lshost Teddy


IBM_2145:ITSO SVC 3:superuser>lshost Teddy
id 4
name Teddy
port_count 2
type generic
mask 1111_11
iogrp_count 4
status online
WWPN 2100001B321F26C6
node_logged_in_count 2
state active
WWPN 2101001B323F26C6
node_logged_in_count 2
state active
site_id
site_name

Then, the SCSI Controller Type must be set in VMware. By default, the ESXi server disables
the SCSI bus sharing and does not allow multiple virtual machines to access the same VMFS
file at the same time. See Figure 5-33 on page 227.


But, in many configurations, such as configurations for high availability, the virtual machines
must share the VMFS file to share a disk.

Complete the following steps to set the SCSI Controller Type in VMware:
1. Log in to your Infrastructure Client, shut down the virtual machine, right-click it, and select
Edit settings.
2. Highlight the SCSI Controller, and select one of the following available settings, depending
on your configuration:
– None: Disks cannot be shared by other virtual machines.
– Virtual: Disks can be shared by virtual machines on the same server.
– Physical: Disks can be shared by virtual machines on any server.
Click OK to apply the setting.

Figure 5-33 Changing SCSI bus settings

3. Create your volumes on the SVC. Then, map them to the ESX hosts.
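
A hedged sketch of this step on the SVC CLI follows; the pool name, size, and SCSI ID are illustrative and must match your environment (volume Anton0 and host Teddy are the objects that are used in Example 5-22):

IBM_2145:ITSO SVC 3:superuser>mkvdisk -mdiskgrp VMware -iogrp 0 -size 10 -unit gb -name Anton0
Virtual Disk, id [6], successfully created
IBM_2145:ITSO SVC 3:superuser>mkvdiskhostmap -host Teddy -scsi 0 Anton0
Virtual Disk to Host map, id [0], successfully created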

Tips: If you want to use features, such as VMotion, the volumes that own the VMFS file
must be visible to every ESX host that can host the virtual machine.

In the SVC, select Allow the virtual disks to be mapped even if they are already
mapped to a host.

The volume must have the same SCSI ID on each ESX host.

For this configuration, we created one volume and mapped it to our ESX host, as shown in
Example 5-22.


Example 5-22 Mapped volume to ESX host Teddy


IBM_2145:ITSO SVC 3:superuser>lshostvdiskmap Teddy
id name SCSI_id vdisk_id vdisk_name vdisk_UID IO_group_id
IO_group_name
4 Teddy 0 6 Anton0 60050768018282E3380000000000001B 0
io_grp0
4 Teddy 1 7 Anton1 60050768018282E3380000000000001C 0
io_grp0
4 Teddy 2 8 Anton2 60050768018282E3380000000000001D 0
io_grp0

ESX does not automatically scan for SAN changes (except when rebooting the entire ESXi
server). If you made any changes to your SVC or SAN configuration, complete the following
steps:
1. Open your VMware Infrastructure Client.
2. Select the host.
3. In the Hardware window, choose Storage Adapters.
4. Click Rescan.

To configure a storage device to use it in VMware, complete the following steps:


1. Open your VMware Infrastructure Client.
2. Select the host for which you want to see the assigned volumes and click the
Configuration tab.
3. In the Hardware window on the left side, click Storage.
4. To create a storage pool, select click here to create a datastore or click Add storage if
the field does not appear (Figure 5-34 on page 228).

Figure 5-34 VMware add datastore

5. The Add storage wizard opens.


6. Select Create Disk/Lun, and then click Next.
7. Select the SVC volume that you want to use for the datastore, and then click Next.
8. Review the disk layout. Click Next.
9. Enter a datastore name. Click Next.
10.Select a block size and enter the size of the new partition. Click Next.
11.Review your selections. Click Finish.


Now, the created VMFS datastore appears in the Storage window, as shown in Figure 5-35.
You see the details for the highlighted datastore. Check whether all of the paths are available
and that the Path Selection is set to Round Robin.

Figure 5-35 VMware storage configuration

If not all of the paths are available, check your SAN and storage configuration. After the
problem is fixed, select Refresh to perform a path rescan. The view is updated to the new
configuration.

The preferred practice is to use the Round Robin Multipath Policy for the SVC. If you need to
edit this policy, complete the following steps:
1. Highlight the datastore.
2. Click Properties.
3. Click Managed Paths.
4. Click Change.
5. Select Round Robin.
6. Click Change.
7. Click Close.

Complete these steps for all available datastores.

Now, your VMFS datastore is created and you can start using it for your guest operating
systems. Round Robin distributes the I/O load across all available paths. If you want to use a
fixed path, the policy setting Fixed also is supported.

5.8.8 Volume naming in VMware


In the Virtual Infrastructure Client, a volume is displayed as a sequence of three or four
numbers, which are separated by colons (Figure 5-36) and are shown under the Device and
SAN Identifier columns, as shown in the following example:


<SCSI HBA>:<SCSI target>:<SCSI volume>:<disk partition>

The following definitions apply to the previous variables:


򐂰 SCSI HBA: The number of the SCSI HBA (can change).
򐂰 SCSI target: The number of the SCSI target (can change).
򐂰 SCSI volume: The number of the volume (never changes).
򐂰 disk partition: The number of the disk partition. (This value never changes.) If the last
number is not displayed, the name stands for the entire volume.
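
As a worked illustration (the numbers are hypothetical), a device that is displayed as vmhba1:0:8:1 refers to SCSI HBA 1, SCSI target 0, SCSI volume (LUN) 8, and disk partition 1; the same entry without the trailing :1 refers to the entire volume.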

Figure 5-36 Volume naming in VMware

5.8.9 Setting the Microsoft guest operating system timeout


For a Windows 2008 Server operating system that is installed as a VMware guest operating
system, the disk timeout value must be set to 60 seconds.

For more information about performing this task, see 5.4.5, “Changing the disk timeout on
Windows Server” on page 188.

5.8.10 Extending a VMFS volume


VMFS volumes can be extended while virtual machines are running. First, you must extend
the volume on the SVC, and then you can extend the VMFS volume.

Note: Before you perform the steps that are described here, back up your data.


Complete the following steps to extend a volume:


1. Expand the volume by running the svctask expandvdisksize -size 1 -unit gb
<VDiskname> command, as shown in Example 5-23.

Example 5-23 Expanding volume Anton1 on the SVC


IBM_2145:ITSO SVC 3:superuser>lsvdisk Anton1
id 7
name Anton1
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 2
mdisk_grp_name VMware
capacity 10.00GB
type striped
formatted no
formatting no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018282E3380000000000001C
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 1
filesystem
mirror_write_priority latency
RC_change no
compressed_copy_count 0
access_IO_group_count 1
last_access_time 151028033208
parent_mdisk_grp_id 2
parent_mdisk_grp_name VMware
owner_type none
owner_id
owner_name

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 2
mdisk_grp_name VMware
type striped
mdisk_id
mdisk_name
fast_write_state empty


used_capacity 44.00MB
real_capacity 252.80MB
free_capacity 208.80MB
overallocation 4050
autoexpand on
warning 80
grainsize 256
se_copy yes
easy_tier on
easy_tier_status balanced
tier ssd
tier_capacity 0.00MB
tier enterprise
tier_capacity 252.80MB
tier nearline
tier_capacity 0.00MB
compressed_copy no
uncompressed_used_capacity 44.00MB
parent_mdisk_grp_id 2
parent_mdisk_grp_name VMware

IBM_2145:ITSO-CLS1:admin>expandvdisksize -size 100 -unit gb Anton1


IBM_2145:ITSO SVC 3:superuser>lsvdisk Anton1
id 7
name Anton1
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 2
mdisk_grp_name VMware
capacity 110.00GB
type striped
formatted no
formatting no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018282E3380000000000001C
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 1
filesystem
mirror_write_priority latency
RC_change no
compressed_copy_count 0


access_IO_group_count 1
last_access_time 151028033708
parent_mdisk_grp_id 2
parent_mdisk_grp_name VMware
owner_type none
owner_id
owner_name

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 2
mdisk_grp_name VMware
type striped
mdisk_id
mdisk_name

fast_write_state not_empty
used_capacity 44.00MB
real_capacity 252.80MB
free_capacity 208.80MB
overallocation 44556
autoexpand on
warning 80
grainsize 256
se_copy yes
easy_tier on
easy_tier_status balanced
tier ssd
tier_capacity 0.00MB
tier enterprise
tier_capacity 252.80MB
tier nearline
tier_capacity 0.00MB
compressed_copy no
uncompressed_used_capacity 44.00MB
parent_mdisk_grp_id 2
parent_mdisk_grp_name VMware
IBM_2145:ITSO SVC 3:superuser>

2. Open the Virtual Infrastructure Client.


3. Select the host.
4. Select Configuration.
5. Select Storage Adapters.
6. Click Rescan All.
7. Ensure that the Scan for new Storage Devices option is selected, and then click OK.
After the scan completes, the new capacity is displayed in the Details section.
8. Click Storage.
9. Right-click the VMFS volume and click Properties.
10.Click Increase.


11.Select the datastore that should be expanded.


12.Click Next.
13.Click Next.
14.Select Maximum available space or Custom space setting, and specify the wanted space.
15.Click Next.
16.Click Finish.
17.Click Close.

The VMFS volume is now extended and the new space is ready for use.

5.8.11 Removing a datastore from an ESX host


Before you remove a datastore from an ESX host, you must migrate or delete all of the virtual
machines that are on this datastore.

To remove the datastore, complete the following steps:


1. Back up the data.
2. Open the Virtual Infrastructure Client.
3. Select the host.
4. Select Configuration.
5. Select Storage.
6. Highlight the datastore that you want to remove.
7. Click Delete.
8. Read the warning, and if you are sure that you want to remove the datastore and delete all
of the data on it, click Yes.
9. Remove the host mapping on the SVC, or delete the volume, as shown in Example 5-24.

Example 5-24 Host mapping: Delete the volume


IBM_2145:ITSO SVC 3:superuser>rmvdiskhostmap -host Teddy Anton1
IBM_2145:ITSO SVC 3:superuser>rmvdisk Anton1

10.In the VI Client, select Storage Adapters.


11.Click Rescan.
12.Ensure that the Scan for new Storage Devices option is selected and click OK.
13.After the scan completes, the disk is removed from the view.

Your datastore is now removed successfully from the system.

5.9 Using the SDDDSM, SDDPCM, and SDD web interface


After the SDDDSM or SDD driver is installed, specific commands are available. To open a
command window for SDDDSM or SDD, select Start → Programs → Subsystem Device
Driver → Subsystem Device Driver Management from the desktop.


You can also configure SDDDSM to offer a web interface that provides basic information.
Before this configuration can work, you must configure the web interface. SDDSRV does not
bind to any TCP/IP port, by default, but it allows port binding to be enabled or disabled
dynamically.

For all platforms except Linux, the multipath driver package includes an sddsrv.conf
template file that is named the sample_sddsrv.conf file. On all UNIX platforms except Linux,
the sample_sddsrv.conf file is in the /etc directory. On Windows platforms, the
sample_sddsrv.conf file is in the directory in which SDDDSM was installed.

You must use the sample_sddsrv.conf file to create the sddsrv.conf file in the same directory
as the sample_sddsrv.conf file by copying it and naming the copied file sddsrv.conf. You can
then dynamically change the port binding by modifying the parameters in the sddsrv.conf file
and changing the values of Enableport and Loopbackbind to True.
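
On a Windows host, the procedure might look like the following sketch; the installation path is illustrative and depends on where SDDDSM was installed, and the exact capitalization of the parameters follows the sample file that is shipped with your driver level:

C:\Program Files\IBM\SDDDSM> copy sample_sddsrv.conf sddsrv.conf

# Relevant entries in sddsrv.conf after editing
enableport = true
loopbackbind = true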

5.10 More information


For more information about host attachment and configuration to the SVC, see the IBM SAN
Volume Controller: Host Attachment User’s Guide, SC26-7905.

For more information about SDDDSM configuration, see the IBM System Storage Multipath
Subsystem Device Driver User’s Guide, S7000303, which is available from this web site:
http://ibm.com/support/docview.wss?uid=ssg1S7000303

For more information about host attachment and storage subsystem attachment, and
troubleshooting, see the IBM SAN Volume Controller Knowledge Center at this web site:
http://www.ibm.com/support/knowledgecenter/api/redirect/svc/ic/index.jsp

5.10.1 SAN Volume Controller storage subsystem attachment guidelines


It is beyond the scope of this book to describe the attachment to each subsystem that the
SVC supports. The following material was useful in the writing of this book and when working
with clients:
򐂰 For more information about how you can optimize your back-end storage to maximize your
performance on the SVC, see SAN Volume Controller Best Practices and Performance
Guidelines, SG24-7521, which is available at this web site:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open
򐂰 For more information about the guidelines and procedures to optimize the performance
that is available from your DS8000 storage subsystem when it is attached to the SVC, see
DS8800 Performance Monitoring and Tuning, SG24-8013, which is available at this web
site:
http://www.redbooks.ibm.com/abstracts/sg248013.html?Open
򐂰 For more information about how to connect and configure your storage for optimized
performance on the SVC, see the IBM Midrange System Storage Implementation and
Best Practices Guide, SG24-6363, which is available at this web site:
http://www.redbooks.ibm.com/abstracts/sg246363.html?Open


򐂰 For more information about specific considerations for attaching the XIV Storage System
to an SVC, see IBM XIV Storage System: Architecture, Implementation and Usage,
SG24-7659, which is available at this web site:
http://www.redbooks.ibm.com/abstracts/sg247659.html?Open


Chapter 6. Data migration


In this chapter, we describe how to migrate from a conventional storage infrastructure to a
virtualized storage infrastructure by using the IBM Spectrum Virtualize V7.6 software running
on IBM SAN Volume Controller. We also explain how SVC can be phased out of a virtualized
storage infrastructure, for example, after a trial period or after using SVC as a data migration
tool.

We also introduce and demonstrate the SVC support of the nondisruptive movement of
volumes between SVC I/O Groups, which is referred to as nondisruptive volume move or
multinode volume access.

We also describe how to migrate from a fully allocated volume to a thin-provisioned volume by
using the volume mirroring feature and thin-provisioned volumes together.

This chapter includes the following topics:


򐂰 Migration overview
򐂰 Migration operations
򐂰 Functional overview of MDisk migration
򐂰 Migrating data from image mode volume
򐂰 Data migration for Windows using the GUI
򐂰 Migrating Linux SAN disks to SVC disks
򐂰 Migrating ESX SAN disks to SVC disks
򐂰 Migrating AIX SAN disks to SVC volumes
򐂰 Using SVC for storage migration
򐂰 Using volume mirroring and thin-provisioned volumes together


6.1 Migration overview


By using the SVC, you can change the mapping of volume extents to managed disk (MDisk)
extents, without interrupting host access to the volume. This functionality is used when
volume migrations are performed. It also applies to any volume that is defined on the SVC.

This functionality can be used for the following tasks:


򐂰 Migrating data from older back-end storage to SVC managed storage.
򐂰 Migrating data from one back-end controller to another back-end controller by using SVC
as a data block mover, and afterward removing SVC from the storage area network (SAN).
򐂰 Migrating data from managed mode back into image mode before the SVC is removed
from a SAN.
򐂰 You can redistribute volumes to accomplish the following tasks:
– Moving workload onto newly installed storage
– Moving workload off old or failing storage before you decommission the storage
– Moving workload to rebalance a changed workload
򐂰 Migrating data from one SVC clustered system to another SVC system.
򐂰 Moving volumes’ I/O caching between SVC I/O Groups to redistribute workload across an
SVC clustered system.

6.2 Migration operations


You can perform migration at the volume level or the extent level, depending on the purpose
of the migration. The following migration tasks are supported:
򐂰 Migrating extents within a storage pool and redistributing the extents of a volume on the
MDisks within the same storage pool
򐂰 Migrating extents off an MDisk (which is removed from the storage pool) to other MDisks in
the same storage pool
򐂰 Migrating a volume from one storage pool to another storage pool
򐂰 Migrating a volume to change the virtualization type of the volume to image
򐂰 Moving a volume between I/O Groups non-disruptively

6.2.1 Migrating multiple extents within a storage pool


You can migrate many volume extents at one time by using the migrateexts command.

For more information about the migrateexts command parameters, see the following
resources:
򐂰 The SVC command-line interface help by entering the following command:
help migrateexts
򐂰 The IBM System Storage SAN Volume Controller Command-Line Interface User’s Guide,
GC27-2287

When this command is run, a number of extents are migrated from the source MDisk where
the extents of the specified volume are located to a defined target MDisk that must be part of
the same storage pool.


You can specify a number of migration threads to be used in parallel (1 - 4).

If the type of the volume is image, the volume type changes to striped when the first extent is
migrated. The MDisk access mode changes from image to managed.
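
A hedged example of the command follows; the volume name, MDisk names, extent count, and thread count are illustrative only:

IBM_2145:ITSO SVC 1:superuser>migrateexts -vdisk volume_A -source mdisk0 -target mdisk1 -exts 16 -threads 2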

6.2.2 Migrating extents off an MDisk that is being deleted


When an MDisk is deleted from a storage pool by using the rmmdisk -force command, any
extents on the MDisk that are used by a volume are first migrated off the MDisk and onto
other MDisks in the storage pool before it is deleted.

In this case, the extents that must be migrated are moved onto the set of MDisks that are not
being deleted. This statement is true if multiple MDisks are being removed from the storage
pool at the same time.

If a volume uses one or more extents that must be moved as a result of running the rmmdisk
command, the virtualization type for that volume is set to striped (if it was previously
sequential or image).

If the MDisk is operating in image mode, the MDisk changes to managed mode while the
extents are being migrated. Upon deletion, it changes to unmanaged mode.

Using the -force flag: If the -force flag is not used and if volumes occupy extents on one
or more of the MDisks that are specified, the command fails.

When the -force flag is used and if volumes occupy extents on one or more of the MDisks
that are specified, all extents on the MDisks are migrated to the other MDisks in the
storage pool if enough free extents exist in the storage pool. The deletion of the MDisks is
postponed until all extents are migrated, which can take time. If insufficient free extents
exist in the storage pool, the command fails.
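
As an illustrative sketch (the MDisk and storage pool names are hypothetical), removing an MDisk and forcing its used extents to be migrated to the remaining MDisks in the pool looks like this:

IBM_2145:ITSO SVC 1:superuser>rmmdisk -mdisk mdisk5 -force Pool2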

6.2.3 Migrating a volume between storage pools


An entire volume can be migrated from one storage pool to another storage pool by using the
migratevdisk command. A volume can be migrated between storage pools regardless of the
virtualization type (image, striped, or sequential), although it changes to the virtualization type
of striped. The command varies, depending on the type of migration, as shown in Table 6-1.

Table 6-1 Migration types and associated commands


Storage pool-to-storage pool type Command

Managed to managed migratevdisk

Image to managed migratevdisk

Managed to image migratetoimage

Image to image migratetoimage
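
For a managed-to-managed migration (the volume and target pool names are illustrative), the command might look like the following sketch:

IBM_2145:ITSO SVC 1:superuser>migratevdisk -mdiskgrp Pool3 -threads 2 -vdisk V3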


Rule: For the migration to be acceptable, the source storage pool and the destination
storage pool must have the same extent size. Volume mirroring can also be used to
migrate a volume between storage pools. You can use this method if the extent sizes of the
two pools are not the same.

Figure 6-1 shows a managed volume migration to another storage pool.

Figure 6-1 Managed volume migration to another storage pool

In Figure 6-1, we show volume V3 migrating from Pool 2 to Pool 3.

Extents are allocated to the migrating volume from the set of MDisks in the target storage
pool by using the extent allocation algorithm.

The process can be prioritized by specifying the number of threads that are used in parallel
(1 - 4) while migrating; the use of only one thread puts the least background load on the
system.

The offline rules apply to both storage pools. Therefore, as shown in Figure 6-1, if any of the
M4, M5, M6, or M7 MDisks go offline, the V3 volume goes offline. If the M4 MDisk goes
offline, V3 and V5 go offline; however, V1, V2, V4, and V6 remain online.

If the type of the volume is image, the volume type changes to striped when the first extent is
migrated. The MDisk access mode changes from image to managed.

During the move, the volume is listed as being a member of the original storage pool. For
configuration, the volume moves to the new storage pool instantaneously at the end of the
migration.


6.2.4 Migrating the volume to image mode


The facility to migrate a volume to an image mode volume can be combined with the
capability to migrate between storage pools. The source for the migration can be a managed
mode or an image mode volume. This combination of functions leads to the following
possibilities:
򐂰 Migrate image mode to image mode within a storage pool.
򐂰 Migrate managed mode to image mode within a storage pool.
򐂰 Migrate image mode to image mode between storage pools.
򐂰 Migrate managed mode to image mode between storage pools.

The following conditions must be met so that migration can occur:


򐂰 The destination MDisk must be greater than or equal to the size of the volume.
򐂰 The MDisk that is specified as the target must be in an unmanaged state at the time that
the command is run.
򐂰 If the migration is interrupted by a system recovery, the migration resumes after the
recovery completes.
򐂰 If the migration involves moving between storage pools, the volume behaves as described
in 6.2.3, “Migrating a volume between storage pools” on page 239.

Regardless of the mode in which the volume starts, the volume is reported as being in
managed mode during the migration. Also, both of the MDisks that are involved are reported
as being in image mode during the migration. Upon completion of the command, the volume
is classified as an image mode volume.

6.2.5 Non-disruptive Volume Move


One of the major enhancements that was introduced in SVC V6.4 is a feature that is called
Non-disruptive Volume Move (NDVM). This feature is described in the IBM Knowledge
Center, in the Volumes section under the heading “Moving a volume between I/O groups
using the CLI”. That topic provides a technical overview of the commands and procedures that are
used, but the GUI also supports this activity through the Modify I/O group wizard
(Volumes → Actions → Modify I/O group).

Note: The following e-learning module describes the operation: https://ibm.biz/BdHnrR

In previous versions of the SVC code, volumes could be migrated between I/O Groups; however,
this operation required I/O to be quiesced on all volumes that were being migrated.

A high level overview of NDVM is shown in Figure 6-2 on page 242.


Figure 6-2 Overview of NDVM

With the NDVM enhancements, a single volume can now be accessed by all nodes in the
clustered system. The feature adds the concept of access I/O Groups while maintaining the
concept of a caching I/O Group; that is, a volume can be accessed from any node in the
cluster, but a single I/O Group still controls the I/O caching.

This ability to dynamically rebalance the SVC workload is helpful when the natural growth of the
environment's I/O demands forces the client and storage administrators to expand hardware
resources. With NDVM, you can instantly move the workload of selected volumes to a new set of
SVC nodes (I/O Group) without needing to quiesce or interrupt application operations, and easily
lower the high utilization of the original I/O Group. Although this is now possible from an SVC
perspective, you must also consider the implications from the host's perspective.

The technical aspects of an NDVM transition, and I/O group access considerations are shown
in Figure 6-3 on page 243.


Figure 6-3 Technical overview of NDVM

Before you move the volumes to a new I/O Group on the SVC system, ensure that the
following prerequisites are met:
򐂰 The host has access to the new I/O Group node ports through SAN zoning.
򐂰 The host is assigned to the new I/O Group on the SVC system level.
򐂰 The host operating system and multipathing software support the NDVM feature.

For more information about supported systems, see the Supported Hardware List, Device
Driver, Firmware, and Recommended Software Levels for the SVC, which is available at:
http://www.ibm.com/support/docview.wss?uid=ssg1S1004946
Other technical considerations include:
򐂰 The caching I/O Group of a volume must be online for the volume to be accessible, even if
the volume is also accessible through other I/O Groups.
򐂰 A volume that is in a Metro Mirror or Global Mirror relationship cannot change its caching
I/O Group.
򐂰 If a volume in a FlashCopy relationship is moved, the “bitmaps” are left in the original I/O
Group.
– This placement causes additional inter-node messaging to allow FlashCopy to operate.
򐂰 A volume can be configured so that it can be accessed through all I/O Groups at all
times.
򐂰 A volume that is mapped to a host through multiple I/O Groups often does not have the
same LU number (that is, SCSI ID) in each I/O Group.
– Reports indicate that this situation might cause problems with the newest VMware systems.
򐂰 NDVM can be used to change the preferred node of a volume nondisruptively by moving the
caching I/O Group to a different I/O Group and back again.
– The preferred node change might not be detected by the multipathing driver without a
reboot.
򐂰 Non-preferred nodes in any I/O Group have the same “priority”.


򐂰 Compressed volumes can be used with NDVM as long as the target I/O Group does not
already contain 200 compressed volumes.
– Be aware that moving a compressed volume makes the target I/O Group into a compressed I/O Group.

Note: If you are already using eight paths per volume, using NDVM increases the number of
paths per disk to more than eight. This configuration is not supported.

In this example, we want to move one of the AIX host volumes from its existing I/O Group to
the recently added pair of SVC nodes. To perform the NDVM by using the SVC GUI, complete
the following steps:
1. Verify that the host is assigned to the source and target I/O Groups. Select Hosts from the
left menu pane (Figure 6-4) and confirm the # of I/O Groups column.

Figure 6-4 SVC Hosts I/O Groups assignment

2. Right-click the host and select Properties → Mapped Volumes. Verify the volumes and
caching I/O Group ownership, as shown in Figure 6-5.


Figure 6-5 Caching I/O Group ID

3. Now, we move lpar01_vol3 from the existing SVC I/O Group 0 to the new I/O Group 1.
From the left menu pane, select Volumes to see all of the volumes and optionally, filter the
output for the results that you want, as shown in Figure 6-6.

Figure 6-6 Select and filter volumes

4. Right-click volume lpar01_vol3, and in the menu, select Move Volume to a New I/O
Group.

Note: This operation can also be performed from the “Volumes by Pool” or “Volumes by Host”
panels of the GUI.

5. The Move Volume to a New I/O Group wizard window starts (Figure 6-7 on page 246).
Click Next.


Figure 6-7 Move Volume to a New I/O Group wizard: Welcome window

6. Select I/O Group and Node → New Group (and optionally the preferred SVC node) or
leave Automatic for the default node assignment. Click Apply and Next, as shown in
Figure 6-8.
You can see the progress of the task that is displayed in the task window and the SVC CLI
command sequence that is running the svctask movevdisk and svctask addvdiskaccess
commands.

Figure 6-8 Move Volume to a New I/O Group wizard: Select New I/O Group and Preferred Node

7. The task completion window opens. Next, the selected host must detect the new paths so
that I/O processing switches over to the new I/O Group. Perform the path detection by
following the operating system-specific procedures, as shown in Figure 6-9 on page 247.
Click Apply and Next.

Figure 6-9 Move Volume to a New I/O Group wizard: Detect New Paths window

8. The SVC removes the old I/O Group access to a volume by calling the svctask
rmvdiskaccess CLI command. After the task completes, close the task window.
9. The confirmation with information about the I/O Group move is displayed on the Move
Volume to a New I/O Group wizard window. Proceed to the Summary by clicking Next.
10.Review the summary information and click Finish. The volume is successfully moved to a
new I/O Group without I/O disruption on the host side. To verify that volume is now being
cached by the new I/O Group, verify the Caching I/O Group column on the Volumes
submenu, as shown in Figure 6-10 on page 248.


Figure 6-10 New caching I/O Group

Note: For SVC V6.4 and higher, the CLI command svctask chvdisk is not supported for
migrating a volume between I/O Groups. Although svctask chvdisk still modifies multiple
properties of a volume, the new SVC CLI command movevdisk is used for moving a volume
between I/O Groups.

In certain conditions, you might still want to keep the volume accessible through multiple I/O
Groups. This configuration is possible, but only a single I/O Group can provide the caching of the
I/O to the volume. To modify which I/O Groups have access to a volume, use the SVC
CLI commands addvdiskaccess or rmvdiskaccess.
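The same move can also be driven from the CLI. The following sequence is a sketch that mirrors what the
GUI wizard runs; the volume and I/O Group names are the ones used in this example, so verify them against
your own configuration before running the commands:

svctask addvdiskaccess -iogrp io_grp1 lpar01_vol3
svctask movevdisk -iogrp io_grp1 lpar01_vol3
(rescan the paths on the host, then remove access through the old I/O Group)
svctask rmvdiskaccess -iogrp io_grp0 lpar01_vol3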

6.2.6 Monitoring the migration progress


To monitor the progress of ongoing migrations, use the Running Tasks indicator in the
bottom left corner of the management GUI home panel (Figure 6-11).

Figure 6-11 Monitoring Migration using Running Tasks.


Alternatively, use the following CLI command:

lsmigrate

To determine the extent allocation of MDisks and volumes, use the following commands:
򐂰 To list the volume IDs and the corresponding number of extents that the volumes occupy
on the queried MDisk, use the following CLI command:
lsmdiskextent <mdiskname | mdisk_id>
򐂰 To list the MDisk IDs and the corresponding number of extents that the queried volumes
occupy on the listed MDisks, use the following CLI command:
lsvdiskextent <vdiskname | vdisk_id>
򐂰 To list the number of available free extents on an MDisk, use the following CLI command:
lsfreeextents <mdiskname | mdisk_id>
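For example, to check how extents are distributed before or during a migration (the MDisk and volume
names are illustrative only):

lsmdiskextent mdisk0
lsvdiskextent lpar01_vol3
lsfreeextents mdisk0

The first command lists the volumes that have extents on mdisk0, the second lists the MDisks that back
lpar01_vol3, and the third reports the number of free extents that remain on mdisk0.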

Important: After a migration is started, it cannot be stopped by the user. The migration
runs to completion unless it is suspended or stopped by an error condition, or unless the
volume that is being migrated is deleted.

If you want the ability to start, suspend, or cancel a migration or control the rate of
migration, consider the use of the volume mirroring function or migrating volumes between
storage pools.

6.3 Functional overview of MDisk migration


This section describes a functional view of data migration at the MDisk level, including extent
migration. The image-mode migration is outlined in 6.4, “Migrating data from image mode
volume” on page 252.

The concept of volume extent migration is also used by the Easy Tier feature and its
automatic load balancing function, which is enabled by default on each storage pool.

Easy Tier: A performance function that automatically and non-disruptively migrates
frequently accessed data to faster, higher-performing media (flash drives).

Automatic Load Balancing: Migrates extents within the same pool and distributes the
workload equally between MDisks.

For additional information, see the IBM Knowledge Center: https://ibm.biz/BdHnZB

6.3.1 Parallelism
You can perform several of the following activities in parallel.

Each system
An SVC system supports up to 32 concurrently active instances of the following migration
tasks:
򐂰 Migrate multiple extents
򐂰 Migrate between storage pools
򐂰 Migrate off a deleted MDisk
򐂰 Migrate to image mode


The following high-level migration tasks operate by scheduling single extent migrations:
򐂰 Up to 256 single extent migrations can run concurrently. This number is made up of single
extent migrations, which result from the operations previously listed.
򐂰 The Migrate Multiple Extents and Migrate Between storage pools commands support a
flag with which you can specify the number of parallel “threads” to use (1 - 4). This
parameter affects the number of extents that are concurrently migrated for that migration
operation. Therefore, if the thread value is set to 4, up to four extents can be migrated
concurrently for that operation (subject to other resource constraints).
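For example, the following command is a sketch with placeholder names that migrates 64 extents of a
volume to another MDisk in the same storage pool by using four threads; adjust the names and counts to
your own configuration:

svctask migrateexts -vdisk vdisk01 -source mdisk0 -target mdisk4 -exts 64 -threads 4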

Each MDisk
The SVC supports up to four concurrent single extent migrations per MDisk. This limit does
not consider whether the MDisk is the source or the destination. If more than four single
extent migrations are scheduled for a particular MDisk, further migrations are queued,
pending the completion of one of the currently running migrations.

6.3.2 Error handling


The migration is suspended or stopped if one of the following conditions exists:
򐂰 A medium error occurs on a read from the source.
򐂰 The destination’s medium error table is full.
򐂰 An I/O error occurs on a read from the source repeatedly.
򐂰 The MDisks go offline repeatedly.

The migration is only suspended if any of the following conditions exist. Otherwise, the
migration is stopped:
򐂰 The migration occurs between storage pools, and the migration progressed beyond the
first extent.
These migrations are always suspended rather than stopped because stopping a
migration in progress leaves a volume that is spanning storage pools, which is not a valid
configuration other than during a migration.
򐂰 The migration is a Migrate to Image Mode (even if it is processing the first extent).
These migrations are always suspended rather than stopped because stopping a
migration in progress leaves the volume in an inconsistent state.
򐂰 A migration is waiting for a metadata checkpoint that failed.

If a migration is stopped and if any migrations are queued while waiting for the use of the
MDisk for migration, these migrations now start. However, if a migration is suspended, the
migration continues to use resources, and so, another migration is not started.

The SVC attempts to resume the migration if the error log entry is marked as fixed by using
the CLI or the GUI. If the error condition no longer exists, the migration proceeds. The
migration might resume on a node other than the node that started the migration.

6.3.3 Migration algorithm


This section describes the effect of the migration algorithm.

Chunks
Regardless of the extent size for the storage pool, data is migrated in units of 16 MiB. In this
description, this unit is referred to as a chunk.


We describe the algorithm that is used to migrate an extent:


1. Pause (pause means to queue all new I/O requests in the virtualization layer in the SVC
and to wait for all outstanding requests to complete) all I/O on the source MDisk on all
nodes in the SVC system. The I/O to other extents is unaffected.
2. Unpause (resume) I/O on all of the source MDisk extents apart from writes to the specific
chunk that is being migrated. Writes to the extent are mirrored to the source and the
destination.
3. On the node that is performing the migration, for each 256 KB section of the chunk:
– Synchronously read 256 KB from the source
– Synchronously write 256 KB to the target
4. After the entire chunk is copied to the destination, repeat the process for the next chunk
within the extent.
5. After the entire extent is migrated, pause all I/O to the extent that is being migrated,
perform a checkpoint on the extent move to on-disk metadata, redirect all further reads to
the destination, and stop mirroring writes (writes only to the destination).
6. If the checkpoint fails, the I/O is unpaused.

During the migration, the extent can be divided into the following regions, as shown in
Figure 6-12 on page 252:
򐂰 Region B is the chunk that is being copied. Writes to Region B are queued (paused) in the
virtualization layer that is waiting for the chunk to be copied.
򐂰 Reads to Region A are directed to the destination because this data was copied. Writes to
Region A are written to the source and the destination extent to maintain the integrity of
the source extent.
򐂰 Reads and writes to Region C are directed to the source because this region is not yet
migrated.

The migration of a chunk requires 64 synchronous reads and 64 synchronous writes. During
this time, all writes to the chunk from higher layers in the software stack, such as cache
destages, are held back. If the back-end storage is operating with significant latency, this
operation might take time (minutes) to complete, which can have an adverse effect on the
overall performance of the SVC. To avoid this situation, if the migration of a particular chunk is
still active after 1 minute, the migration is paused for 30 seconds. During this time, writes to
the chunk can proceed. After 30 seconds, the migration of the chunk is resumed. This
algorithm is repeated as many times as necessary to complete the migration of the chunk, as
shown in Figure 6-12 on page 252.


Figure 6-12 Migrating an extent (Region A is already copied and its reads and writes go to the destination;
Region B is the 16 MiB chunk that is being copied and its reads and writes are paused; Region C is not yet
copied and its reads and writes go to the source; not to scale)

The SVC ensures read stability during data migrations, even if the data migration is stopped
by a node reset or a system shutdown. This read stability is possible because the SVC
disallows writes on all nodes to the area that is being copied. On a failure, the extent
migration is restarted from the beginning. At the conclusion of the operation, we see the
following results:
򐂰 Extents were migrated in 16 MiB chunks, one chunk at a time.
򐂰 At any point, a chunk is either fully copied, being copied, or not yet copied.
򐂰 When the extent is finished, its new location is saved.

Figure 6-13 shows the data migration and write operation relationship.

Figure 6-13 Migration and write operation relationship

6.4 Migrating data from image mode volume


This section describes migrating data from an image mode volume to a fully managed
volume. This type of migration is used to take an existing host logical unit number (LUN) and
move it into the virtualization environment as provided by the SVC system.


6.4.1 Image mode volume migration concept


First, we describe the concepts that are associated with this operation.

MDisk modes
The following MDisk modes are available:
򐂰 Unmanaged MDisk
An MDisk is reported as unmanaged when it is not a member of any storage pool. An
unmanaged MDisk is not associated with any volumes and it has no metadata that is
stored on it. The SVC does not write to an MDisk that is in unmanaged mode except when
it attempts to change the mode of the MDisk to one of the other modes.
򐂰 Image mode MDisk
Image mode provides a direct block-for-block translation from the MDisk to the volume
with no virtualization. Image mode volumes have a minimum size of one block (512 bytes)
and always occupy at least one extent. An image mode MDisk is associated with exactly
one volume.
򐂰 Managed mode MDisk
Managed mode MDisks contribute extents to the pool of available extents in the storage
pool. Zero or more managed mode volumes might use these extents.

Transitions between the modes


The following state transitions can occur to an MDisk (Figure 6-14 on page 254):
򐂰 Unmanaged mode to managed mode
This transition occurs when an MDisk is added to a storage pool, which makes the MDisk
eligible for the allocation of data and metadata extents.
򐂰 Managed mode to unmanaged mode
This transition occurs when an MDisk is removed from a storage pool.
򐂰 Unmanaged mode to image mode
This transition occurs when an image mode MDisk is created on an MDisk that was
previously unmanaged. It also occurs when an MDisk is used as the target for a migration
to image mode.
򐂰 Image mode to unmanaged mode
This transition can occur in the following ways:
– When an image mode volume is deleted, the MDisk that supported the volume
becomes unmanaged.
– When an image mode volume is migrated in image mode to another MDisk, the MDisk
that is being migrated from remains in image mode until all data is moved off it. It then
transitions to unmanaged mode.
򐂰 Image mode to managed mode
This transition occurs when the image mode volume that is using the MDisk is migrated
into managed mode.
򐂰 Managed mode to image mode is impossible
No operation is available to take an MDisk directly from managed mode to image mode.
You can achieve this transition by performing operations that convert the MDisk to
unmanaged mode and then to image mode.


Figure 6-14 Various states of a volume (mode transitions between not in group, managed mode, image
mode, and migrating to image mode)

Image mode volumes have the special property that the last extent in the volume can be a
partial extent. Managed mode disks do not have this property.

To perform any type of migration activity on an image mode volume, the image mode disk first
must be converted into a managed mode disk. If the image mode disk has a partial last
extent, this last extent in the image mode volume must be the first extent to be migrated. This
migration is handled as a special case.

After this special migration operation occurs, the volume becomes a managed mode volume
and it is treated in the same way as any other managed mode volume. If the image mode disk
does not have a partial last extent, no special processing is performed. The image mode
volume is changed into a managed mode volume and it is treated in the same way as any
other managed mode volume.

After data is migrated off a partial extent, data cannot be migrated back onto the partial
extent.


6.4.2 Migration tips


The following methods are available to migrate an image mode volume to a managed mode
volume:
򐂰 If your image mode volume is in the same storage pool as the MDisks on which you want
to migrate the extents, you can perform one of these migrations:
– Migrate a single extent. You must migrate the last extent of the image mode volume
(number N-1).
– Migrate multiple extents.
– Migrate all of the in-use extents from an MDisk. Migrate extents off an MDisk that is
being deleted.
򐂰 If you have two storage pools (one storage pool for the image mode volume, and one
storage pool for the managed mode volumes), you can migrate a volume from one storage
pool to another storage pool.

A good approach is to have one storage pool for all the image mode volumes and separate
storage pools for the managed mode volumes, and to use the migrate volume facility to move
data between them.

Be sure to verify that enough extents are available in the target storage pool.
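As a sketch with placeholder names, first confirm that the target pool has enough free capacity and then
migrate the volume out of the image mode pool:

lsmdiskgrp ManagedPool
svctask migratevdisk -vdisk image_vol1 -mdiskgrp ManagedPool -threads 2

Check the free_capacity field in the lsmdiskgrp output before you start the migration.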

6.5 Non-disruptive Volume Move


This section describes the procedure to perform a non-disruptive volume move (NDVM).

In our example, we want to move one of the AIX host volumes from its existing I/O Group to
the recently added pair of SVC nodes. To perform the NDVM by using the SVC GUI, complete
the following steps:
1. Verify that the host is assigned to the source and target I/O Groups. Select Hosts from the
left menu pane (Figure 6-15) and confirm the # of I/O Groups column.

Figure 6-15 SVC Hosts I/O Groups assignment


2. Right-click the host and select Properties → Mapped Volumes. Verify the volumes and
caching I/O Group ownership, as shown in Figure 6-16.

Figure 6-16 Caching I/O Group ID

3. Now, we move Dat1 from the existing SVC I/O Group 0 to the new I/O Group 1. From the
left menu pane, select Volumes to see all of the volumes and optionally, filter the output
for the results that you want, as shown in Figure 6-17.

Figure 6-17 Select and filter volumes

4. Right-click Dat1 and select Migrate to Another Pool from the menu, as shown in
Figure 6-18 on page 257.


Figure 6-18 Migrating to another pool

5. The Migrate Volume to Another Pool window opens. Select the target pool to which you
are migrating and click Migrate, as shown in Figure 6-19.

Figure 6-19 Migrate Volume to a New I/O Group window

6. You can see the progress of the task that is displayed in the task window and the SVC CLI
command sequence that is running the migratevdisk command.

Figure 6-20 Migrating Volume to a New I/O Group

The task completion window opens.


6.6 Data migration for Windows using the GUI


In this section, we move two LUNs from a Microsoft Windows Server 2008 server that is
attached to a DS3400 storage subsystem over to SVC. The migration examples include the
following scenarios:
򐂰 Moving a Microsoft server’s SAN LUNs from a storage subsystem and virtualizing those
same LUNs through the SVC
Use this method when you are introducing the SVC into your environment. This section
shows that your host downtime is only a few minutes while you remap and remask
disks by using your storage subsystem LUN management tool. For more information
about this step, see 6.6.2, “Adding SVC between the host system and DS3400” on
page 261.
򐂰 Migrating your image mode volume to a volume while your host is still running and
servicing your business application
Use this method if you are removing a storage subsystem from your SAN environment,
or if you want to move the data onto LUNs that are more appropriate for the type of
data that is stored on those LUNs, considering availability, performance, and
redundancy. For more information about this step, see 6.6.6, “Migrating volume from
image mode to image mode” on page 279.
򐂰 Migrating your volume to an image mode volume
Use this method if you are removing the SVC from your SAN environment after a trial
period. For more information about this step, see 6.6.5, “Migrating volume from
managed mode to image mode” on page 275.
򐂰 Moving an image mode volume to another image mode volume
Use this method to migrate data from one storage subsystem to another storage
subsystem. For more information, see 6.7.5, “Migrating volumes to image mode
volumes” on page 301.

You can use these methods individually or together to migrate your server’s LUNs from one
storage subsystem to another storage subsystem by using the SVC as your migration tool.

The only downtime that is required for these methods is the time that it takes you to remask
and remap the LUNs between the storage subsystems and your SVC.

6.6.1 Windows Server 2008 host connected directly to DS3400


In our example configuration, we use a Windows Server 2008 host and a DS3400 storage
subsystem. The host has two LUNs (drives X and Y). The two LUNs are part of one DS3400
array. Before the migration, LUN masking is defined in the DS3400 to give access to the
Windows Server 2008 host system for the volumes from DS3400 labeled X and Y
(Figure 6-22 on page 259).

Figure 6-21 shows the starting zoning scenario.


Figure 6-21 Starting zoning scenario (the Windows Server 2008 host and the storage subsystem share the
Green Zone on the SAN)

Figure 6-22 on page 259 shows the two LUNs (drive X and drive Y).

Figure 6-22 Drives X and Y


Figure 6-23 shows the properties of one of the DS3400 disks that uses the Subsystem
Device Driver DSM (SDDDSM). The disk appears as an FAStT Multi-Path Disk Device.

Figure 6-23 Disk properties


6.6.2 Adding SVC between the host system and DS3400


Figure 6-24 shows the new environment with the SVC and a second storage subsystem that
is attached to the SAN. The second storage subsystem is not required to migrate to the SAN.
However, we show in the following examples that it is possible to move data across storage
subsystems without any host downtime.

Figure 6-24 Add the SVC and second storage subsystem (the host, the SVC I/O Group, and both storage
subsystems are connected through the SAN by using the Green, Red, Blue, and Black zones)

To add the SVC between the host system and the DS3400 storage subsystem, complete the
following steps:
1. Check that you installed the supported device drivers on your host system.
2. Check that your SAN environment fulfills the supported zoning configurations.
3. Shut down the host.
4. Change the LUN masking in the DS3400. Mask the LUNs to the SVC, and remove the
masking for the host.
Figure 6-25 on page 262 shows the two LUNs (win2008_lun_01 and win2008_lun_02)
with LUN IDs 2 and 3 that are remapped to the SVC Host ITSO_SVC_DH8.


Figure 6-25 LUNs remapped

Important: To avoid potential data loss, back up all the data that is stored on your
external storage before you use the wizard.

5. Log in to your SVC Console and open Pools → System Migration, as shown in
Figure 6-26.

Figure 6-26 Pools and System Migration

6. Click Start New Migration, which starts a wizard, as shown in Figure 6-27.

Figure 6-27 Start New Migration

7. Follow the Storage Migration Wizard, as shown in Figure 6-28 on page 263, and then click
Next.


Figure 6-28 Storage Migration Wizard (Step 1 of 8)


8. Figure 6-29 shows the Prepare Environment for Migration information window. Click Next.

Figure 6-29 Storage Migration Wizard: Preparing the environment for migration (Step 2 of 8)

9. Click Next to complete the storage mapping, as shown in Figure 6-30.

Figure 6-30 Storage Migration Wizard: Mapping storage (Step 3 of 8)

10.Figure 6-31 shows the device discovery. Click Close.


Figure 6-31 Discovering devices

11.Figure 6-32 shows the available MDisks for migration.

Figure 6-32 Storage Migration Wizard (Step 4 of 8)

12.Select the desired MDisks for migration (in our example, mdisk0 and mdisk2), as shown in
Figure 6-33, and then click Next.

Figure 6-33 Storage Migration Wizard: Selecting MDisks for migration

13.Figure 6-34 shows the MDisk import process. During the import process, a storage pool is
automatically created, in our case, MigrationPool_8192. You can see that the command
that is issued by the wizard creates an image mode volume with a one-to-one mapping to
mdisk10 and mdisk12. Click Close to continue.


Figure 6-34 Storage Migration Wizard: MDisk import process (Step 5 of 8)

14.To create a host object to which we map the volume later, click Add Host, as shown in
Figure 6-35.

Figure 6-35 Storage Migration Wizard: Adding a host

15.Figure 6-36 shows the empty fields that we must complete to match our host
requirements.

Figure 6-36 Storage Migration Wizard: Add Host information fields

16.Enter the host name that you want to use for the host, add the Fibre Channel (FC) port,
and select a host type. In our case, the host name is Win_2008. Click Add Host, as shown
in Figure 6-37 on page 267.


Figure 6-37 Storage Migration Wizard: Completed host information

17.Figure 6-38 shows the progress of creating a host. Click Close.

Figure 6-38 Progress status: Creating a host

18.Figure 6-39 shows that the host was added successfully. Click Next to continue.

Figure 6-39 Storage Migration Wizard: Adding a host was successful


19.Figure 6-40 shows all of the available volumes to map to a host.

Figure 6-40 Storage Migration Wizard: Volumes that are available for mapping (Step 6 of 8)

20.Mark both volumes and click Map to Host, as shown in Figure 6-41.

Figure 6-41 Storage Migration Wizard: Mapping volumes to host

21.Modify the host mapping by choosing a host by using the drop-down menu, as shown in
Figure 6-42. Click Map.

Figure 6-42 Storage Migration Wizard: Modify Host Mappings

22.Figure 6-43 shows the progress of the volume mapping to the host. Click Close when you
are finished.


Figure 6-43 Modify Mappings: Task completed

23.After the volume-to-host mapping task is completed, the Host Mappings column for each
mapped volume shows Yes (Figure 6-44 on page 269). Click Next.

Figure 6-44 Storage Migration Wizard: Map Volumes to Hosts

24.Select the storage pool that you want to use for migration, in our case, V7000_2_Test, as
shown in Figure 6-45. Click Next.

Figure 6-45 Storage Migration Wizard: Selecting a storage pool to use for migration (Step 7 of 8)

25.Migration starts automatically by performing a volume copy, as shown in Figure 6-46.


Figure 6-46 Start Migration: Task completed

26.The window that is shown in Figure 6-47 opens. This window states that the migration has
begun. Click Finish.

Figure 6-47 Storage Migration Wizard: Data migration begins (Step 8 of 8)

27.The window that is shown in Figure 6-48 opens automatically to show the progress of the
migration. If it does not open, go back to the System Migration menu.

Figure 6-48 Progress of the migration process

28.Click Volumes → Volumes by host, as shown in Figure 6-49, to see all the volumes that
are served by the new host for this migration step.


Figure 6-49 Selecting to view volumes by host

As you can see in Figure 6-50, the migrated volume is a mirrored volume with one copy in
the image mode pool and another copy in a managed mode storage pool. The administrator
can choose to leave the volume as it is or split the initial copy from the mirror.

You can see the progress of the volume synchronization by using the running tasks indicator
(Figure 6-50).

Figure 6-50 Running tasks

6.6.3 Importing the migrated disks into a Windows Server 2008 host
To import the migrated disks into an online Windows Server 2008 host, complete the
following steps:
1. Start the Windows Server 2008 host system again and check the properties of the disks
that were previously allocated from the DS3400. The disk type changed to a 2145
Multi-Path Disk Device, as shown in Figure 6-51 on page 272.


Figure 6-51 Disk Management: See the new disk properties

Figure 6-52 shows the Disk Management window.

Figure 6-52 Migrated disks are available

2. Select Start → Programs → Subsystem Device Driver DSM → Subsystem Device
Driver DSM to open the SDDDSM command-line utility, as shown in Figure 6-53 on
page 273.


Figure 6-53 Subsystem Device Driver DSM CLI

3. Run the datapath query device command to check whether all paths are available as
planned in your SAN environment (Example 6-1).

Example 6-1 The datapath query device command


C:\Program Files\IBM\SDDDSM>datapath query device
Total Devices : 2

DEV#: 4 DEVICE NAME: Disk4 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 6005076801FF80284000000000000000 LUN SIZE: 500.0GB
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk4 Part0 OPEN NORMAL 76 0
1 Scsi Port2 Bus0/Disk4 Part0 OPEN NORMAL 98 0
2 * Scsi Port2 Bus0/Disk4 Part0 OPEN NORMAL 0 0
3 * Scsi Port2 Bus0/Disk4 Part0 OPEN NORMAL 0 0
4 Scsi Port3 Bus0/Disk4 Part0 OPEN NORMAL 87 0
5 * Scsi Port3 Bus0/Disk4 Part0 OPEN NORMAL 0 0
6 Scsi Port3 Bus0/Disk4 Part0 OPEN NORMAL 72 0
7 * Scsi Port3 Bus0/Disk4 Part0 OPEN NORMAL 0 0

DEV#: 5 DEVICE NAME: Disk5 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 6005076801FF80284000000000000001 LUN SIZE: 500.0GB
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 * Scsi Port2 Bus0/Disk5 Part0 OPEN NORMAL 0 0
1 * Scsi Port2 Bus0/Disk5 Part0 OPEN NORMAL 0 0
2 Scsi Port2 Bus0/Disk5 Part0 OPEN NORMAL 115 0
3 Scsi Port2 Bus0/Disk5 Part0 OPEN NORMAL 88 0
4 * Scsi Port3 Bus0/Disk5 Part0 OPEN NORMAL 0 0
5 Scsi Port3 Bus0/Disk5 Part0 OPEN NORMAL 104 0
6 * Scsi Port3 Bus0/Disk5 Part0 OPEN NORMAL 0 0
7 Scsi Port3 Bus0/Disk5 Part0 OPEN NORMAL 105 0

6.6.4 Adding SVC between host and DS3400 using CLI


In this section, we use only CLI commands to add direct-attached storage to the SVC’s
managed storage. For more information about our preparation of the environment, see 6.6.1,
“Windows Server 2008 host connected directly to DS3400” on page 258.
1. Verify currently used storage pools
Verify the currently used storage pools on the SVC to discover each storage pool's free
capacity by using the lsmdiskgrp command.
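The output resembles the following sketch (columns are truncated, and the pool name and capacity values
are illustrative only); the free_capacity column indicates whether the existing pools can absorb the
migrated volumes:

IBM_2145:ITSO_SVC2:ITSO_admin>lsmdiskgrp
id name         status mdisk_count vdisk_count capacity extent_size free_capacity ...
0  DS3400_pool1 online 4           10          1.6TB    256         800.00GB      ...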


2. Create a storage pool


When we move the two LUNs to the SVC, we use them initially in image mode. Therefore,
we need a storage pool to hold those disks.
First, we add a new empty storage pool (in our case imagepool) for the import of the LUNs,
as shown in Example 6-2. It is better to have a separate pool in case a problem occurs
during the import. That way, the import process cannot affect the other storage pools.

Example 6-2 Adding a storage pool

IBM_2145:ITSO_SVC2:ITSO_admin>mkmdiskgrp -name imagepool -tier generic_hdd -easytier off -ext 256
MDisk Group, id [4], successfully created

3. Verify the creation of new storage pool


Now, we verify that the new storage pool was added correctly by using the lsmdiskgrp
command with the new pool's ID or name (ID 4 in our case).
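For example, querying the pool by name returns a detailed view (output abbreviated and illustrative):

IBM_2145:ITSO_SVC2:ITSO_admin>lsmdiskgrp imagepool
id 4
name imagepool
status online
mdisk_count 0
vdisk_count 0
extent_size 256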
4. Create image volumes
As shown in Example 6-3, we must create two image volumes (image1 and image2) within
our storage pool imagepool. We need one for each MDisk to import LUNs from the storage
controller to the SVC.

Example 6-3 Creating the image volumes


IBM_2145:ITSO_SVC2:ITSO_admin>mkvdisk -name image1 -iogrp 0 -mdiskgrp imagepool -vtype image -mdisk mdisk13 -syncrate 80
Virtual Disk, id [0], successfully created

IBM_2145:ITSO_SVC2:ITSO_admin>mkvdisk -name image2 -iogrp 0 -mdiskgrp imagepool -vtype image -mdisk mdisk14 -syncrate 80
Virtual Disk, id [1], successfully created

5. Verifying the image volumes


Now, we check again whether the volumes are created within the storage pool imagepool,
as shown in Example 6-4.

Example 6-4 Verifying the image volumes


IBM_2145:ITSO_SVC2:ITSO_admin>lsvdisk
0 image1 0 io_grp0 online 4 imagepool 10.00GB image
6005076801FD80284000000000000015 0 imagepool
1 image2 0 io_grp0 online 4 imagepool 20.00GB image
6005076801FD80284000000000000016 0 imagepool

6. Creating the host


We check whether our host exists or whether we need to create it, as shown in Example 6-5
on page 274. In our case, the host is already defined.

Example 6-5 Listing the host


IBM_2145:ITSO_SVC2:ITSO_admin>lshost
id name port_count iogrp_count status
0 Win_2008 2 4 online


7. Mapping the image volumes to the host


Next, we map the image volumes to host Win_2008, as shown in Example 6-6. This
mapping is also known as LUN masking.

Example 6-6 Mapping the volumes


IBM_2145:ITSO_SVC2:ITSO_admin>mkvdiskhostmap -force -host Win_2008 image2
Virtual Disk to Host map, id [0], successfully created

IBM_2145:ITSO_SVC2:ITSO_admin>mkvdiskhostmap -force -host Win_2008 image1
Virtual Disk to Host map, id [1], successfully created

8. Adding the image volumes to a storage pool


Add the image volumes to storage pool DS3400_pool1 (as shown in Example 6-7) to have
them mapped as fully allocated volumes that are managed by the SVC.

Example 6-7 Adding the volumes to the storage pool


IBM_2145:ITSO_SVC2:ITSO_admin>addvdiskcopy -mdiskgrp DS3400_pool1 image1
Vdisk [0] copy [1] successfully created
IBM_2145:ITSO_SVC2:ITSO_admin> addvdiskcopy -mdiskgrp DS3400_pool1 image2
Vdisk [1] copy [1] successfully created

9. Checking the status of the volumes


Both volumes now have a second copy, which is shown as type many in Example 6-8. They
are available to be used by the host.

Example 6-8 Status check


IBM_2145:ITSO_SVC2:ITSO_admin>lsvdisk
0 image1 0 io_grp0 online many many 10.00GB many
6005076801FD80284000000000000015 0 2 not_empty 0
1 image2 0 io_grp0 online many many 20.00GB many
6005076801FD80284000000000000016 0 2 not_empty 0

6.6.5 Migrating volume from managed mode to image mode


Complete the following steps to migrate a managed volume to an image mode volume:
1. Create an empty storage pool for each volume that you want to migrate to image mode.
These storage pools host the target MDisk that you map later to your server at the end of
the migration.
2. Click Pools → MDisks by Pools to create a pool from the drop-down menu, as shown in
Figure 6-54.


Figure 6-54 Selecting Pools to create a pool

3. To create an empty storage pool for migration, complete the following steps:
a. You are prompted for the pool name and extent size, as shown in Figure 6-55. After you
enter the information, click Next.

Figure 6-55 Creating a storage pool

b. You are then prompted to optionally select the MDisk to include in the storage pool, as
shown in Figure 6-56 on page 276. Click Create.

Figure 6-56 Selecting the MDisk

4. Figure 6-57 on page 277 shows the progress status for pool migration as the system
creates a storage pool for migration. Click Close to continue.


Figure 6-57 Progress status

5. From the Volumes panel, select the volume that you want to migrate to image
mode and select Export to Image Mode from the drop-down menu, as shown in
Figure 6-58.

Figure 6-58 Selecting the volume

6. Select the MDisk onto which you want to migrate the volume, as shown in Figure 6-59 on
page 277. Click Next.

Figure 6-59 Migrate to image mode


7. Select a storage pool into which the image mode volume is placed after the migration
completes, in our case, the For Migration storage pool. Click Finish, as shown in
Figure 6-60.

Figure 6-60 Select storage pool

8. The volume is exported to image mode and placed in the For Migration storage pool, as
shown in Figure 6-61. Click Close.

Figure 6-61 Export Volume to image process

9. Browse to Pools → MDisk by Pools. Click the plus sign (+) (expand icon) to the left of the
name. Now, mdisk22 is an image mode MDisk, as shown in Figure 6-62.

Figure 6-62 MDisk is in image mode

10.Repeat these steps for every volume that you want to migrate to an image mode volume.
11.Delete the image mode data from SVC by using the procedure that is described in 6.6.7,
“Removing image mode data from SVC” on page 284.
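The Export to Image Mode action corresponds to the migratetoimage CLI command. As a sketch (the volume
name and target pool are placeholders; mdisk22 is the target MDisk from this example), the equivalent
command is:

svctask migratetoimage -vdisk <volume_name> -mdisk mdisk22 -mdiskgrp <target_pool>

The target MDisk must be unmanaged, and the target storage pool is the empty pool that was created for
the migration.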


6.6.6 Migrating volume from image mode to image mode


Use the process that is described in this section to move image mode volumes from one
storage subsystem to another storage subsystem without going through the SVC fully
managed mode. The data stays available for the applications during this migration. This
procedure is nearly the same as the procedure that is described in 6.6.5, “Migrating volume
from managed mode to image mode” on page 275.

In our example, we migrate the Windows Server W2K8 volume to another disk subsystem as
an image mode volume. The second storage subsystem is a V7000_ITSO2; a new LUN is
configured on the storage and mapped to the SVC system. The LUN is available to the SVC
as an unmanaged mdisk23, as shown in Figure 6-63.

Figure 6-63 Unmanaged disk on a storage subsystem

To migrate the image mode volume to another image mode volume, complete the following
steps:
1. Mark the unmanaged mdisk23 and click Actions or right-click and select Import from the
list, as shown in Figure 6-64.

Figure 6-64 Import the unmanaged MDisk into the SVC

2. The Import Wizard window opens, which describes the process of importing the MDisk
and mapping an image mode volume to it. Select a temporary pool because you do not
want to migrate the volume into an SVC managed volume pool. Define the name of the
new volume and select the extent size from the drop-down menu, as shown in Figure 6-65
on page 280. If Copy Services are enabled on the original storage system for this volume,
indicate this by selecting the check box. Click Finish.


Figure 6-65 Import Wizard (Step 2 of 2)

3. The import process starts (as shown in Figure 6-66) by creating a temporary storage pool
MigrationPool_1024 (500 GiB) and an image volume. Click Close to continue.

Figure 6-66 Import of MDisk and creation of temporary storage pool MigrationPool_1024

4. As shown in Figure 6-67, an image mode mdisk15 now shows with the import controller
name and SCSI ID as its name.

Figure 6-67 Imported mdisk15 within the created storage pool

5. Create a storage pool Migration_Out with the same extent size (1 GiB) as the
automatically created storage pool MigrationPool_1024 for transferring the image mode
disk. Go to Pools → MDisks by Pools, as shown in Figure 6-68 on page 281.


Figure 6-68 MDisk by Pools

6. Click Create Pool to create an empty storage pool and give your new storage pool the
meaningful name Migration_Out. Click the Advanced Settings drop-down menu. Choose
1.00 GiB as the extent size for your new storage pool, as shown in Figure 6-69. Click Next
to continue.

Figure 6-69 Creating an empty storage pool (Step 1 of 2)

7. Figure 6-70 on page 281 shows an empty pool without MDisks.

Figure 6-70 Creating an empty storage pool (Step 2 of 2)

8. Now, the empty storage pool for the image-to-image migration is created. Go to
Volumes → Volumes by Pool, as shown in Figure 6-71.


Figure 6-71 Volumes by Pool

9. In the left pane, select the storage pool of the imported disk, which is called
MigrationPool_1024. Then, mark the image disk that you want to migrate out and select
Actions. From the drop-down menu, select Export to Image Mode, as shown in
Figure 6-72.

Figure 6-72 Export to Image Mode

10.Select the target MDisk mdisk24 on the new disk controller to which you want to migrate.
Click Next, as shown in Figure 6-73.


Figure 6-73 Selecting the target MDisk

11.Select the target Migration_Out (empty) storage pool, as shown in Figure 6-74 on
page 283. Click Finish.

Figure 6-74 Selecting the target storage pool

12.Figure 6-75 shows the progress status of the Export Volume to Image process. Click
Close to continue.

Figure 6-75 Export Volume to Image progress status

13.Figure 6-76 on page 284 shows that the MDisk location changed to the new storage pool
Migration_Out. This is done once the volume migration process finishes.


Figure 6-76 Image disk migrated to new storage pool

14.Repeat these steps for all image mode volumes that you want to migrate.
15.If you want to delete the data from the SVC, use the procedure that is described in 6.6.7,
“Removing image mode data from SVC” on page 284.

6.6.7 Removing image mode data from SVC


If your data is in an image mode volume inside the SVC, you can remove the volume from the
SVC, which frees the original LUN for reuse. The preceding sections describe how to migrate
data to an image mode volume. Depending on your environment, you might need to complete
the following procedures before you delete the image volume:
򐂰 6.6.5, “Migrating volume from managed mode to image mode” on page 275
򐂰 6.6.6, “Migrating volume from image mode to image mode” on page 279

To remove the image mode volume from the SVC, we delete the volume (the rmvdisk CLI
command or the Delete action in the GUI).

If the command succeeds on an image mode volume, the underlying back-end storage
controller is consistent with the data that a host might previously read from the image mode
volume. That is, all fast write data was flushed to the underlying LUN. Deleting an image
mode volume causes the MDisk that is associated with the volume to be ejected from the
storage pool. The mode of the MDisk is returned to unmanaged.
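From the CLI, the deletion is a single command; the volume name below is a placeholder. Add the -force
flag only if host mappings or copy relationships still exist, and be aware that it discards any data that
is not yet flushed:

svctask rmvdisk <image_volume_name>

After the command completes, the corresponding MDisk returns to unmanaged mode and can be unmapped
from the SVC on the storage subsystem.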

Image mode volumes only: This situation applies to image mode volumes only. If you
delete a normal volume, all of the data is also deleted.

As shown in Example 6-1 on page 273, the SAN disks are on the SVC.

Check that you installed the supported device drivers on your host system.

To switch back to the storage subsystem, complete the following steps:


1. Shut down your host system.
2. Open the Volumes by Host window to see which volumes are mapped to your host, as
shown in Figure 6-77 on page 285.


Figure 6-77 Volume by host mapping

3. Check your Host and select your volume. Then, right-click and select Unmap all Hosts,
as shown in Figure 6-78.

Figure 6-78 Unmap the volume from the host

4. Verify your unmap process, as shown in Figure 6-79, and click Unmap.

Figure 6-79 Verifying your unmapping process

5. Repeat steps 3 and 4 for every image mode volume that you want to remove from the SVC.
6. Edit the LUN masking on your storage subsystem. Remove the SVC from the LUN
masking, and add the host to the masking.
7. Power on your host system.


6.6.8 Mapping free disks on Windows Server 2008


To detect and map the disks that were freed from SVC management, go to Windows Server
2008 and complete the following steps:
1. Using your DS3400 Storage Manager interface, remap the two LUNs that were MDisks
back to your Windows Server 2008 server.
2. Open your Device Manager window. Figure 6-80 shows that the LUNs are now back to an
IBM 1726-4xx FAStT Multi-Path Disk Device type.

Figure 6-80 IBM 1726-4xx FAStT Multi-Path Disk Device type

3. Open your Disk Management window; the disks appear, as shown in Figure 6-81 on
page 287. You might need to reactivate each disk by right-clicking it.


Figure 6-81 Windows Server 2008 Disk Management

6.7 Migrating Linux SAN disks to SVC disks


In this section, we move the two LUNs from a Linux server that is booting directly off our
DS4000 storage subsystem over to the SVC. We then manage those LUNs with the SVC and
move them between other managed disks. Finally, we move them back to image mode disks
so that those LUNs can be masked and mapped back to the Linux server directly.

This example can help you to perform any of the following tasks in your environment:
򐂰 Move a Linux server’s SAN LUNs from a storage subsystem and virtualize those same
LUNs through the SVC.
Perform this task first when you are introducing the SVC into your environment. This
section shows that your host downtime is only a few minutes while you remap and remask
disks by using your storage subsystem LUN management tool. For more information
about this task, see 6.7.1, “Preparing SVC to virtualize Linux disks” on page 289.
򐂰 Move data between storage subsystems while your Linux server is still running and
servicing your business application.
Perform this task if you are removing a storage subsystem from your SAN environment.
You also can perform this task if you want to move the data onto LUNs that are more
appropriate for the type of data that is stored on those LUNs, taking availability,
performance, and redundancy into consideration. For more information about this task,
see 6.7.3, “Migrating image mode volumes to managed MDisks” on page 296.
򐂰 Move your Linux server’s LUNs back to image mode volumes so that they can be
remapped and remasked directly back to the Linux server.
For more information about this step, see 6.7.4, “Preparing to migrate from SVC” on
page 298.


You can use these three activities individually or together to migrate your Linux server’s LUNs
from one storage subsystem to another storage subsystem by using the SVC as your
migration tool. If you do not use all three tasks, you can introduce or remove the SVC from
your environment.

The only downtime that is required for these tasks is the time that it takes to remask and
remap the LUNs between the storage subsystems and your SVC.

Our Linux environment is shown in Figure 6-82.

Figure 6-82 Linux SAN environment (the Linux host and the storage subsystem share the Green Zone on
the SAN)

Figure 6-82 shows our Linux server that is connected to our SAN infrastructure. The following
LUNs are masked directly to our Linux server from our storage subsystem:
򐂰 The LUN with SCSI ID 0 has the host operating system (our host is Red Hat Enterprise
Linux V5.1). This LUN is used to boot the system directly from the storage subsystem. The
operating system identifies this LUN as /dev/mapper/VolGroup00-LogVol00.

SCSI LUN ID 0: To successfully boot a host off the SAN, you must assign the LUN as
SCSI LUN ID 0.

Linux sees this LUN as our /dev/sda disk.


򐂰 We also mapped a second disk (SCSI LUN ID 1) to the host. It is 5 GB and mounted in the
/data folder on the /dev/dm-2 disk.

Example 6-9 on page 289 shows our disks that attach directly to the Linux hosts.


Example 6-9 Directly attached disks


[root@Linux data]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
10093752 1971344 7601400 21% /
/dev/sda1 101086 12054 83813 13% /boot
tmpfs 1033496 0 1033496 0% /dev/shm
/dev/dm-2 5160576 158160 4740272 4% /data
[root@Linux data]#

Our Linux server represents a typical SAN environment with a host that directly uses LUNs
that were created on a SAN storage subsystem, as shown in Figure 6-82 on page 288. The
Linux server has the following configuration:
򐂰 The Linux server’s host bus adapter (HBA) cards are zoned so that they are in the Green
Zone with our storage subsystem.
򐂰 The two LUNs that were defined on the storage subsystem by using LUN masking are
directly available to our Linux server.

6.7.1 Preparing SVC to virtualize Linux disks


This section describes the preparation tasks that we performed before our Linux server was
taken offline. These activities are all non-disruptive. They do not affect your SAN fabric or your
existing SVC configuration (if you have a production SVC).

Creating a storage pool


When we move the two Linux LUNs to the SVC, we use them initially in image mode.
Therefore, we need a storage pool to hold those disks.

We must create an empty storage pool for each of the disks by using the commands that are
shown in Example 6-10.

Example 6-10 Create an empty storage pool


IBM_2145:ITSO-CLS1:ITSO_admin>mkmdiskgrp -name Linux_Pool1 -ext 512
MDisk Group, id [2], successfully created

IBM_2145:ITSO-CLS1:ITSO_admin> lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity
used_capacity real_capacity overallocation warning easy_tier easy_tier_status
2 Linux_Pool1 online 0 0 0 512 0 0.00MB
0.00MB 0.00MB 0 0 auto inactive
3 Linux_Pool2 online 0 0 0 512 0 0.00MB
0.00MB 0.00MB 0 0 auto inactive

Creating your host definition


If you prepared your zones correctly, the SVC can see the Linux server’s HBAs on the fabric.
(Our host had only one HBA.)

The use of the lshbaportcandidate command on the SVC lists all of the worldwide names
(WWNs), which are not yet allocated to a host, that the SVC can see on the SAN fabric.
Example 6-11 shows the output of the nodes that it found on our SAN fabric. (If the port did
not show up, a zone configuration problem exists.)


Example 6-11 Display HBA port candidates

IBM_2145:ITSO-CLS1:ITSO_admin>lshbaportcandidate
id
210000E08B89C1CD
210000E08B054CAA
210000E08B0548BC
210000E08B0541BC
210000E08B89CCC2

If you do not know the WWN of your Linux server, you can review which WWNs are currently
configured on your storage subsystem for this host. Figure 6-83 shows our configured ports
on the old IBM DS4700 storage subsystem.

Figure 6-83 Display port WWNs

After we verify that the SVC can see our host (Palau), we create the host entry and assign the
WWNs to this entry. Example 6-12 shows these commands.

Example 6-12 Create the host entry


IBM_2145:ITSO-CLS1:ITSO_admin>mkhost -name Palau -hbawwpn
210000E08B054CAA:210000E08B89C1CD
Host, id [0], successfully created

IBM_2145:ITSO-CLS1:ITSO_admin>lshost Palau
id 0
name Palau
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B89C1CD
node_logged_in_count 4
state inactive
WWPN 210000E08B054CAA
node_logged_in_count 4
state inactive


Verifying that we can see our storage subsystem


If we set up our zoning correctly, the SVC can see the storage subsystem by using the
lscontroller command, as shown in Example 6-13.

Example 6-13 Discover the storage controller


IBM_2145:ITSO-CLS1:ITSO_admin> lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0 DS4500 IBM 1742-900
1 DS4700 IBM 1814 FAStT

You can rename the storage subsystem to a more meaningful name by using the
chcontroller -name command. If you have multiple storage subsystems that connect to your
SAN fabric, renaming the storage subsystems makes it considerably easier to identify them.
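For example (the controller ID and new name are illustrative only):

IBM_2145:ITSO-CLS1:ITSO_admin>chcontroller -name ITSO_DS4700 1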

Getting the disk serial numbers


To help avoid the risk of creating the wrong volumes from all of the available, unmanaged
MDisks (if the SVC sees many available, unmanaged MDisks), we get the LUN serial
numbers from our storage subsystem administration tool (Storage Manager).

When we discover these MDisks, we confirm that we have the correct serial numbers before
we create the image mode volumes.

If you also use a DS4000 family storage subsystem, Storage Manager provides the LUN
serial numbers. Right-click your logical drive and choose Properties. Our serial numbers are
shown in Figure 6-84 on page 291 (which shows the disk serial number SAN_Boot_palau)
and Figure 6-85 on page 292.

Figure 6-84 Obtaining the disk serial number: SAN_Boot_palau

Figure 6-85 shows the disk serial number Palau_data.


Figure 6-85 Obtaining the disk serial number: Palau_data

Before we move the LUNs to the SVC, we must configure the host multipath settings for the
SVC. Add the device entry that is shown in Example 6-15 to your multipath.conf file, and then
restart the multipathd service, as shown in Example 6-14.

Example 6-14 Edit the multipath.conf file


[root@Palau ~]# vi /etc/multipath.conf
[root@Palau ~]# service multipathd stop
Stopping multipathd daemon: [ OK ]
[root@Palau ~]# service multipathd start
Starting multipathd daemon: [ OK ]

Example 6-15 Data to add to the multipath.conf file

# SVC
device {
        vendor "IBM"
        product "2145"
        path_grouping_policy group_by_serial
}

We are now ready to move the ownership of the disks to the SVC, discover them as MDisks,
and give them back to the host as volumes.

6.7.2 Moving LUNs to SVC


In this step, we move the LUNs that are assigned to the Linux server and reassign them to
SVC. Our Linux server has two LUNs: one LUN is for our boot disk and operating system file
systems, and the other LUN holds our application and data files. Moving both LUNs at one
time requires shutting down the host.

If we wanted to move only the LUN that holds our application and data files, we would not
have to reboot the host. The only requirement is that we unmount the file system and vary off
the volume group (VG) to ensure data integrity during the reassignment.

Because we intend to move both LUNs at the same time, we must complete the following
steps:
1. Confirm that the multipath.conf file is configured for the SVC.
2. Shut down the host.
If you are moving only the LUNs that contain the application and data, complete the
following steps instead:
a. Stop the applications that use the LUNs.
b. Unmount those file systems by using the umount MOUNT_POINT command.
c. If the file systems are a Logical Volume Manager (LVM) volume, deactivate that VG by
using the vgchange -a n VOLUMEGROUP_NAME command.
d. If possible, also unload your HBA driver by using the rmmod DRIVER_MODULE command.
This command removes the SCSI definitions from the kernel. (We reload this module
and rediscover the disks later.) It is possible to tell the Linux SCSI subsystem to rescan
for new disks without requiring you to unload the HBA driver; however, we do not
provide those details here.
3. By using Storage Manager (our storage subsystem management tool), we can unmap and
unmask the disks from the Linux server and remap and remask the disks to the SVC.

LUN IDs: Although we are using boot from SAN, you can map the boot disk to the SVC
with any LUN ID. The LUN ID does not have to be 0 until later, when we configure the
volume-to-host mapping in the SVC.

4. From the SVC, discover the new disks by using the detectmdisk command. The disks are
discovered and named mdiskN, where N is the next available MDisk number (starting from
0). Example 6-16 shows the commands that we used to discover our MDisks and to verify
that we have the correct MDisks.

Example 6-16 Discover the new MDisks


IBM_2145:ITSO-CLS1:ITSO_admin>detectmdisk
IBM_2145:ITSO-CLS1:ITSO_admin>lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
tier
26 mdisk26 online unmanaged 12.0GB 0000000000000008
DS4700 600a0b800026b2820000428f48739bca00000000000000000000000000000000
generic_hdd
27 mdisk27 online unmanaged 5.0GB 0000000000000009
DS4700 600a0b800026b282000042f84873c7e100000000000000000000000000000000
generic_hdd

Important: Match your discovered MDisk serial numbers (unique identifier UID on the
lsmdisk task display) with the serial number that you recorded earlier (in Figure 6-84 on
page 291 and Figure 6-85 on page 292).
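If the SVC sees many unmanaged MDisks, you can narrow the output before you compare the
UIDs. The following command is a sketch that assumes the same cluster session as in the
previous examples; the -filtervalue option restricts the lsmdisk output to unmanaged
MDisks only:

IBM_2145:ITSO-CLS1:ITSO_admin>lsmdisk -filtervalue mode=unmanaged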


5. After we verify that we have the correct MDisks, we rename them to avoid confusion in the
future when we perform other MDisk-related tasks, as shown in Example 6-17.

Example 6-17 Rename the MDisks


IBM_2145:ITSO-CLS1:ITSO_admin>chmdisk -name md_LinuxS mdisk26
IBM_2145:ITSO-CLS1:ITSO_admin>chmdisk -name md_LinuxD mdisk27
IBM_2145:ITSO-CLS1:ITSO_admin>lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity
ctrl_LUN_# controller_name UID
tier
26 md_LinuxS online unmanaged 12.0GB
0000000000000008 DS4700
600a0b800026b2820000428f48739bca00000000000000000000000000000000 generic_hdd
27 md_LinuxD online unmanaged 5.0GB
0000000000000009 DS4700
600a0b800026b282000042f84873c7e100000000000000000000000000000000 generic_hdd

6. We create our image mode volumes by using the mkvdisk command and the -vtype
image option, as shown in Example 6-18 on page 294. This command virtualizes the disks
in the same layout as though they were not virtualized.

Example 6-18 Create the image mode volumes


IBM_2145:ITSO-CLS1:ITSO_admin>mkvdisk -mdiskgrp Linux_Pool1 -iogrp 0 -vtype image -mdisk
md_LinuxS -name Linux_SANB
Virtual Disk, id [29], successfully created

IBM_2145:ITSO-CLS1:ITSO_admin>mkvdisk -mdiskgrp Linux_Pool2 -iogrp 0 -vtype image -mdisk
md_LinuxD -name Linux_Data
Virtual Disk, id [30], successfully created

IBM_2145:ITSO-CLS1:ITSO_admin> lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID tier
26 md_LinuxS online image 2 Linux_Pool1 12.0GB 0000000000000008
DS4700 600a0b800026b2820000428f48739bca00000000000000000000000000000000
generic_hdd
27 md_LinuxD online image 3 Linux_Pool2 5.0GB 0000000000000009
DS4700 600a0b800026b282000042f84873c7e100000000000000000000000000000000
generic_hdd
IBM_2145:ITSO-CLS1:ITSO_admin>

IBM_2145:ITSO-CLS1:ITSO_admin>lsvdisk
id name IO_group_id IO_group_name status
mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name
RC_id RC_name vdisk_UID fc_map_count copy_count fast_wri
te_state se_copy_count
29 Linux_SANB 0 io_grp0 online 4
Linux_Pool1 12.0GB image
60050768018301BF280000000000002B 0 1 empty 0
30 Linux_Data 0 io_grp0 online 4
Linux_Pool2 5.0GB image
60050768018301BF280000000000002C 0 1 empty 0

7. Map the new image mode volumes to the host, as shown in Example 6-19.


Important: Ensure that you map the boot volume with SCSI ID 0 to your host. The host
must identify the boot volume during the boot process.

Example 6-19 Map the volumes to the host


IBM_2145:ITSO-CLS1:ITSO_admin>mkvdiskhostmap -host Linux -scsi 0 Linux_SANB
Virtual Disk to Host map, id [0], successfully created

IBM_2145:ITSO-CLS1:ITSO_admin>mkvdiskhostmap -host Linux -scsi 1 Linux_Data


Virtual Disk to Host map, id [1], successfully created

IBM_2145:ITSO-CLS1:ITSO_admin>lshostvdiskmap Linux
id name SCSI_id vdisk_id vdisk_name wwpn
vdisk_UID
0 Linux 0 29 Linux_SANB
210000E08B89C1CD 60050768018301BF280000000000002B
0 Linux 1 30 Linux_Data
210000E08B89C1CD 60050768018301BF280000000000002C

FlashCopy: While the application is in a quiescent state, you can choose to use
FlashCopy to copy the new image volumes onto other volumes. You do not need to wait
until the FlashCopy process completes before you start your application.

8. Power on your host server and enter your FC HBA BIOS before booting the operating
system. Ensure that you change the boot configuration so that it points to the SVC.
Complete the following steps on a QLogic HBA:
a. Press Ctrl+Q to enter the HBA BIOS.
b. Open Configuration Settings.
c. Open Selectable Boot Settings.
d. Change the entry from your storage subsystem to the IBM SAN Volume Controller
2145 LUN with SCSI ID 0.
e. Exit the menu and save your changes.
9. Boot up your Linux operating system.
If you moved only the application LUN to the SVC and left your Linux server running, you
must complete only these steps to see the new volume:
a. Load your HBA driver by using the modprobe DRIVER_NAME command. If you did not (and
cannot) unload your HBA driver, you can run commands to the kernel to rescan the
SCSI bus to see the new volumes. (These details are beyond the scope of this book.)
b. Check your syslog to verify that the kernel found the new volumes. On Red Hat
Enterprise Linux, the syslog is written to the /var/log/messages file.
c. If your application and data are on an LVM volume, run the vgscan command to
rediscover the VG, and then run the vgchange -a y VOLUME_GROUP command to activate
the VG.
10.Mount your file systems by using the mount /MOUNT_POINT command, as shown in
Example 6-20. The df output shows us that all of the disks are available again.

Example 6-20 Mount data disk


[root@Linux data]# mount /dev/dm-2 /data
[root@Linux data]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
10093752 1938056 7634688 21% /
/dev/sda1 101086 12054 83813 13% /boot
tmpfs 1033496 0 1033496 0% /dev/shm
/dev/dm-2 5160576 158160 4740272 4% /data

11.You are now ready to start your application.
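The host-side sequence from steps 9 and 10 can be summarized in the following sketch. The HBA
module name (qla2xxx) and the volume group name (datavg) are placeholders for your own
environment, and the sysfs rescan is shown only as one possible alternative to reloading the
driver:

[root@Linux ~]# modprobe qla2xxx
[root@Linux ~]# echo "- - -" > /sys/class/scsi_host/host0/scan
[root@Linux ~]# tail /var/log/messages
[root@Linux ~]# vgscan
[root@Linux ~]# vgchange -a y datavg
[root@Linux ~]# mount /dev/dm-2 /data

The first two commands make the kernel see the new volumes (use one or the other), the tail
command confirms that the kernel logged them, and the remaining commands reactivate the
volume group and remount the file system.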

6.7.3 Migrating image mode volumes to managed MDisks


While the Linux server is still running and our file systems are in use, we migrate the image
mode volumes onto striped volumes, spreading the extents over three new MDisks. These
three new LUNs are on a DS4500 storage subsystem, so in this example we also move the
data to another storage subsystem.

Preparing MDisks for striped mode volumes


From our second storage subsystem, we performed the following tasks:
- Created and allocated three new LUNs to the SVC
- Discovered them as MDisks
- Renamed these LUNs to more meaningful names
- Created a storage pool
- Placed all of these MDisks into this storage pool

The output of our commands is shown in Example 6-21.

Example 6-21 Create a storage pool


IBM_2145:ITSO-CLS1:ITSO_admin>detectmdisk

IBM_2145:ITSO-CLS1:ITSO_admin>mkmdiskgrp -name MD_LinuxVD -ext 512


MDisk Group, id [8], successfully created

IBM_2145:ITSO-CLS1:ITSO_admin>lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID tier
26 md_LinuxS online image 2 Linux_Pool1 12.0GB 0000000000000008
DS4700 600a0b800026b2820000428f48739bca00000000000000000000000000000000
generic_hdd
27 md_LinuxD online image 3 Linux_Pool2 5.0GB 0000000000000009
DS4700 600a0b800026b282000042f84873c7e100000000000000000000000000000000
generic_hdd
28 mdisk28 online unmanaged 8.0GB 0000000000000010
DS4500 600a0b8000174233000000b9487778ab00000000000000000000000000000000
generic_hdd
29 mdisk29 online unmanaged 8.0GB 0000000000000011
DS4500 600a0b80001744310000010f48776bae00000000000000000000000000000000
generic_hdd
30 mdisk30 online unmanaged 8.0GB 0000000000000012
DS4500 600a0b8000174233000000bb487778d900000000000000000000000000000000
generic_hdd

IBM_2145:ITSO-CLS1:ITSO_admin>chmdisk -name Linux-md1 mdisk28


IBM_2145:ITSO-CLS1:ITSO_admin>chmdisk -name Linux-md2 mdisk29
IBM_2145:ITSO-CLS1:ITSO_admin>chmdisk -name Linux-md3 mdisk30
IBM_2145:ITSO-CLS1:ITSO_admin>addmdisk -mdisk Linux-md1 MD_LinuxVD
IBM_2145:ITSO-CLS1:ITSO_admin>addmdisk -mdisk Linux-md2 MD_LinuxVD
IBM_2145:ITSO-CLS1:ITSO_admin>addmdisk -mdisk Linux-md3 MD_LinuxVD


IBM_2145:ITSO-CLS1:ITSO_admin>lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID tier
26 md_LinuxS online image 2 Linux_Pool1 12.0GB 0000000000000008
DS4700 600a0b800026b2820000428f48739bca00000000000000000000000000000000
generic_hdd
27 md_LinuxD online image 3 Linux_Pool2 5.0GB 0000000000000009
DS4700 600a0b800026b282000042f84873c7e100000000000000000000000000000000
generic_hdd
28 Linux-md1 online unmanaged 8 MD_LinuxVD 8.0GB 0000000000000010
DS4500 600a0b8000174233000000b9487778ab00000000000000000000000000000000
generic_hdd
29 Linux-md2 online unmanaged 8 MD_LinuxVD 8.0GB 0000000000000011
DS4500 600a0b80001744310000010f48776bae00000000000000000000000000000000
generic_hdd
30 Linux-md3 online unmanaged 8 MD_LinuxVD 8.0GB 0000000000000012
DS4500 600a0b8000174233000000bb487778d900000000000000000000000000000000
generic_hdd

Migrating the volumes


We are now ready to migrate the image mode volumes onto striped volumes in the
MD_LinuxVD storage pool by using the migratevdisk command, as shown in Example 6-22.

Example 6-22 Migrating image mode volumes to striped volumes


IBM_2145:ITSO-CLS1:ITSO_admin>migratevdisk -vdisk Linux_SANB -mdiskgrp MD_LinuxVD
IBM_2145:ITSO-CLS1:ITSO_admin>migratevdisk -vdisk Linux_Data -mdiskgrp MD_LinuxVD

IBM_2145:ITSO-CLS1:ITSO_admin>lsmigrate
migrate_type MDisk_Group_Migration
progress 25
migrate_source_vdisk_index 29
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 70
migrate_source_vdisk_index 30
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0

While the migration is running, our Linux server is still running.

To check the overall progress of the migration, we use the lsmigrate command, as shown in
Example 6-22. Listing the storage pool by using the lsmdiskgrp command shows that the free
capacity on the old storage pools is slowly increasing while those extents are moved to the
new storage pool.
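If you prefer not to rerun the command manually, a small loop on a management workstation can
poll the progress until all migrations finish. This is a sketch only; it assumes SSH key
access to the cluster (adjust the cluster address and user to your environment) and uses the
svcinfo command prefix:

# Poll the migration progress once a minute until lsmigrate returns nothing
while ssh ITSO_admin@ITSO-CLS1 svcinfo lsmigrate | grep -q progress; do
    ssh ITSO_admin@ITSO-CLS1 svcinfo lsmigrate
    sleep 60
done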

After this task completes, the volumes are now spread over three MDisks, as shown in
Example 6-23.

Example 6-23 Migration complete


IBM_2145:ITSO-CLS1:ITSO_admin>lsmdiskgrp MD_LinuxVD
id 8
name MD_LinuxVD
status online
mdisk_count 3
vdisk_count 2
capacity 24.0GB
extent_size 512
free_capacity 7.0GB
virtual_capacity 17.00GB
used_capacity 17.00GB
real_capacity 17.00GB
overallocation 70
warning 0
IBM_2145:ITSO-CLS1:ITSO_admin>lsvdiskmember Linux_SANB
id
28
29
30

IBM_2145:ITSO-CLS1:ITSO_admin>lsvdiskmember Linux_Data
id
28
29
30

Our migration to striped volumes on another storage subsystem (DS4500) is now complete.
The original MDisks (md_LinuxS and md_LinuxD) can now be removed from the SVC, and
these LUNs can be removed from the DS4700 storage subsystem.

If these LUNs are the last LUNs that were used on our DS4700 storage subsystem, we can
remove the DS4700 storage subsystem from our SAN fabric.
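As a sketch of that cleanup, assuming that the two source MDisks dropped to unmanaged mode
after the migration (if they are still listed in their original storage pools, remove them
first with the rmmdisk command): unmap the two LUNs at the DS4700 by using Storage Manager,
and then run the following commands so that the stale entries are discarded and you can
confirm that no unmanaged MDisks remain:

IBM_2145:ITSO-CLS1:ITSO_admin>detectmdisk
IBM_2145:ITSO-CLS1:ITSO_admin>lsmdisk -filtervalue mode=unmanaged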

6.7.4 Preparing to migrate from SVC


Before we move the Linux server’s LUNs from being accessed by the SVC as volumes to
being directly accessed from the storage subsystem, we must convert the volumes into image
mode volumes.

You might want to perform this task for any one of the following reasons:
- You purchased a new storage subsystem and you were using SVC as a tool to migrate
from your old storage subsystem to this new storage subsystem.
- You used the SVC to FlashCopy or Metro Mirror a volume onto another volume, and you
no longer need that host connected to the SVC.
- You want to move a host, which is connected to the SVC, and its data to a site where no
SVC exists.
- Changes to your environment no longer require this host to use the SVC.

We can perform other preparation tasks before we must shut down the host and reconfigure
the LUN masking and mapping. We describe these tasks in this section.

If you are moving the data to a new storage subsystem, it is assumed that the storage
subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches.
Your environment must look similar to our environment, which is shown in Figure 6-86 on
page 299.


Figure 6-86 Environment with the SVC (zoning for the migration scenario: the Linux host, the SVC I/O
group, and the storage subsystems on the SAN, with the Green, Red, Blue, and Black zones)

The migration process consists of the following steps:


1. Making fabric zone changes
The first step is to set up the SAN configuration so that all of the zones are created. You
must add the new storage subsystem to the Red Zone so that the SVC can communicate
with it directly.
We also need a Green Zone for our host to use when we are ready for it to directly access
the disk after it is removed from the SVC.
It is assumed that you created the necessary zones, and after your zone configuration is
set up correctly, the SVC sees the new storage subsystem controller by using the
lscontroller command, as shown in Example 6-24.

Example 6-24 Check controller name


IBM_2145:ITSO-CLS1:ITSO_admin>lscontroller
id controller_name ctrl_s/n vendor_id product_id_low
0 controller0 IBM 1814
1 controller1 2076 IBM 2145
2 controller2 2076 IBM 2145

It is also a good idea to rename the new storage subsystem’s controller to a more useful
name, which can be done by using the chcontroller -name command, as shown in
Example 6-25.

Example 6-25 Rename controller


IBM_2145:ITSO-CLS1:ITSO_admin>chcontroller -name ITSO-4700 0


Also, verify that the controller name was changed as you wanted, as shown in Example 6-26.

Example 6-26 Recheck controller name


IBM_2145:ITSO-CLS1:ITSO_admin>lscontroller
id controller_name ctrl_s/n vendor_id product_id_low
0 ITSO-4700 IBM 1814

2. Creating LUNs
We created two LUNs and masked them on our storage subsystem so that the SVC
can see them. Eventually, we give these two LUNs directly to the host and remove the
volumes that the host currently uses. To check that the SVC can use these two LUNs, run the
detectmdisk command, as shown in Example 6-27.

Example 6-27 Discover the new MDisks


IBM_2145:ITSO-CLS1:ITSO_admin> detectmdisk
IBM_2145:ITSO-CLS1:ITSO_admin> lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
0 mdisk0 online managed
600a0b800026b282000042f84873c7e100000000000000000000000000000000
28 Linux-md1 online managed 8
MD_LinuxVD 8.0GB 0000000000000010 DS4500
600a0b8000174233000000b9487778ab00000000000000000000000000000000
29 Linux-md2 online managed 8
MD_LinuxVD 8.0GB 0000000000000011 DS4500
600a0b80001744310000010f48776bae00000000000000000000000000000000
30 Linux-md3 online managed 8
MD_LinuxVD 8.0GB 0000000000000012 DS4500
600a0b8000174233000000bb487778d900000000000000000000000000000000
31 mdisk31 online unmanaged
6.0GB 0000000000000013 DS4500
600a0b8000174233000000bd4877890f00000000000000000000000000000000
32 mdisk32 online unmanaged
12.5GB 0000000000000014 DS4500
600a0b80001744310000011048777bda00000000000000000000000000000000

Even though the MDisks do not stay in the SVC for long, we suggest that you rename
them to more meaningful names so that they are not confused with other MDisks that are
used by other activities.
Also, we create a storage pool to hold our new MDisks, as shown in Example 6-28.

Example 6-28 Rename the MDisks


IBM_2145:ITSO-CLS1:ITSO_admin> chmdisk -name mdLinux_ivd mdisk32

IBM_2145:ITSO-CLS1:ITSO_admin> mkmdiskgrp -name MDG_Linuxivd -ext 512


MDisk Group, id [9], successfully created

IBM_2145:ITSO-CLS1:ITSO_admin> lsmdiskgrp
id name status mdisk_count vdisk_count capacity
extent_size free_capacity virtual_capacity used_capacity real_capacity
overallocation warning easy_tier easy_tier_status


8 MD_LinuxVD online 3 2 24.0GB


512 7.0GB 17.00GB 17.00GB 17.00GB 70
0 auto inactive
9 MDG_Linuxivd online 0 0 0
512 0 0.00MB 0.00MB 0.00MB 0
0 auto inactive

Our SVC environment is now ready for the volume migration to image mode volumes.

6.7.5 Migrating volumes to image mode volumes


While our Linux server is still running, we migrate the managed volumes onto the new MDisks
by using image mode volumes. Use the migratetoimage command to perform this task, as
shown in Example 6-29.

Example 6-29 Migrate the volumes to image mode volumes


IBM_2145:ITSO-CLS1:ITSO_admin> migratetoimage -vdisk Linux_SANB -mdisk mdLinux_ivd
-mdiskgrp MD_LinuxVD
IBM_2145:ITSO-CLS1:ITSO_admin> migratetoimage -vdisk Linux_Data -mdisk mdLinux_ivd1
-mdiskgrp MD_LinuxVD

IBM_2145:ITSO-CLS1:ITSO_admin> lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
28 Linux-md1 online managed 8
MD_LinuxVD 8.0GB 0000000000000010 DS4500
600a0b8000174233000000b9487778ab00000000000000000000000000000000
29 Linux-md2 online managed 8
MD_LinuxVD 8.0GB 0000000000000011 DS4500
600a0b80001744310000010f48776bae00000000000000000000000000000000
30 Linux-md3 online managed 8
MD_LinuxVD 8.0GB 0000000000000012 DS4500
600a0b8000174233000000bb487778d900000000000000000000000000000000
31 mdLinux_ivd1 online image 8
MD_LinuxVD 6.0GB 0000000000000013 DS4500
600a0b8000174233000000bd4877890f00000000000000000000000000000000
32 mdLinux_ivd online image 8
MD_LinuxVD 12.5GB 0000000000000014 DS4500
600a0b80001744310000011048777bda00000000000000000000000000000000

IBM_2145:ITSO-CLS1:ITSO_admin> lsmigrate
migrate_type Migrate_to_Image
progress 4
migrate_source_vdisk_index 29
migrate_target_mdisk_index 32
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type Migrate_to_Image
progress 30
migrate_source_vdisk_index 30
migrate_target_mdisk_index 31
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0


During the migration, our Linux server is unaware that its data is being physically moved
between storage subsystems.

After the migration completes, the image mode volumes are ready to be removed from the
Linux server. Also, the real LUNs can be mapped and masked directly to the host by using the
storage subsystem’s tool.

6.7.6 Removing LUNs from SVC


The next step requires downtime on the Linux server because we remap and remask the
disks so that the host sees them directly through the Green Zone, as shown in Figure 6-86 on
page 299.

Our Linux server has two LUNs: one LUN is our boot disk and holds operating system file
systems, and the other LUN holds our application and data files. Moving both LUNs at one
time requires shutting down the host.

If we want to move only the LUN that holds our application and data files, we can move that
LUN without rebooting the host. The only requirement is that we unmount the file system and
vary off the VG to ensure data integrity during the reassignment.

Before you start: Moving LUNs to another storage subsystem might require another entry in
the multipath.conf file. Check with the storage subsystem vendor to identify any content
that you must add to the file. You might be able to install and modify the file in advance.
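For example, a placeholder stanza of the following shape could be staged in the file in
advance. The vendor string, product string, and policy that are shown here are purely
illustrative; use the values that your storage vendor documents:

# Hypothetical example only: replace the vendor, product, and policy
# with the values documented for your storage subsystem
device {
	vendor "VENDORNAME"
	product "PRODUCTID"
	path_grouping_policy group_by_prio
}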

Complete the following steps to move both LUNs at the same time:
1. Confirm that your operating system is configured for the new storage.
2. Shut down the host.
If you are moving only the LUNs that contain the application and data, complete the
following steps:
a. Stop the applications that are using the LUNs.
b. Unmount those file systems by using the umount MOUNT_POINT command.
c. If the file systems are an LVM volume, deactivate that VG by using the vgchange -a n
VOLUMEGROUP_NAME command.
d. If you can, unload your HBA driver by using the rmmod DRIVER_MODULE command. This
command removes the SCSI definitions from the kernel. (We reload this module and
rediscover the disks later.) It is possible to tell the Linux SCSI subsystem to rescan for
new disks without requiring you to unload the HBA driver; however, we do not provide
those details here.
3. Remove the volumes from the host by using the rmvdiskhostmap command
(Example 6-30). To confirm that you removed the volumes, use the lshostvdiskmap
command, which shows that these disks are no longer mapped to the Linux server.

Example 6-30 Remove the volumes from the host


IBM_2145:ITSO-CLS1:ITSO_admin> rmvdiskhostmap -host Linux Linux_SANB
IBM_2145:ITSO-CLS1:ITSO_admin> rmvdiskhostmap -host Linux Linux_Data
IBM_2145:ITSO-CLS1:ITSO_admin> lshostvdiskmap Linux


4. Remove the volumes from the SVC by using the rmvdisk command. This step makes the
underlying MDisks unmanaged, as shown in Example 6-31.

Cached data: When you run the rmvdisk command, the SVC first confirms that no
outstanding dirty cached data exists for the volume that is being removed. If cached
data is still uncommitted, the command fails with the following error message:
CMMVC6212E The command failed because data in the cache has not been
committed to disk

You must wait for this cached data to be committed to the underlying storage
subsystem before you can remove the volume.

The SVC automatically destages uncommitted cached data 2 minutes after the last
write activity for the volume. How much data needs to be destaged and how busy the
I/O subsystem is determine how long this command takes to complete.

You can check whether the volume has uncommitted data in the cache by using the
command lsvdisk <VDISKNAME> and checking the fast_write_state attribute. This
attribute has the following meanings:
- empty: No modified data exists in the cache.
- not_empty: Modified data might exist in the cache.
- corrupt: Modified data might have existed in the cache, but that data was lost.
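As a quick check before you run the rmvdisk command, you can display the volume details and
look at the fast_write_state field; this sketch uses the volume names from our example, and
you wait until the field shows empty:

IBM_2145:ITSO-CLS1:ITSO_admin> lsvdisk Linux_SANB
IBM_2145:ITSO-CLS1:ITSO_admin> lsvdisk Linux_Data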

Example 6-31 Remove volumes from SVC


IBM_2145:ITSO-CLS1:ITSO_admin> rmvdisk Linux_SANB
IBM_2145:ITSO-CLS1:ITSO_admin> rmvdisk Linux_Data
IBM_2145:ITSO-CLS1:ITSO_admin> lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
31 mdLinux_ivd1 online unmanaged
6.0GB 0000000000000013 DS4500
600a0b8000174233000000bd4877890f00000000000000000000000000000000
32 mdLinux_ivd online unmanaged
12.5GB 0000000000000014 DS4500
600a0b80001744310000011048777bda00000000000000000000000000000000

5. By using Storage Manager (our storage subsystem management tool), unmap and
unmask the disks from the SVC back to the Linux server.

Important: If one of the disks is used to boot your Linux server, you must ensure that
the disk is presented back to the host as SCSI ID 0 so that the FC adapter BIOS finds
that disk during its initialization.

6. Power on your host server and enter your FC HBA BIOS before you boot the OS. Ensure
that you change the boot configuration so that it points to the storage subsystem. In our
example, we performed the following steps on a QLogic HBA:
a. Pressed Ctrl+Q to enter the HBA BIOS
b. Opened Configuration Settings
c. Opened Selectable Boot Settings
d. Changed the entry from the SVC to the storage subsystem LUN with SCSI ID 0


e. Exited the menu and saved the changes

Important: This step is the last step that you can perform and still safely back out from
the changes so far.

Up to this point, you can reverse everything that you performed so far and get the server
back online without data loss by completing the following actions:
- Remap and remask the LUNs back to the SVC.
- Run the detectmdisk command to rediscover the MDisks.
- Re-create the volumes with the mkvdisk command.
- Remap the volumes back to the server with the mkvdiskhostmap command.

After you start the next step, you might not be able to turn back without the risk of data
loss.
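As a sketch of that back-out path, using the names from earlier in this section and the
MDG_Linuxivd pool as one possible home for the re-created image mode volumes (your pool,
MDisk, and host names might differ):

IBM_2145:ITSO-CLS1:ITSO_admin> detectmdisk
IBM_2145:ITSO-CLS1:ITSO_admin> mkvdisk -mdiskgrp MDG_Linuxivd -iogrp 0 -vtype image -mdisk
mdLinux_ivd -name Linux_SANB
IBM_2145:ITSO-CLS1:ITSO_admin> mkvdisk -mdiskgrp MDG_Linuxivd -iogrp 0 -vtype image -mdisk
mdLinux_ivd1 -name Linux_Data
IBM_2145:ITSO-CLS1:ITSO_admin> mkvdiskhostmap -host Linux -scsi 0 Linux_SANB
IBM_2145:ITSO-CLS1:ITSO_admin> mkvdiskhostmap -host Linux -scsi 1 Linux_Data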

7. Restart the Linux server.


If all of the zoning, LUN masking, and mapping were successful, the Linux server boots as
though nothing happened.
However, if you moved only the application LUN back to the storage subsystem and left your
Linux server running, you must complete the following steps to see the new volume:
a. Load your HBA driver by using the modprobe DRIVER_NAME command.
If you did not (and cannot) unload your HBA driver, you can run commands to the
kernel to rescan the SCSI bus to see the new volumes. (Details for this step are beyond
the scope of this book.)
b. Check your syslog and verify that the kernel found the new volumes. On Red Hat
Enterprise Linux, the syslog is written to the /var/log/messages file.
c. If your application and data are on an LVM volume, run the vgscan command to
rediscover the VG. Then, run the vgchange -a y VOLUME_GROUP command to activate
the VG.
8. Mount your file systems by using the mount /MOUNT_POINT command, as shown in
Example 6-32. The df output shows that all of the disks are available again.

Example 6-32 File system after migration


[root@Linux ~]# mount /dev/dm-2 /data
[root@Linux ~]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
10093752 1938124 7634620 21% /
/dev/sda1 101086 12054 83813 13% /boot
tmpfs 1033496 0 1033496 0% /dev/shm
/dev/dm-2 5160576 158160 4740272 4% /data

You are ready to start your application.


9. To ensure that the MDisks are removed from the SVC, run the detectmdisk command.
The MDisks first are discovered as offline. Then, they are automatically removed when the
SVC determines that no volumes are associated with these MDisks.


6.8 Migrating ESX SAN disks to SVC disks


In this section, we describe how to move the two LUNs from our VMware ESX server to the
SVC. The ESX operating system is installed locally on the host, but the two SAN disks are
attached to it and the VMs are stored on them. We then manage those LUNs with the SVC, move
them between other managed disks, and finally move them back to image mode disks so that
those LUNs can then be masked and mapped back to the VMware ESX server directly.

This example can help you perform any one of the following tasks in your environment:
- Move your ESX server's data LUNs (the LUNs that hold the VMware VMFS file systems
where your VMs are stored), which are directly accessed from a storage subsystem, to
virtualized disks under the control of the SVC.
- Move LUNs between storage subsystems while your VMware VMs are still running.
You can perform this task to move the data onto LUNs that are more appropriate for the
type of data that is stored on those LUNs, considering availability, performance, and
redundancy. For more information, see 6.8.3, “Migrating image mode volumes” on
page 312.
- Move your VMware ESX server's LUNs back to image mode volumes so that they can be
remapped and remasked directly back to the server.
This task starts in 6.8.4, “Preparing to migrate from SVC” on page 315.

You can use these tasks individually or together to migrate your VMware ESX server’s LUNs
from one storage subsystem to another storage subsystem by using the SVC as your
migration tool. If you do not need all three of these tasks, you can use a subset of them to
introduce the SVC into your environment or to move data between your storage subsystems.
The only downtime that is
required for these tasks is the time that it takes you to remask and remap the LUNs between
the storage subsystems and your SVC.

Our starting SAN environment is shown in Figure 6-87.

Figure 6-87 ESX environment before migration


Figure 6-87 shows our ESX server that is connected to the SAN infrastructure. Two LUNs are
masked directly to it from our storage subsystem.

Our ESX server represents a typical SAN environment with a host that directly uses LUNs
that were created on a SAN storage subsystem, as shown in Figure 6-87.

The ESX server’s HBA cards are zoned so that they are in the Green Zone with our storage
subsystem.

The two LUNs that were defined on the storage subsystem and that use LUN masking are
directly available to our ESX server.

6.8.1 Preparing SVC to virtualize ESX disks


This section describes the preparatory tasks that we perform before taking our ESX server or
VMs offline. These tasks are all nondisruptive activities, which do not affect your SAN fabric or
your existing SVC configuration (if you have a production SVC in place).

Creating a storage pool


When we move the two ESX LUNs to the SVC, they first are used in image mode; therefore,
we need a storage pool to hold those disks.

We create an empty storage pool for these disks by using the command that is shown in
Example 6-33. Our MDG_Nile_VM storage pool holds the boot LUN and our data LUN.

Example 6-33 Creating an empty storage pool


IBM_2145:ITSO-CLS1:ITSO_admin> mkmdiskgrp -name MDG_Nile_VM -ext 512
MDisk Group, id [3], successfully created

Creating the host definition


If you prepared the zones correctly, the SVC can see the ESX server’s HBAs on the fabric.
(Our host had only one HBA.)

First, we get the WWNs for our ESX server's HBA ports because many hosts are connected to
our SAN fabric and are zoned in the Blue Zone. We want to ensure that we have the correct
WWNs to reduce our ESX server's downtime.

Log in to your VMware Management Console as root, browse to Configuration, and select
Storage Adapters. The storage adapters are shown on the right side of the window that is
shown in Figure 6-88. This window displays all of the necessary information. Figure 6-88
shows our WWNs, which are 210000E08B89B8C0 and 210000E08B892BCD.


Figure 6-88 Obtain your WWN by using the VMware Management Console

Use the lshbaportcandidate command on the SVC to list all of the WWNs that are not yet
allocated to a host and that the SVC can see on the SAN fabric. Example 6-34 on page 307
shows the output of the host WWNs that it found on our SAN fabric. (If the port is not shown,
a zone configuration problem exists.)

Example 6-34 Available host WWNs


IBM_2145:ITSO-CLS1:ITSO_admin> lshbaportcandidate
id
210000E08B89B8C0
210000E08B892BCD
210000E08B0548BC
210000E08B0541BC
210000E08B89CCC2

After we verify that the SVC can see our host, we create the host entry and assign the WWN
to this entry, as shown in Example 6-35.

Example 6-35 Create the host entry


IBM_2145:ITSO-CLS1:ITSO_admin> mkhost -name Nile -hbawwpn
210000E08B89B8C0:210000E08B892BCD
Host, id [1], successfully created
IBM_2145:ITSO-CLS1:ITSO_admin> lshost Nile
id 1
name Nile
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B892BCD
node_logged_in_count 4
state active
WWPN 210000E08B89B8C0
node_logged_in_count 4
state active


Verifying that you can see your storage subsystem


If our zoning was performed correctly, the SVC can also see the storage subsystem with the
lscontroller command, as shown in Example 6-36.

Example 6-36 Available storage controllers


IBM_2145:ITSO-CLS1:ITSO_admin> lscontroller
id controller_name ctrl_s/n vendor_id
product_id_low product_id_high
0 DS4500 IBM
1 DS4700 IBM 1814

Getting your disk serial numbers


To help avoid the risk of creating the wrong volumes from all of the available, unmanaged
MDisks (if the SVC sees many available, unmanaged MDisks), we get the LUN serial
numbers from our storage subsystem administration tool (Storage Manager).

When we discover these MDisks, we confirm that we have the correct serial numbers before
we create the image mode volumes.

If you also use a DS4000 family storage subsystem, Storage Manager provides the LUN
serial numbers. Right-click your logical drive and choose Properties. Figure 6-89 and
Figure 6-90 show our serial numbers. Figure 6-89 shows disk serial number VM_W2k3.

Figure 6-89 Obtaining disk serial number VM_W2k3

Figure 6-90 shows disk serial number VM_SLES.


Figure 6-90 Obtaining disk serial number VM_SLES

We are ready to move the ownership of the disks to the SVC, discover them as MDisks, and
give them back to the host as volumes.

6.8.2 Moving LUNs to SVC


In this step, we move the LUNs that are assigned to the ESX server and reassign them to the
SVC. Our ESX server has two LUNs, as shown in Figure 6-91.

Figure 6-91 VMware LUNs

The VMs are on these LUNs. Therefore, to move these LUNs under the control of the SVC,
we do not need to reboot the entire ESX server. However, we must stop and suspend all
VMware guests that are using these LUNs.

Moving VMware guest LUNs


To move the VMware LUNs to the SVC, complete the following steps:
1. By using Storage Manager, we identified the LUN number that was presented to the ESX
server. Record which LUN had which LUN number (Figure 6-92).

Figure 6-92 Identify LUN numbers in IBM DS4000 Storage Manager


2. Identify all of the VMware guests that are using this LUN and shut them down. One way to
identify them is to highlight the VM and open the Summary tab. The datastore that is used
is displayed under Datastore. Figure 6-93 on page 310 shows a Linux VM that is using the
datastore that is named SLES_Costa_Rica.

Figure 6-93 Identify the LUNs that are used by the VMs

3. If you have several ESX hosts, also check the other ESX hosts to ensure that no guest
operating system is running and using this datastore.
4. Repeat steps 1 - 3 for every datastore that you want to migrate.
5. After the guests are suspended, we use Storage Manager (our storage subsystem
management tool) to unmap and unmask the disks from the ESX server and to remap and
remask the disks to the SVC.
6. From the SVC, discover the new disks by using the detectmdisk command. The disks are
discovered and named as mdiskN, where N is the next available MDisk number (starting
from 0). Example 6-37 shows the commands that we used to discover our MDisks and to
verify that we have the correct MDisks.

Example 6-37 Discover the new MDisks


IBM_2145:ITSO-CLS1:ITSO_admin> detectmdisk
IBM_2145:ITSO-CLS1:ITSO_admin> lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
21 mdisk21 online unmanaged
60.0GB 0000000000000008 DS4700
600a0b800026b282000041ca486d14a500000000000000000000000000000000
22 mdisk22 online unmanaged
70.0GB 0000000000000009 DS4700
600a0b80002904de0000447a486d14cd00000000000000000000000000000000


Important: Match your discovered MDisk serial numbers (UID on the lsmdisk
command task display) with the serial number that you obtained earlier, as shown in
Figure 6-89 on page 308 and Figure 6-90 on page 309.

7. After we verify that we have the correct MDisks, we rename them to avoid confusion in the
future when we perform other MDisk-related tasks, as shown in Example 6-38.

Example 6-38 Rename the MDisks


IBM_2145:ITSO-CLS1:ITSO_admin> chmdisk -name ESX_W2k3 mdisk22
IBM_2145:ITSO-CLS1:ITSO_admin> chmdisk -name ESX_SLES mdisk21
IBM_2145:ITSO-CLS1:ITSO_admin> lsmdisk

21 ESX_SLES online unmanaged


60.0GB 0000000000000008 DS4700
600a0b800026b282000041ca486d14a500000000000000000000000000000000
22 ESX_W2k3 online unmanaged
70.0GB 0000000000000009 DS4700
600a0b80002904de0000447a486d14cd00000000000000000000000000000000

8. We create our image mode volumes by using the mkvdisk command (Example 6-39). The
use of the -vtype image parameter ensures that it creates image mode volumes, which
means that the virtualized disks have the same layout as though they were not virtualized.

Example 6-39 Create the image mode volumes


IBM_2145:ITSO-CLS1:ITSO_admin> mkvdisk -mdiskgrp MDG_Nile_VM -iogrp 0 -vtype
image -mdisk ESX_W2k3 -name ESX_W2k3_IVD
Virtual Disk, id [29], successfully created

IBM_2145:ITSO-CLS1:ITSO_admin> mkvdisk -mdiskgrp MDG_Nile_VM -iogrp 0 -vtype


image -mdisk ESX_SLES -name ESX_SLES_IVD
Virtual Disk, id [30], successfully created

9. We can map the new image mode volumes to the host. Use the same SCSI LUN IDs as
on the storage subsystem for the mapping, as shown in Example 6-40.

Example 6-40 Map the volumes to the host


IBM_2145:ITSO-CLS1:ITSO_admin> mkvdiskhostmap -host Nile -scsi 0 ESX_SLES_IVD
Virtual Disk to Host map, id [0], successfully created

IBM_2145:ITSO-CLS1:ITSO_admin> mkvdiskhostmap -host Nile -scsi 1 ESX_W2k3_IVD


Virtual Disk to Host map, id [1], successfully created

IBM_2145:ITSO-CLS1:ITSO_admin> lshostvdiskmap
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
1 Nile 0 30 ESX_SLES_IVD 210000E08B892BCD
60050768018301BF280000000000002A
1 Nile 1 29 ESX_W2k3_IVD 210000E08B892BCD
60050768018301BF2800000000000029


10.By using the VMware Management Console, rescan to discover the new volumes. Open
the Configuration tab, select Storage Adapters, and then click Rescan. During the
rescan, you might receive geometry errors when ESX discovers that the old disk
disappeared. Your volumes appear with new vmhba device names.
11.We are ready to restart the VMware guests.

At this point, you migrated the VMware LUNs successfully to the SVC.
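On classic ESX releases that include a service console, the rescan can also be started from
the command line. The following is a sketch only; the adapter name vmhba1 is a placeholder,
and on ESXi releases without a service console you use the vSphere client rescan instead:

[root@Nile root]# esxcfg-rescan vmhba1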

6.8.3 Migrating image mode volumes


While the VMware server and its VMs are still running, we migrate the image mode volumes
onto striped volumes, with the extents being spread over three other MDisks.

Preparing MDisks for striped mode volumes


In this example, we migrate the image mode volumes to striped volumes and move the data to
another storage subsystem in one step.

Adding a storage subsystem to the IBM SAN Volume Controller


If you are moving the data to a new storage subsystem, it is assumed that this storage
subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches.
Your environment must look similar to our environment, which is shown in Figure 6-94.

Figure 6-94 ESX SVC SAN environment

Make fabric zone changes


The first step is to set up the SAN configuration so that all of the zones are created. Add the
new storage subsystem to the Red Zone so that the SVC can communicate with it directly.

We also need a Green Zone for our host to use when we are ready for it to directly access the
disk after it is removed from the SVC.


We assume that you created the necessary zones.

In our environment, we performed the following tasks:


- Created three LUNs on another storage subsystem and mapped them to the SVC
- Discovered the LUNs as MDisks
- Created a storage pool
- Renamed these LUNs to more meaningful names
- Put all these MDisks into this storage pool

You can see the output of the commands in Example 6-41.

Example 6-41 Create a storage pool


IBM_2145:ITSO-CLS1:ITSO_admin> detectmdisk
IBM_2145:ITSO-CLS1:ITSO_admin> lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name
UID
21 ESX_SLES online image 3
MDG_Nile_VM 60.0GB 0000000000000008 DS4700
600a0b800026b282000041ca486d14a500000000000000000000000000000000
22 ESX_W2k3 online image 3
MDG_Nile_VM 70.0GB 0000000000000009 DS4700
600a0b80002904de0000447a486d14cd00000000000000000000000000000000
23 mdisk23 online unmanaged
55.0GB 000000000000000D DS4500
600a0b8000174233000000b4486d250300000000000000000000000000000000
24 mdisk24 online unmanaged
55.0GB 000000000000000E DS4500
600a0b800017443100000108486d182c00000000000000000000000000000000
25 mdisk25 online unmanaged
55.0GB 000000000000000F DS4500
600a0b8000174233000000b5486d255b00000000000000000000000000000000

IBM_2145:ITSO-CLS1:ITSO_admin> mkmdiskgrp -name MDG_ESX_VD -ext 512


IBM_2145:ITSO-CLS1:ITSO_admin> chmdisk -name IBMESX-MD1 mdisk23
IBM_2145:ITSO-CLS1:ITSO_admin> chmdisk -name IBMESX-MD2 mdisk24
IBM_2145:ITSO-CLS1:ITSO_admin> chmdisk -name IBMESX-MD3 mdisk25
IBM_2145:ITSO-CLS1:ITSO_admin> addmdisk -mdisk IBMESX-MD1 MDG_ESX_VD
IBM_2145:ITSO-CLS1:ITSO_admin> addmdisk -mdisk IBMESX-MD2 MDG_ESX_VD
IBM_2145:ITSO-CLS1:ITSO_admin> addmdisk -mdisk IBMESX-MD3 MDG_ESX_VD

IBM_2145:ITSO-CLS1:ITSO_admin> lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name
UID
21 ESX_SLES online image 3
MDG_Nile_VM 60.0GB 0000000000000008 DS4700
600a0b800026b282000041ca486d14a500000000000000000000000000000000
22 ESX_W2k3 online image 3
MDG_Nile_VM 70.0GB 0000000000000009 DS4700
600a0b80002904de0000447a486d14cd00000000000000000000000000000000
23 IBMESX-MD1 online managed 4
MDG_ESX_VD 55.0GB 000000000000000D DS4500
600a0b8000174233000000b4486d250300000000000000000000000000000000
24 IBMESX-MD2 online managed 4
MDG_ESX_VD 55.0GB 000000000000000E DS4500
600a0b800017443100000108486d182c00000000000000000000000000000000


25 IBMESX-MD3 online managed 4


MDG_ESX_VD 55.0GB 000000000000000F DS4500
600a0b8000174233000000b5486d255b00000000000000000000000000000000

Migrating the volumes


We are now ready to migrate the image mode volumes onto striped volumes in the new
storage pool (MDG_ESX_VD) by using the migratevdisk command, as shown in
Example 6-42. While the migration is running, our VMware ESX server and our VMware
guests continue to run.

To check the overall progress of the migration, we use the lsmigrate command, as shown in
Example 6-42. Listing the storage pool with the lsmdiskgrp command shows that the free
capacity on the old storage pool is slowly increasing as those extents are moved to the new
storage pool.

Example 6-42 Migrating image mode volumes to striped volumes


IBM_2145:ITSO-CLS1:ITSO_admin> migratevdisk -vdisk ESX_SLES_IVD -mdiskgrp MDG_ESX_VD
IBM_2145:ITSO-CLS1:ITSO_admin> migratevdisk -vdisk ESX_W2k3_IVD -mdiskgrp MDG_ESX_VD
IBM_2145:ITSO-CLS1:ITSO_admin> lsmigrate
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 30
migrate_target_mdisk_grp 4
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 29
migrate_target_mdisk_grp 4
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:ITSO_admin> lsmigrate
migrate_type MDisk_Group_Migration
progress 1
migrate_source_vdisk_index 30
migrate_target_mdisk_grp 4
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 29
migrate_target_mdisk_grp 4
max_thread_count 4
migrate_source_vdisk_copy_id 0

IBM_2145:ITSO-CLS1:ITSO_admin> lsmdiskgrp
id name status mdisk_count vdisk_count capacity
extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation
warning
3 MDG_Nile_VM online 2 2 130.0GB
512 1.0GB 130.00GB 130.00GB 130.00GB 100
4 MDG_ESX_VD online 3 0 165.0GB
512 35.0GB 0.00MB 0.00MB 0.00MB 0

If you compare the lsmdiskgrp output after the migration (as shown in Example 6-43), you
can see that all of the virtual capacity was moved from the old storage pool (MDG_Nile_VM)
to the new storage pool (MDG_ESX_VD). The mdisk_count column shows that the capacity is
now spread over three MDisks.

Example 6-43 List MDisk group


IBM_2145:ITSO-CLS1:ITSO_admin> lsmdiskgrp
id name status mdisk_count vdisk_count
capacity extent_size free_capacity virtual_capacity used_capacity
real_capacity overallocation warning
3 MDG_Nile_VM online 2 0
130.0GB 512 130.0GB 0.00MB 0.00MB
0.00MB 0 0
4 MDG_ESX_VD online 3 2
165.0GB 512 35.0GB 130.00GB 130.00GB
130.00GB 78 0

The migration to the SVC is complete. You can remove the original MDisks from the SVC and
remove these LUNs from the storage subsystem.

If these LUNs are the last LUNs that were used on our storage subsystem, we can remove it
from our SAN fabric.

6.8.4 Preparing to migrate from SVC


Before we move the ESX server's LUNs from being accessed by the SVC as volumes to being
directly accessed from the storage subsystem, we must convert the volumes into image mode
volumes.

You might want to perform this process for any one of the following reasons:
- You purchased a new storage subsystem and you were using the SVC as a tool to migrate
from your old storage subsystem to this new storage subsystem.
- You used the SVC to FlashCopy or Metro Mirror a volume onto another volume, and you
no longer need that host connected to the SVC.
- You want to move a host, which is connected to the SVC, and its data to a site where no
SVC exists.
- Changes to your environment no longer require this host to use the SVC.

We can perform other preparatory activities before we shut down the host and reconfigure the
LUN masking and mapping. This section describes those activities. In our example, we move
volumes that are on a DS4500 to image mode volumes that are on a DS4700.

If you are moving the data to a new storage subsystem, it is assumed that this storage
subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches.
Your environment must look similar to our environment, as described in “Adding a storage
subsystem to the IBM SAN Volume Controller” on page 312 and “Make fabric zone changes”
on page 312.

Creating LUNs
On our storage subsystem, we create two LUNs and mask them so that the SVC can see
them. These two LUNs eventually are given directly to the host, replacing the volumes that
the host currently uses. To check that the SVC can use them, run the detectmdisk command,
as shown in Example 6-44.
Example 6-44.


Example 6-44 Discover the new MDisks


IBM_2145:ITSO-CLS1:ITSO_admin> detectmdisk
IBM_2145:ITSO-CLS1:ITSO_admin> lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name
UID

23 IBMESX-MD1 online managed 4


MDG_ESX_VD 55.0GB 000000000000000D DS4500
600a0b8000174233000000b4486d250300000000000000000000000000000000
24 IBMESX-MD2 online managed 4
MDG_ESX_VD 55.0GB 000000000000000E DS4500
600a0b800017443100000108486d182c00000000000000000000000000000000
25 IBMESX-MD3 online managed 4
MDG_ESX_VD 55.0GB 000000000000000F DS4500
600a0b8000174233000000b5486d255b00000000000000000000000000000000
26 mdisk26 online unmanaged
120.0GB 000000000000000A DS4700
600a0b800026b282000041f0486e210100000000000000000000000000000000
27 mdisk27 online unmanaged
100.0GB 000000000000000B DS4700
600a0b800026b282000041e3486e20cf00000000000000000000000000000000

Although the MDisks do not stay in the SVC long, we suggest that you rename them to more
meaningful names so that they are not confused with other MDisks that are being used by
other activities. We also create a storage pool to hold our new MDisks, as shown in
Example 6-45.

Example 6-45 Rename the MDisks


IBM_2145:ITSO-CLS1:ITSO_admin> chmdisk -name ESX_IVD_SLES mdisk26
IBM_2145:ITSO-CLS1:ITSO_admin> chmdisk -name ESX_IVD_W2K3 mdisk27

IBM_2145:ITSO-CLS1:ITSO_admin> mkmdiskgrp -name MDG_IVD_ESX -ext 512


MDisk Group, id [5], successfully created

IBM_2145:ITSO-CLS1:ITSO_admin> lsmdiskgrp
id name status mdisk_count vdisk_count
capacity extent_size free_capacity virtual_capacity used_capacity
real_capacity overallocation warning
4 MDG_ESX_VD online 3 2
165.0GB 512 35.0GB 130.00GB 130.00GB
130.00GB 78 0
5 MDG_IVD_ESX online 0 0 0
512 0 0.00MB 0.00MB 0.00MB 0
0

Our SVC environment is ready for the volume migration to image mode volumes.


6.8.5 Migrating the managed volumes to image mode volumes


While our ESX server is still running, we migrate the managed volumes onto the new MDisks
by using image mode volumes. The command to perform this task is the migratetoimage
command, which is shown in Example 6-46.

Example 6-46 Migrate the volumes to image mode volumes


IBM_2145:ITSO-CLS1:ITSO_admin> migratetoimage -vdisk ESX_SLES_IVD -mdisk ESX_IVD_SLES
-mdiskgrp MDG_IVD_ESX
IBM_2145:ITSO-CLS1:ITSO_admin> migratetoimage -vdisk ESX_W2k3_IVD -mdisk ESX_IVD_W2K3
-mdiskgrp MDG_IVD_ESX

IBM_2145:ITSO-CLS1:ITSO_admin> lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name
UID
23 IBMESX-MD1 online managed 4
MDG_ESX_VD 55.0GB 000000000000000D DS4500
600a0b8000174233000000b4486d250300000000000000000000000000000000
24 IBMESX-MD2 online managed 4
MDG_ESX_VD 55.0GB 000000000000000E DS4500
600a0b800017443100000108486d182c00000000000000000000000000000000
25 IBMESX-MD3 online managed 4
MDG_ESX_VD 55.0GB 000000000000000F DS4500
600a0b8000174233000000b5486d255b00000000000000000000000000000000
26 ESX_IVD_SLES online image 5
MDG_IVD_ESX 120.0GB 000000000000000A DS4700
600a0b800026b282000041f0486e210100000000000000000000000000000000
27 ESX_IVD_W2K3 online image 5
MDG_IVD_ESX 100.0GB 000000000000000B DS4700
600a0b800026b282000041e3486e20cf00000000000000000000000000000000

During the migration, our ESX server is unaware that its data is being physically moved
between storage subsystems. We can continue to run and use the VMs that are running on
the server.

You can check the migration status by using the lsmigrate command, as shown in
Example 6-47.

Example 6-47 The lsmigrate command and output


IBM_2145:ITSO-CLS1:ITSO_admin> lsmigrate
migrate_type Migrate_to_Image
progress 2
migrate_source_vdisk_index 29
migrate_target_mdisk_index 27
migrate_target_mdisk_grp 5
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type Migrate_to_Image
progress 12
migrate_source_vdisk_index 30
migrate_target_mdisk_index 26
migrate_target_mdisk_grp 5
max_thread_count 4
migrate_source_vdisk_copy_id 0


After the migration completes, the image mode volumes are ready to be removed from the
ESX server and the real LUNs can be mapped and masked directly to the host by using the
storage subsystem’s tool.

6.8.6 Removing LUNs from SVC


Your ESX server’s configuration determines in what order your LUNs are removed from the
control of the SVC, and whether you must reboot the ESX server and suspend the VMware
guests.

In our example, we moved the VM disks. Therefore, to remove these LUNs from the control of
the SVC, we must stop and suspend all of the VMware guests that are using these LUNs.
Complete the following steps:
1. Check which SCSI LUN IDs are assigned to the migrated disks by using the
lshostvdiskmap command, as shown in Example 6-48. Compare the vdisk_UID values in the
lshostvdiskmap and lsvdisk output to match each volume with its SCSI LUN ID.

Example 6-48 Note the SCSI LUN IDs


IBM_2145:ITSO-CLS1:ITSO_admin> lshostvdiskmap
id name SCSI_id vdisk_id vdisk_name
wwpn vdisk_UID
1 Nile 0 30 ESX_SLES_IVD
210000E08B892BCD 60050768018301BF280000000000002A
1 Nile 1 29 ESX_W2k3_IVD
210000E08B892BCD 60050768018301BF2800000000000029

IBM_2145:ITSO-CLS1:ITSO_admin> lsvdisk
id name IO_group_id IO_group_name status
mdisk_grp_id mdisk_grp_name capacity type FC_id
FC_name RC_id RC_name vdisk_UID
fc_map_count copy_count
0 vdisk_A 0 io_grp0 online
2 MDG_Image 36.0GB image
29 ESX_W2k3_IVD 0 io_grp0 online
4 MDG_ESX_VD 70.0GB striped
60050768018301BF2800000000000029 0 1
30 ESX_SLES_IVD 0 io_grp0 online
4 MDG_ESX_VD 60.0GB striped
60050768018301BF280000000000002A 0 1

2. Shut down and suspend all guests that are using the LUNs. You can use the same method
that is described in “Moving VMware guest LUNs” on page 309 to identify the guests that
are using this LUN.
3. Remove the volumes from the host by using the rmvdiskhostmap command, as shown in
Example 6-49. To confirm that the volumes were removed, use the lshostvdiskmap
command, which shows that these volumes are no longer mapped to the ESX server.

Example 6-49 Remove the volumes from the host


IBM_2145:ITSO-CLS1:ITSO_admin> rmvdiskhostmap -host Nile ESX_W2k3_IVD
IBM_2145:ITSO-CLS1:ITSO_admin> rmvdiskhostmap -host Nile ESX_SLES_IVD

4. Remove the volumes from the SVC by using the rmvdisk command, which makes the
MDisks unmanaged, as shown in Example 6-50.


Cached data: When you run the rmvdisk command, the SVC first confirms that there is
no outstanding dirty cached data for the volume that is being removed. If uncommitted
cached data still exists, the command fails with the following error message:
CMMVC6212E The command failed because data in the cache has not been
committed to disk

You must wait for this cached data to be committed to the underlying storage
subsystem before you can remove the volume.

The SVC automatically destages uncommitted cached data 2 minutes after the last
write activity for the volume. How much data exists to destage and how busy the I/O
subsystem is determine how long this command takes to complete.

You can check whether the volume has uncommitted data in the cache by using the
lsvdisk <VDISKNAME> command and checking the fast_write_state attribute. This
attribute has the following meanings:
- empty: No modified data exists in the cache.
- not_empty: Modified data might exist in the cache.
- corrupt: Modified data might have existed in the cache, but that data was lost.

Example 6-50 Remove the volumes from SVC


IBM_2145:ITSO-CLS1:ITSO_admin> rmvdisk ESX_W2k3_IVD
IBM_2145:ITSO-CLS1:ITSO_admin> rmvdisk ESX_SLES_IVD
IBM_2145:ITSO-CLS1:ITSO_admin> lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID

26 ESX_IVD_SLES online unmanaged


120.0GB 000000000000000A DS4700
600a0b800026b282000041f0486e210100000000000000000000000000000000
27 ESX_IVD_W2K3 online unmanaged
100.0GB 000000000000000B DS4700
600a0b800026b282000041e3486e20cf00000000000000000000000000000000

5. By using Storage Manager (our storage subsystem management tool), unmap and
unmask the disks from the SVC back to the ESX server. Remember that in Example 6-48 on
page 318 we recorded the SCSI LUN IDs. To map your LUNs on the storage subsystem,
use the same SCSI LUN IDs that you used in the SVC.

Important: This step is the last step that you can perform and still safely back out of
any changes made so far.

Up to this point, you can reverse everything that you performed and get the server back
online without data loss by completing the following actions:
- Remap and remask the LUNs back to SVC.
- Run the detectmdisk command to rediscover the MDisks.
- Re-create the volumes with the mkvdisk command.
- Remap the volumes back to the server with the mkvdiskhostmap command.

After you start the next step, you might not be able to turn back without the risk of data
loss.


6. By using the VMware Management Console, rescan to discover the new volume.
Figure 6-95 shows the view before the rescan. Figure 6-96 on page 320 shows the view
after the rescan. The size of the LUN changed because we moved to another LUN on
another storage subsystem.

Figure 6-95 Before adapter rescan

Figure 6-96 After adapter rescan


During the rescan, you can receive geometry errors when ESX discovers that the old disk
disappeared. Your volume appears with a new vmhba address and VMware recognizes it
as our VMWARE-GUESTS disk.
We are now ready to restart the VMware guests.
7. To ensure that the MDisks are removed from the SVC, run the detectmdisk command.
The MDisks are discovered as offline and then automatically removed when the SVC
determines that no volumes are associated with these MDisks.

6.9 Migrating AIX SAN disks to SVC volumes


In this section, we describe how to move two LUNs that an AIX server accesses directly
from our DS4000 storage subsystem over to the SVC.

We manage those LUNs with the SVC, move them between other managed disks, and then
move them back to image mode disks so that those LUNs can then be masked and mapped
back to the AIX server directly.

By using this example, you can perform any of the following tasks in your environment:
򐂰 Move an AIX server’s SAN LUNs from a storage subsystem and virtualize those same
LUNs through the SVC, which is the first task that you perform when you are introducing
the SVC into your environment.
This section shows that your host downtime is only a few minutes while you remap and
remask disks by using your storage subsystem LUN management tool. This step starts in
6.9.1, “Preparing SVC to virtualize AIX disks” on page 323.
򐂰 Move data between storage subsystems while your AIX server is still running and
servicing your business application.
You can perform this task if you are removing a storage subsystem from your SAN
environment and you want to move the data onto LUNs that are more appropriate for the
type of data that is stored on those LUNs, considering availability, performance, and
redundancy. This step is described in 6.9.3, “Migrating image mode volumes to volumes”
on page 329.
򐂰 Move your AIX server’s LUNs back to image mode volumes so that they can be remapped
and remasked directly back to the AIX server.
This step starts in 6.9.4, “Preparing to migrate from SVC” on page 332.

Use these tasks individually or together to migrate your AIX server’s LUNs from one storage
subsystem to another storage subsystem by using the SVC as your migration tool. If you do
not use all three tasks, you can introduce or remove the SVC from your environment.

The only downtime that is required for these activities is the time that it takes you to remask
and remap the LUNs between the storage subsystems and your SVC.


Our AIX environment is shown in Figure 6-97.

Figure 6-97 AIX SAN environment (zoning for migration scenarios: the AIX host and the IBM
or OEM storage subsystem connect to the SAN in the Green Zone)

Figure 6-97 also shows that our AIX server is connected to our SAN infrastructure. It has two
LUNs (hdisk3 and hdisk4) that are masked directly to it from our storage subsystem.

The hdisk3 disk makes up the itsoaixvg LVM group, and the hdisk4 disk makes up the
itsoaixvg1 LVM group, as shown in Example 6-51 on page 323.


Example 6-51 AIX SAN configuration


#lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Available 1D-08-02 1814 DS4700 Disk Array Device
hdisk4 Available 1D-08-02 1814 DS4700 Disk Array Device
#lspv
hdisk0 0009cddaea97bf61 rootvg active
hdisk1 0009cdda43c9dfd5 rootvg active
hdisk2 0009cddabaef1d99 rootvg active
hdisk3 0009cdda0a4c0dd5 itsoaixvg active
hdisk4 0009cdda0a4d1a64 itsoaixvg1 active

Our AIX server represents a typical SAN environment in which a host directly uses LUNs that
were created on a SAN storage subsystem, as shown in Figure 6-97 on page 322.

The AIX server’s HBA cards are zoned so that they are in the Green (dotted line) Zone with
our storage subsystem.

The two LUNs, hdisk3 and hdisk4, were defined on the storage subsystem. By using LUN
masking, they are directly available to our AIX server.

6.9.1 Preparing SVC to virtualize AIX disks


This section describes the preparatory tasks that we perform before we take our AIX server
offline. These tasks are all nondisruptive activities and do not affect your SAN fabric or your
existing SVC configuration (if you already have a production SVC in place).
1. Creating a storage pool
When we move the two AIX LUNs to the SVC, they first are used in image mode;
therefore, we must create a storage pool to hold those disks. We must create an empty
storage pool for these disks by using the commands that are shown in Example 6-52. We
name the storage pool, which will hold our LUNs, aix_imgmdg.

Example 6-52 Create an empty storage pool (mdiskgrp)


IBM_2145:ITSO-CLS2:ITSO_admin> mkmdiskgrp -name aix_imgmdg -ext 512
MDisk Group, id [7], successfully created

IBM_2145:ITSO-CLS2:ITSO_admin> lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size
free_capacity virtual_capacity used_capacity real_capacity overallocation

7 aix_imgmdg online 0 0 0
512 0 0.00MB 0.00MB 0.00MB 0

2. Creating our host definition


If you prepared the zones correctly, the SVC can see the AIX server’s HBAs on the fabric.
(Our host has two HBA ports.)
First, we get the WWN for our AIX server’s HBA because we have many hosts that are
connected to our SAN fabric and in the Blue Zone. We want to ensure that we have the
correct WWN to reduce our AIX server’s downtime. Example 6-53 shows the commands
to get the WWN; our host has a WWN of 10000000C932A7FB.


Example 6-53 Discover your WWN


#lsdev -Ccadapter|grep fcs
fcs0 Available 1Z-08 FC Adapter
fcs1 Available 1D-08 FC Adapter
#lscfg -vpl fcs0
fcs0 U0.1-P2-I4/Q1 FC Adapter

Part Number.................00P4494
EC Level....................A
Serial Number...............1E3120A68D
Manufacturer................001E
Device Specific.(CC)........2765
FRU Number.................. 00P4495
Network Address.............10000000C932A7FB
ROS Level and ID............02C03951
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........03000909
Device Specific.(Z4)........FF401210
Device Specific.(Z5)........02C03951
Device Specific.(Z6)........06433951
Device Specific.(Z7)........07433951
Device Specific.(Z8)........20000000C932A7FB
Device Specific.(Z9)........CS3.91A1
Device Specific.(ZA)........C1D3.91A1
Device Specific.(ZB)........C2D3.91A1
Device Specific.(YL)........U0.1-P2-I4/Q1

PLATFORM SPECIFIC
Name: fibre-channel
Model: LP9002
Node: fibre-channel@1
Device Type: fcp
Physical Location: U0.1-P2-I4/Q1
#lscfg -vpl fcs1
fcs1 U0.1-P2-I5/Q1 FC Adapter

Part Number.................00P4494
EC Level....................A
Serial Number...............1E3120A67B
Manufacturer................001E
Device Specific.(CC)........2765
FRU Number.................. 00P4495
Network Address.............10000000C932A800
ROS Level and ID............02C03891
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........02000909
Device Specific.(Z4)........FF401050
Device Specific.(Z5)........02C03891
Device Specific.(Z6)........06433891
Device Specific.(Z7)........07433891
Device Specific.(Z8)........20000000C932A800
Device Specific.(Z9)........CS3.82A1
Device Specific.(ZA)........C1D3.82A1
Device Specific.(ZB)........C2D3.82A1
Device Specific.(YL)........U0.1-P2-I5/Q1


PLATFORM SPECIFIC
Name: fibre-channel
Model: LP9000
Node: fibre-channel@1
Device Type: fcp
Physical Location: U0.1-P2-I5/Q1

The lshbaportcandidate command on the SVC lists all of the WWNs that the SVC can see
on the SAN fabric but that are not yet allocated to a host. Example 6-54 shows the
output of the host WWNs that it found in our SAN fabric. (If the port is not shown, a zone
configuration problem exists.)

Example 6-54 Available host WWNs

IBM_2145:ITSO-CLS2:ITSO_admin> lshbaportcandidate
id
10000000C932A7FB
10000000C932A800
210000E08B89B8C0

After we verify that the SVC can see our host (Kanaga), we create the host entry and
assign the WWN to this entry, as shown with the commands in Example 6-55.

Example 6-55 Create the host entry


IBM_2145:ITSO-CLS2:ITSO_admin> mkhost -name Kanaga -hbawwpn
10000000C932A7FB:10000000C932A800
Host, id [5], successfully created
IBM_2145:ITSO-CLS2:ITSO_admin> lshost Kanaga
id 5
name Kanaga
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 10000000C932A800
node_logged_in_count 2
state inactive
WWPN 10000000C932A7FB
node_logged_in_count 2
state inactive

3. Verifying that we can see our storage subsystem


If we performed the zoning correctly, the SVC can see the storage subsystem with the
lscontroller command, as shown in Example 6-56.

Example 6-56 Discover the storage controller


IBM_2145:ITSO-CLS2:admin> lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0 DS4500 IBM 1742-900
1 DS4700 IBM 1814
IBM_2145:ITSO-CLS2:admin>


Names: The chcontroller command enables you to change the discovered storage
subsystem name in the SVC. In complex SANs, we suggest that you rename your
storage subsystem to a more meaningful name.

4. Getting the disk serial numbers


To help avoid the risk of creating the wrong volumes from all of the available, unmanaged
MDisks (if many available unmanaged MDisks are seen by the SVC), we obtain the LUN
serial numbers from our storage subsystem administration tool (Storage Manager).
When we discover these MDisks, we confirm that we have the correct serial numbers
before we create the image mode volumes.
If you also use a DS4000 family storage subsystem, Storage Manager provides the LUN
serial numbers. Right-click your logical drive and choose Properties. Figure 6-98 on
page 326 shows disk serial number kanage_lun0.

Figure 6-98 Obtaining disk serial number kanage_lun0

Figure 6-99 shows disk serial number kanga_Lun1.


Figure 6-99 Obtaining disk serial number kanga_Lun1

We are ready to move the ownership of the disks to the SVC, discover them as MDisks,
and give them back to the host as volumes.

6.9.2 Moving LUNs to SVC


In this step, we move all LUNs that are assigned to the AIX server and reassign them to our
SVC.

Because we want to move only the LUN that holds our application and data files, we move
that LUN without rebooting the host. The only requirement is that we unmount the file system
and vary off the VG to ensure data integrity after the reassignment.

Before you start: Moving LUNs to the SVC requires that the Subsystem Device Driver
(SDD) is installed on the AIX server. You can install the SDD in advance; however, the
installation might require an outage of your host.

Complete the following steps to move both LUNs at the same time:
1. Confirm that the SDD is installed.
2. Complete the following steps to unmount and vary off the VGs:
a. Stop the applications that are using the LUNs.
b. Unmount those file systems by using the umount MOUNT_POINT command.
c. If the file systems are an LVM volume, deactivate that VG by using the varyoffvg
VOLUMEGROUP_NAME command.
Example 6-57 shows the commands that were run on Kanaga.


Example 6-57 AIX command sequence


#varyoffvg itsoaixvg
#varyoffvg itsoaixvg1
#lsvg
rootvg
itsoaixvg
itsoaixvg1
#lsvg -o
rootvg

3. By using Storage Manager (our storage subsystem management tool), the disks can be
unmapped and unmasked from the AIX server and remapped and remasked as disks of
the SVC.
4. From the SVC, discover the new disks by using the detectmdisk command. The disks are
discovered and named mdiskN, where N is the next available MDisk number (starting from
0). Example 6-58 shows the commands that were used to discover our MDisks and to
verify that the correct MDisks are available.

Example 6-58 Discover the new MDisks


IBM_2145:ITSO-CLS2:ITSO_admin> detectmdisk
IBM_2145:ITSO-CLS2:ITSO_admin> lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID

24 mdisk24 online unmanaged


5.0GB 0000000000000008 DS4700
600a0b800026b282000043224874f41900000000000000000000000000000000
25 mdisk25 online unmanaged
8.0GB 0000000000000009 DS4700
600a0b800026b2820000432f4874f57c00000000000000000000000000000000

Important: Match your discovered MDisk serial numbers (the UID column in the lsmdisk
command output) with the serial numbers that you recorded earlier, as shown in
Figure 6-98 on page 326 and Figure 6-99 on page 327.

5. After you verify that the correct MDisks are available, rename them to avoid confusion in
the future when you perform other MDisk-related tasks, as shown in Example 6-59.

Example 6-59 Rename the MDisks


IBM_2145:ITSO-CLS2:ITSO_admin> chmdisk -name Kanaga_AIX mdisk24
IBM_2145:ITSO-CLS2:ITSO_admin> chmdisk -name Kanaga_AIX1 mdisk25

IBM_2145:ITSO-CLS2:ITSO_admin> lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
24 Kanaga_AIX online unmanaged 5.0GB 0000000000000008 DS4700 600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 online unmanaged 8.0GB 0000000000000009 DS4700 600a0b800026b2820000432f4874f57c00000000000000000000000000000000

6. Create the image mode volumes by using the mkvdisk command and the option -vtype
image, as shown in Example 6-60. This command virtualizes the disks in the same layout
as though they were not virtualized.

Example 6-60 Create the image mode volumes


IBM_2145:ITSO-CLS2:ITSO_admin> mkvdisk -mdiskgrp aix_imgmdg -iogrp 0 -vtype
image -mdisk Kanaga_AIX -name IVD_Kanaga
Virtual Disk, id [8], successfully created
IBM_2145:ITSO-CLS2:ITSO_admin> mkvdisk -mdiskgrp aix_imgmdg -iogrp 0 -vtype
image -mdisk Kanaga_AIX1 -name IVD_Kanaga1
Virtual Disk, id [9], successfully created

7. Map the new image mode volumes to the host, as shown in Example 6-61.

Example 6-61 Map the volumes to the host


IBM_2145:ITSO-CLS2:ITSO_admin> mkvdiskhostmap -host Kanaga IVD_Kanaga
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO-CLS2:ITSO_admin> mkvdiskhostmap -host Kanaga IVD_Kanaga1
Virtual Disk to Host map, id [1], successfully created

FlashCopy: While the application is in a quiescent state, you can choose to use
FlashCopy to copy the new image volumes onto other volumes. You do not need to wait
until the FlashCopy process completes before you start your application.

Complete the following steps to put the image mode volumes online:
1. Remove the old disk definitions, if you have not done so already.
2. Run the cfgmgr -vs command to rediscover the available LUNs.
3. If your application and data are on an LVM volume, rediscover the VG. Then, run the
varyonvg VOLUME_GROUP command to activate the VG.
4. Mount your file systems with the mount /MOUNT_POINT command.
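
For reference, the complete sequence on our host (Kanaga) might look like the following
sketch; the mount points are hypothetical and depend on how your file systems were
defined, and the volume group names are the ones that are used in this example:

#cfgmgr -vs
#lspv
#varyonvg itsoaixvg
#varyonvg itsoaixvg1
#mount /itsoaixfs
#mount /itsoaixfs1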

You are ready to start your application.

6.9.3 Migrating image mode volumes to volumes


While the AIX server is still running and our file systems are in use, we migrate the image
mode volumes onto striped volumes, spreading the extents over three other MDisks.

Preparing MDisks for striped mode volumes


From our storage subsystem, we performed the following tasks:
򐂰 Created and allocated three LUNs to the SVC
򐂰 Discovered them as MDisks
򐂰 Renamed these LUNs to more meaningful names
򐂰 Created a storage pool
򐂰 Put all of these MDisks into this storage pool

You can see the output of our commands in Example 6-62.


Example 6-62 Create a storage pool


IBM_2145:ITSO-CLS2:ITSO_admin> mkmdiskgrp -name aix_vd -ext 512
IBM_2145:ITSO-CLS2:ITSO_admin> detectmdisk

IBM_2145:ITSO-CLS2:ITSO_admin> lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
24 Kanaga_AIX online image 7
aix_imgmdg 5.0GB 0000000000000008 DS4700
600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 online image 7
aix_imgmdg 8.0GB 0000000000000009 DS4700
600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 mdisk26 online unmanaged
6.0GB 000000000000000A DS4700
600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 mdisk27 online unmanaged
6.0GB 000000000000000B DS4700
600a0b800026b2820000438448751da900000000000000000000000000000000
28 mdisk28 online unmanaged
6.0GB 000000000000000C DS4700
600a0b800026b2820000439048751dc200000000000000000000000000000000

IBM_2145:ITSO-CLS2:ITSO_admin> chmdisk -name aix_vd0 mdisk26


IBM_2145:ITSO-CLS2:ITSO_admin> chmdisk -name aix_vd1 mdisk27
IBM_2145:ITSO-CLS2:ITSO_admin> chmdisk -name aix_vd2 mdisk28
IBM_2145:ITSO-CLS2:ITSO_admin> addmdisk -mdisk aix_vd0 aix_vd
IBM_2145:ITSO-CLS2:ITSO_admin> addmdisk -mdisk aix_vd1 aix_vd
IBM_2145:ITSO-CLS2:ITSO_admin> addmdisk -mdisk aix_vd2 aix_vd

IBM_2145:ITSO-CLS2:ITSO_admin> lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
24 Kanaga_AIX online image 7
aix_imgmdg 5.0GB 0000000000000008 DS4700
600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 online image 7
aix_imgmdg 8.0GB 0000000000000009 DS4700
600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 aix_vd0 online managed 6
aix_vd 6.0GB 000000000000000A DS4700
600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 aix_vd1 online managed 6
aix_vd 6.0GB 000000000000000B DS4700
600a0b800026b2820000438448751da900000000000000000000000000000000
28 aix_vd2 online managed 6
aix_vd 6.0GB 000000000000000C DS4700
600a0b800026b2820000439048751dc200000000000000000000000000000000

Migrating volumes
We are ready to migrate the image mode volumes onto striped volumes by using the
migratevdisk command, as shown in Example 6-22 on page 297.

While the migration is running, our AIX server is still running and we can continue accessing
the files.


To check the overall progress of the migration, we use the lsmigrate command, as shown in
Example 6-63. Listing the storage pool by using the lsmdiskgrp command shows that the free
capacity on the old storage pool is slowly increasing while those extents are moved to the
new storage pool.

Example 6-63 Migrating image mode volumes to striped volumes


IBM_2145:ITSO-CLS2:ITSO_admin> migratevdisk -vdisk IVD_Kanaga -mdiskgrp aix_vd
IBM_2145:ITSO-CLS2:ITSO_admin> migratevdisk -vdisk IVD_Kanaga1 -mdiskgrp aix_vd

IBM_2145:ITSO-CLS2:ITSO_admin> lsmigrate
migrate_type MDisk_Group_Migration
progress 10
migrate_source_vdisk_index 8
migrate_target_mdisk_grp 6
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 9
migrate_target_mdisk_grp 6
max_thread_count 4
migrate_source_vdisk_copy_id 0

After this task is complete, the volumes are spread over three MDisks in the aix_vd storage
pool, as shown in Example 6-64. The old storage pool is empty.

Example 6-64 Migration complete


IBM_2145:ITSO-CLS2:ITSO_admin> lsmdiskgrp aix_vd
id 6
name aix_vd
status online
mdisk_count 3
vdisk_count 2
capacity 18.0GB
extent_size 512
free_capacity 5.0GB
virtual_capacity 13.00GB
used_capacity 13.00GB
real_capacity 13.00GB
overallocation 72
warning 0

IBM_2145:ITSO-CLS2:ITSO_admin> lsmdiskgrp aix_imgmdg


id 7
name aix_imgmdg
status online
mdisk_count 2
vdisk_count 0
capacity 13.0GB
extent_size 512
free_capacity 13.0GB
virtual_capacity 0.00MB
used_capacity 0.00MB
real_capacity 0.00MB
overallocation 0
warning 0


Our migration to SVC is complete. You can remove the original MDisks from SVC and you
can remove these LUNs from the storage subsystem.

If these LUNs were the last LUNs that were used on our storage subsystem, we can also
remove the storage subsystem from our SAN fabric.

6.9.4 Preparing to migrate from SVC


Before we change the AIX server’s LUNs from being accessed by SVC as volumes to being
accessed directly from the storage subsystem, we must convert the volumes into image mode
volumes.

You can perform this task for one of the following reasons:
򐂰 You purchased a new storage subsystem and you were using SVC as a tool to migrate
from your old storage subsystem to this new storage subsystem.
򐂰 You used the SVC to FlashCopy or Metro Mirror a volume onto another volume and you no
longer need that host that is connected to SVC.
򐂰 You want to move a host, which is connected to SVC, and its data to a site where no SVC
exists.
򐂰 Changes to your environment no longer require this host to use SVC.

Other preparatory tasks need to be performed before we shut down the host and reconfigure
the LUN masking and mapping. This section describes those tasks.

If you are moving the data to a new storage subsystem, it is assumed that this storage
subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches.
Your environment must look similar to our environment, as shown in Figure 6-100.

Figure 6-100 Environment with the SVC


Making fabric zone changes


The first step is to set up the SAN configuration so that all of the zones are created. Add the
new storage subsystem to the Red Zone so that SVC can communicate with it directly.

Create a Green Zone for our host to use when we are ready for it to access the disk directly
after it is removed from the SVC (it is assumed that you created the necessary zones). After
your zone configuration is set up correctly, SVC sees the new storage subsystem’s controller
by using the lscontroller command, as shown in Example 6-65 on page 333. It is also
useful to rename the controller to a more meaningful name by using the chcontroller -name
command.

Example 6-65 Discovering the new storage subsystem


IBM_2145:ITSO-CLS2:ITSO_admin> lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0 DS4500 IBM 1742-900
1 DS4700 IBM 1814 FAStT

Creating LUNs
On our storage subsystem, we created two LUNs and masked them so that the SVC can see
them. We eventually give these LUNs directly to the host and remove the volumes that the
host is using. To check that the SVC can use the LUNs, run the detectmdisk command, as
shown in Example 6-66.

In our example, we use two 10 GB LUNs that are on the DS4500 subsystem. Therefore, we
migrate back to image mode volumes and move to another subsystem at the same time. We
deleted the old LUNs on the DS4700 storage subsystem, which is why they appear offline
here.

Example 6-66 Discover the new MDisks


IBM_2145:ITSO-CLS2:ITSO_admin> detectmdisk
IBM_2145:ITSO-CLS2:ITSO_admin> lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
24 Kanaga_AIX offline managed 7
aix_imgmdg 5.0GB 0000000000000008 DS4700
600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 offline managed 7
aix_imgmdg 8.0GB 0000000000000009 DS4700
600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 aix_vd0 online managed 6
aix_vd 6.0GB 000000000000000A DS4700
600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 aix_vd1 online managed 6
aix_vd 6.0GB 000000000000000B DS4700
600a0b800026b2820000438448751da900000000000000000000000000000000
28 aix_vd2 online managed 6
aix_vd 6.0GB 000000000000000C DS4700
600a0b800026b2820000439048751dc200000000000000000000000000000000
29 mdisk29 online unmanaged
10.0GB 0000000000000010 DS4500
600a0b8000174233000000b84876512f00000000000000000000000000000000
30 mdisk30 online unmanaged
10.0GB 0000000000000011 DS4500
600a0b80001744310000010e4876444600000000000000000000000000000000


Although the MDisks do not stay in the SVC long, we suggest that you rename them to more
meaningful names so that they are not confused with other MDisks that are used by other
activities. Also, we create the storage pools to hold our new MDisks, as shown in
Example 6-67 on page 334.

Example 6-67 Rename the MDisks


IBM_2145:ITSO-CLS2:ITSO_admin> chmdisk -name AIX_MIG mdisk29
IBM_2145:ITSO-CLS2:ITSO_admin> chmdisk -name AIX_MIG1 mdisk30
IBM_2145:ITSO-CLS2:ITSO_admin> mkmdiskgrp -name KANAGA_AIXMIG -ext 512
MDisk Group, id [3], successfully created

IBM_2145:ITSO-CLS2:ITSO_admin> lsmdiskgrp
id name status mdisk_count vdisk_count
capacity extent_size free_capacity virtual_capacity used_capacity
real_capacity overallocation warning
3 KANAGA_AIXMIG online 0 0 0
512 0 0.00MB 0.00MB 0.00MB 0
0
6 aix_vd online 3 2
18.0GB 512 5.0GB 13.00GB 13.00GB
13.00GB 72 0
7 aix_imgmdg offline 2 0
13.0GB 512 13.0GB 0.00MB 0.00MB
0.00MB 0 0

Now, our SVC environment is ready for the volume migration to image mode volumes.

6.9.5 Migrating the managed volumes


While our AIX server is still running, we migrate the managed volumes onto the new MDisks
by using image mode volumes by using the migratetoimage command, which is shown in
Example 6-68.

Example 6-68 Migrate the volumes to image mode volumes


IBM_2145:ITSO-CLS2:ITSO_admin> migratetoimage -vdisk IVD_Kanaga -mdisk AIX_MIG -mdiskgrp
KANAGA_AIXMIG
IBM_2145:ITSO-CLS2:ITSO_admin> migratetoimage -vdisk IVD_Kanaga1 -mdisk AIX_MIG1 -mdiskgrp
KANAGA_AIXMIG
IBM_2145:ITSO-CLS2:ITSO_admin> lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
24 Kanaga_AIX offline managed 7
aix_imgmdg 5.0GB 0000000000000008 DS4700
600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 offline managed 7
aix_imgmdg 8.0GB 0000000000000009 DS4700
600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 aix_vd0 online managed 6
aix_vd 6.0GB 000000000000000A DS4700
600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 aix_vd1 online managed 6
aix_vd 6.0GB 000000000000000B DS4700
600a0b800026b2820000438448751da900000000000000000000000000000000


28 aix_vd2 online managed 6


aix_vd 6.0GB 000000000000000C DS4700
600a0b800026b2820000439048751dc200000000000000000000000000000000
29 AIX_MIG online image 3
KANAGA_AIXMIG 10.0GB 0000000000000010 DS4500
600a0b8000174233000000b84876512f00000000000000000000000000000000
30 AIX_MIG1 online image 3
KANAGA_AIXMIG 10.0GB 0000000000000011 DS4500
600a0b80001744310000010e4876444600000000000000000000000000000000

IBM_2145:ITSO-CLS2:ITSO_admin> lsmigrate
migrate_type Migrate_to_Image
progress 50
migrate_source_vdisk_index 9
migrate_target_mdisk_index 30
migrate_target_mdisk_grp 3
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type Migrate_to_Image
progress 50
migrate_source_vdisk_index 8
migrate_target_mdisk_index 29
migrate_target_mdisk_grp 3
max_thread_count 4
migrate_source_vdisk_copy_id 0

During the migration, our AIX server is unaware that its data is being moved physically
between storage subsystems.

After the migration is complete, the image mode volumes are ready to be removed from the
AIX server, and the real LUNs can be mapped and masked directly to the host by using the
storage subsystem's management tool.

6.9.6 Removing LUNs from SVC


The next step requires downtime while we remap and remask the disks so that the host sees
them directly through the Green Zone.

Because our LUNs hold data files only and we use a unique VG, we can remap and remask
the disks without rebooting the host. The only requirement is that we unmount the file system
and vary off the VG to ensure data integrity after the reassignment.

Before you start: Moving LUNs to another storage system might require a driver other than
the SDD. Check with the storage subsystem's vendor to see which driver you need. You
might be able to install this driver in advance.

Complete the following steps to remove the SVC:


1. Confirm that the correct device driver for the new storage subsystem is loaded. Because
we are moving to a DS4500, we can continue to use the SDD.
2. Complete the following steps to shut down any applications and unmount the file systems:
a. Stop the applications that are using the LUNs.
b. Unmount those file systems by using the umount MOUNT_POINT command.
c. If the file systems are an LVM volume, deactivate that VG by using the varyoffvg
VOLUMEGROUP_NAME command.


3. Remove the volumes from the host by using the rmvdiskhostmap command, as shown in
Example 6-69. To confirm that you removed the volumes, use the lshostvdiskmap
command, which shows that these disks are no longer mapped to the AIX server.

Example 6-69 Remove the volumes from the host


IBM_2145:ITSO-CLS2:ITSO_admin> rmvdiskhostmap -host Kanaga IVD_Kanaga
IBM_2145:ITSO-CLS2:ITSO_admin> rmvdiskhostmap -host Kanaga IVD_Kanaga1
IBM_2145:ITSO-CLS2:ITSO_admin> lshostvdiskmap Kanaga

4. Remove the volumes from SVC by using the rmvdisk command, which makes the MDisks
unmanaged, as shown in Example 6-70.

Cached data: When you run the rmvdisk command, SVC first confirms that there is no
outstanding dirty cached data for the volume that is being removed. If uncommitted
cached data still exists, the command fails with the following error message:
CMMVC6212E The command failed because data in the cache has not been
committed to disk

You must wait for this cached data to be committed to the underlying storage
subsystem before you can remove the volume.

The SVC automatically destages uncommitted cached data 2 minutes after the last
write activity for the volume. The time that this command takes to complete depends on
the amount of data to destage and on how busy the I/O subsystem is.

You can check whether the volume has uncommitted data in the cache by using the
lsvdisk <VDISKNAME> command and checking the fast_write_state attribute. This
attribute has the following meanings:
򐂰 empty: No modified data exists in the cache.
򐂰 not_empty: Modified data might exist in the cache.
򐂰 corrupt: Modified data might exist in the cache, but any modified data was lost.

Example 6-70 Remove the volumes from SVC


IBM_2145:ITSO-CLS2:ITSO_admin> rmvdisk IVD_Kanaga
IBM_2145:ITSO-CLS2:ITSO_admin> rmvdisk IVD_Kanaga1

IBM_2145:ITSO-CLS2:ITSO_admin> lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
29 AIX_MIG online unmanaged
10.0GB 0000000000000010 DS4500
600a0b8000174233000000b84876512f00000000000000000000000000000000
30 AIX_MIG1 online unmanaged
10.0GB 0000000000000011 DS4500
600a0b80001744310000010e4876444600000000000000000000000000000000


5. By using Storage Manager (our storage subsystem management tool), unmap and
unmask the disks from the SVC back to the AIX server.

Important: This step is the last step that you can perform and still safely back out of
any changes that you made.

Up to this point, you can reverse all of the following actions that you performed so far to
get the server back online without data loss:
򐂰 Remap and remask the LUNs back to the SVC.
򐂰 Run the detectmdisk command to rediscover the MDisks.
򐂰 Re-create the volumes with the mkvdisk command.
򐂰 Remap the volumes back to the server with the mkvdiskhostmap command.

After you start the next step, you might not be able to turn back without the risk of data
loss.

We are ready to access the LUNs from the AIX server. If all of the zoning, LUN masking, and
mapping were successful, our AIX server boots as though nothing happened. Complete the
following steps:
1. Run the cfgmgr -S command to discover the storage subsystem.
2. Use the lsdev -Cc disk command to verify the discovery of the new disk.
3. Remove the references to all of the old disks. Example 6-71 shows the removal when
SDDPCM or MPIO is used.

Example 6-71 Remove references to old paths


# lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Defined 1D-08-02 MPIO FC 2145
hdisk4 Defined 1D-08-02 MPIO FC 2145
hdisk5 Available 1D-08-02 MPIO FC 2145

# for i in 3 4; do rmdev -dl hdisk$i -R;done


hdisk3 deleted
hdisk4 deleted

# lsdev -Cc disk


hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk5 Available 1D-08-02 MPIO FC 2145

4. If your application and data are on an LVM volume, rediscover the VG. Then, run the
varyonvg VOLUME_GROUP command to activate the VG.
5. Mount your file systems by using the mount /MOUNT_POINT command.
You are ready to start your application.
6. To ensure that the MDisks are removed from the SVC, run the detectmdisk command.
The MDisks are first discovered as offline. Then, they are removed automatically after the
SVC determines that no volumes are associated with these MDisks.


6.10 Using SVC for storage migration


The primary use of SVC is not as a storage migration tool. However, the advanced
capabilities of SVC enable us to use SVC as a storage migration tool. Therefore, you can add
SVC temporarily to your SAN environment to copy the data from one storage subsystem to
another storage subsystem. SVC enables you to copy image mode volumes directly from one
subsystem to another subsystem while host I/O is running. The only required downtime is
when the SVC is added to and removed from your SAN environment.

To use SVC for migration purposes only, complete the following steps:
1. Add SVC to your SAN environment.
2. Prepare SVC.
3. Depending on your operating system, unmount the selected LUNs or shut down the host.
4. Add SVC between your storage and the host.
5. Mount the LUNs or start the host again.
6. Start the migration.
7. After the migration process is complete, unmount the selected LUNs or shut down the
host.
8. Remove SVC from your SAN.
9. Mount the LUNs or start the host again.

The migration is complete.

As you can see, little downtime is required. If you prepare everything correctly, you can
reduce your downtime to a few minutes. The copy process is handled by SVC, so the host
does not hinder the performance while the migration progresses.

To use SVC for storage migrations, complete the steps that are described in the following
sections:
򐂰 6.6.2, “Adding SVC between the host system and DS3400” on page 261
򐂰 6.6.6, “Migrating volume from image mode to image mode” on page 279
򐂰 6.6.7, “Removing image mode data from SVC” on page 284

6.11 Migrating volumes between pools using CLI


You can migrate volumes between pools using the command-line interface (CLI).

About this task


You can determine the usage of particular MDisks by gathering input/output (I/O) statistics
about nodes, MDisks, and volumes. After you collect this data, you can analyze it to
determine which volumes or MDisks are hot. You can then migrate volumes from one storage
pool to another.

Complete the following step to gather statistics about MDisks and volumes:
1. Use secure copy (the scp command) to retrieve the dump files for analysis. For example,
issue the following command (the trailing period specifies the current directory as the
destination):
scp clusterip:/dumps/iostats/v_* .


This command copies all the volume statistics files to the AIX host in the current directory.
2. Analyze the I/O statistics dumps to determine which volumes are hot. It might be helpful to
also determine which MDisks are being used heavily because you can spread the data that
they contain more evenly across all the MDisks in the storage pool by migrating extents.
3. After you analyze the I/O statistics data, you can determine which volumes are hot. You
also need to determine the storage pool that you want to move this volume to. Either
create a new storage pool or determine an existing group that is not yet overly used.
Check the I/O statistics files that you generated and then ensure that the MDisks or
volumes in the target storage pool are used less than the MDisks or volumes in the source
storage pool.

You can use data migration or volume mirroring to migrate data between storage pools:
򐂰 Data migration uses the command migratevdisk.
򐂰 Volume mirroring uses the commands addvdiskcopy and rmvdiskcopy.

6.11.1 Migrating data using migratevdisk


You can use the migratevdisk command to migrate data between two storage pools. When
you issue the migratevdisk command, a check is made to ensure that the destination of the
migration has enough free extents to satisfy the command. If it does, the command proceeds.
The command takes some time to complete.

Note 1: The following migratevdisk migration options are valid for a cluster with
unencrypted pools:
򐂰 Child pool to its parent pool
򐂰 Parent pool to one of its child pools
򐂰 Between the child pools in the same parent pool
򐂰 Between two parent pools

Note 2: The following migratevdisk migration options are valid for a cluster with
encrypted pools:
򐂰 A parent pool to parent pool migration is allowed in all cases.
򐂰 A parent pool to child pool migration is not allowed if the child pool has an encryption key.
򐂰 A child pool to parent pool or child pool migration is not allowed if either child pool has an
encryption key.

򐂰 You cannot use the data migration function to move a volume between storage pools that
have different extent sizes.
򐂰 Migration commands fail if the target or source volume is offline, there is no quorum disk
defined, or the defined quorum disks are unavailable. Correct the offline or quorum disk
condition and reissue the command.
򐂰 The system supports migrating volumes between child pools within the same parent pool
or migrating a volume in a child pool to its parent pool. Migration of volumes fail if source
and target child pools have different parent pools. However, you can use addvdiskcopy
and rmvdiskcopy commands to migrate volumes between child pools in different parent
pools.
򐂰 When you use data migration, it is possible for the free destination extents to be consumed
by another process; for example, if a new volume is created in the destination parent pool
or if more migration commands are started. In this scenario, after all the destination
extents are allocated, the migration commands suspend and an error is logged (error ID
020005). To recover from this situation, use either of the following methods:


򐂰 Add more MDisks to the target parent pool, which provides more extents in the group and
allows the migrations to be restarted. You must mark the error as fixed before you
reattempt the migration.
򐂰 Migrate one or more volumes that are already created from the parent pool to another
group. This action frees up extents in the group and allows the original migrations to be
restarted.

Complete the following steps to use the migratevdisk command to migrate volumes between
storage pools:

After you determine the volume that you want to migrate and the new storage pool you want
to migrate it to, issue the following CLI command:

migratevdisk -vdisk vdiskname/ID -mdiskgrp newmdiskgrname/ID -threads 4

You can check the progress of the migration by issuing the following CLI command:

lsmigrate
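
As a minimal illustrative sequence with hypothetical names (a volume named vdisk_hot and
a target pool named Pool_Target), the two commands might be issued as follows:

migratevdisk -vdisk vdisk_hot -mdiskgrp Pool_Target -threads 4
lsmigrate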

6.11.2 Migrating data using volume mirroring


When you use data migration, the volume goes offline if either pool fails. Volume mirroring
can be used to minimize the impact to the volume because the volume goes offline only if the
source pool fails. You can migrate volumes between child pools, or from a child pool to a
parent pool, by using the addvdiskcopy and rmvdiskcopy commands instead of the
migratevdisk command. Complete the following steps to use volume mirroring to migrate
volumes between pools:

After you determine the volume that you want to migrate and the new pool that you want to
migrate it to, enter the following command:

SVC> addvdiskcopy -mdiskgrp newmdiskgrname/ID vdiskname/ID

The copy ID of the new copy is returned. The copies now synchronize such that the data is
stored in both storage pools. You can check the progress of the synchronization by issuing the
following command:

lsvdisksyncprogress

After the synchronization is complete, remove the copy from the original storage pool to free
up extents and decrease the utilization of that pool. To remove the original copy, issue the
following command:

rmvdiskcopy -copy original_copy_id vdiskname/ID
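
Putting these commands together, a hypothetical migration of a volume named vdisk_hot
into a pool named Pool_Target might look like the following sketch (addvdiskcopy returns
the ID of the new copy; copy 0 is assumed to be the original copy):

addvdiskcopy -mdiskgrp Pool_Target vdisk_hot
lsvdisksyncprogress vdisk_hot
rmvdiskcopy -copy 0 vdisk_hot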

6.11.3 Migrating extents using the CLI


To improve performance, you can migrate extents using the command-line interface (CLI).

About this task


The system provides various data migration features. These features can be used to move
the placement of data within parent pools and between parent pools. These features can be
used concurrently with I/O operations. You can use either of these methods to migrate data:


򐂰 Migrating data (extents) from one MDisk to another MDisk within the same parent pool.
This method can be used to remove highly used MDisks.
򐂰 Migrating volumes from one parent pool to another. This method can be used to remove
highly used parent pools. For example, you can reduce the use of a pool of MDisks. Child
pools, which receive their capacity from parent pools, cannot have extents migrated to
them.

Note that:
򐂰 The source MDisk must not currently be the source MDisk for any other migrate extents
operation.
򐂰 The destination MDisk must not be the destination MDisk for any other migrate extents
operation.
򐂰 Migration commands fail if the target or source volume is offline, there is no quorum disk
defined, or the defined quorum disks are unavailable. Correct the offline or quorum disk
condition and reissue the command.

You can determine the use of particular MDisks by gathering input/output (I/O) statistics about
nodes, MDisks, and volumes. After you collect this data, you can analyze it to determine
which MDisks are used frequently. The procedure then takes you through querying and
migrating extents to different locations in the same parent pool. This procedure can be
completed only by using the command-line interface.

If performance monitoring tools indicate that an MDisk in the pool is being overused, you can
migrate data to other MDisks within the same parent pool.

Follow the procedure:


1. Determine the number of extents that are in use by each volume for the MDisk by issuing
this CLI command:
lsmdiskextent mdiskname
2. This command returns the number of extents that each volume is using on the MDisk.
Select some of these extents to migrate within the pool.
3. Determine the other MDisks that are in the same parent pool.
To determine the parent pool that the MDisk belongs to, issue this CLI command:
lsmdisk mdiskname | ID
List the MDisks in the pool by issuing this CLI command:
lsmdisk -filtervalue mdisk_grp_name=mdiskgrpname
4. Select one of these MDisks as the target MDisk for the extents. You can determine how
many free extents exist on an MDisk by issuing this CLI command:
lsfreeextents mdiskname
You can issue the lsmdiskextent newmdiskname command for each of the target MDisks
to ensure that you are not just moving the over-utilization to another MDisk. Check that the
volume that owns the set of extents to be moved does not already own a large set of
extents on the target MDisk.
5. For each set of extents, issue this CLI command to move them to another MDisk:
migrateexts -source mdiskname | ID -exts num_extents
-target newmdiskname | ID -threads 4 -vdisk vdiskid
where num_extents is the number of extents on the vdiskid. The newmdiskname | ID value
is the name or ID of the MDisk to migrate this set of extents to.


-threads indicates the priority of the migration processing, where 1 is the lowest priority
and 4 is the highest priority.
6. Repeat the previous steps for each set of extents that you are moving.
7. You can check the progress of the migration by issuing this CLI command:
lsmigrate
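
A condensed, hypothetical walk-through of this procedure follows; the MDisk names, pool
name, extent count, and volume ID are assumptions for illustration only:

lsmdiskextent mdisk_busy
lsmdisk mdisk_busy
lsmdisk -filtervalue mdisk_grp_name=Pool_Source
lsfreeextents mdisk_quiet
migrateexts -source mdisk_busy -exts 64 -target mdisk_quiet -threads 4 -vdisk 8
lsmigrate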


6.12 Migrating volumes to an encrypted pool


For systems with encryption enabled, you can migrate existing volumes from non-encrypted
pools to encrypted pools. Both the management GUI and the command-line interface can be
used to migrate volumes to encrypted pools.

During system setup in the management GUI, you can activate and enable encryption
licenses. The management GUI automatically displays any nodes that support encryption.
The license can either be automatically or manually activated and then enabled for the
system and the supported nodes. Any pools that are created after encryption is enabled are
assigned a key that can be used to encrypt and decrypt data.

However, if encryption was configured after volumes were already assigned to non-encrypted
pools, you can migrate those volumes to an encrypted pool by using child pools.

When you create a child pool after encryption is enabled, an encryption key is created for the
child pool even when the parent pool is not encrypted. You can then use volume mirroring to
migrate the volumes from the non-encrypted parent pool to the encrypted child pool.

6.12.1 Using Add Volume Copy


To migrate volumes using the Add Volume Copy function in the management GUI, complete
these steps:
1. In the management GUI, select Pools > Pools.
2. Right-click the non-encrypted parent pool that contains the volumes that you want to
migrate and select Create Child Pool.
3. On the Create Child Pool page, enter the name for the child pool and the amount of
capacity. Ensure that you select enough capacity to accommodate the migrated volumes.
Encryption is selected by default when the system is enabled for encryption.
4. Click Create. After the child pool is created, you can migrate the volumes to the child pool
by adding volume copies.
5. In the management GUI, select Volumes > Volumes by Pools.
6. Select the non-encrypted parent pool to display all the volumes.
7. Right-click the volume and select Add Volume Copy...(Figure 6-101 on page 344).


Figure 6-101 Add Volume Copy

8. On the Add Volume Copy page, select Basic for the type of copy that you are creating.
From the list of available pools, select the child pool as the target pool for the copy of the
volume (Figure 6-102 on page 345).


Figure 6-102 Adding Volume Copy to encrypted child pool

9. Click Add (Figure 6-103).

Figure 6-103 task Complete - volume copy added


10.Repeat these steps to add volume copies to the encrypted child pool for the remaining
volumes in the parent pool.
11.After all the copies are synchronized in the encrypted child pool, you can delete all the
primary copies from the parent pool. The empty parent pool must remain unused to use
encrypted volumes in the child pool.

The same operation can be performed using the command line.

To migrate volumes using the addvdiskcopy in the command-line interface, complete these
steps:
1. In the command-line interface, enter the following command to create a child pool.
a. mkmdiskgrp -name my_encrypted_child_pool -parentmdiskgrp mypool -encrypt yes
where my_encrypted_child_pool is the name of the new child pool and mypool is the
name of the parent pool.
2. To create a mirrored copy of each volume that is in the parent pool in the new child pool,
enter the following command in the CLI:
a. addvdiskcopy -autodelete -mdiskgrp my_encrypted_child_pool volume1
b. where my_encrypted_child_pool is the name of the new child pool and volume1 is the
name of the volume that is being copied. The -autodelete option automatically deletes
the primary copy of the volume after the new copy synchronizes.
3. Repeat step 2 until all the volumes from the original parent contain mirrored copies in the
new child pool. The empty parent pool must remain unused to use encrypted volumes in
the child pool.
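
To confirm that the remaining copy of each volume now resides in the encrypted child pool,
you can list the volume's copies; the name below is the same hypothetical one that is used
in the previous steps:

lsvdiskcopy volume1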

6.12.2 Using the GUI’s Migrate to Another Pool (migratevdisk)


The alternative method of encryption conversion (migration) is to use the migratevdisk
command. This migrates the specified volume into a new storage pool; all the extents that
make up the volume are migrated onto free extents in the new (encrypted) storage pool.
When you work without encryption enabled, the following migration options are valid:
򐂰 Child pool to its parent pool
򐂰 Parent pool to one of its child pools
򐂰 Between the child pools in the same parent pool
򐂰 Between two parent pools

However, with encryption enabled, the options are restricted as follows:

򐂰 A parent pool to parent pool migration is allowed in all cases.
򐂰 A parent pool to child pool migration is not allowed if the child pool has an encryption key.
򐂰 A child pool to parent pool or child pool migration is not allowed if either child pool has an
encryption key.

By way of example, let us consider a situation in which new "additional" external storage
LUNs are now managed by the Spectrum Virtualize V7.6 code that runs on SVC
2145-DH8 node hardware, and an encrypted pool was created from this managed
storage. Volumes that were created before Spectrum Virtualize V7.6 was implemented on
our cluster are unencrypted, but we can double-check their encryption status by customizing
the Volumes window view.


You can display the encryption status of a volume in the Volumes → Volumes window.

To do this, customize the attributes bar (right-click → select Encryption).

This alters the default view to include a column that displays the encryption status by using a
"key" icon.

We can take this unencrypted volume and migrate it to an encrypted version of itself by using
the GUI's migrate option (which runs the migratevdisk command). The selected target pool
must be encrypted for this conversion to take place.

Note: The target pool must be a “Parent Pool”. Currently it is not possible to use the
Migrate option between Parent and Child Pools.

The procedure to do this is as follows:


1. Open Volumes → Volumes or Pools → Volumes by Pool, and select the volume (it
becomes highlighted in darker blue).
a. Right-click → Migrate to Another Pool (Figure 6-104 on page 347).

Figure 6-104 Select Migrate to Another Pool

The Migrate to Another Pool option opens the Migrate Volume Copy window.


2. Select a new encrypted pool that has the same extent size as the pool that you are
migrating from (Figure 6-105).

Figure 6-105 Selection of new target pool for migration

Figure 6-105 shows the original unencrypted volume “unencrypted volume” and the new
encrypted target pool “New_Additional_SW_enc”.
3. After you confirm the target pool, select Migrate. The "Task completed" message
appears in the Migrate Volume Copy window (Figure 6-106).


Figure 6-106 Migration tasks using migratevdisk command completed

The Volumes → Volumes view shows the migrated volume with its new encryption
characteristic (Figure on page 349, "Encrypted Volume post migration").
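
For reference, the CLI equivalent of this GUI task is a single migratevdisk invocation; the
following sketch uses the pool name from our example and a hypothetical volume name
(the CLI does not accept spaces in object names, so we assume that the volume is named
unencrypted_volume):

migratevdisk -vdisk unencrypted_volume -mdiskgrp New_Additional_SW_enc -threads 4
lsmigrate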


6.14 Using volume mirroring and thin-provisioned volumes together

In this section, we show how you can use the volume mirroring feature and thin-provisioned
volumes together to move data from a fully allocated volume to a thin-provisioned volume.

6.14.1 Zero detect feature


The zero detect feature for thin-provisioned volumes enables clients to reclaim unused
allocated disk space (zeros) when a fully allocated volume is converted to a thin-provisioned
volume by using volume mirroring.

To migrate from a fully allocated volume to a thin-provisioned volume, complete the following
steps:
1. Add the target thin-provisioned copy.
2. Wait for synchronization to complete.
3. Remove the source fully allocated copy.
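
In CLI terms, these three steps map to the commands that are shown in detail in 6.14.2; a
compressed, hypothetical sketch for a volume named VD_Full and a target storage pool with
ID 1 follows (copy 0 is assumed to be the original, fully allocated copy):

addvdiskcopy -mdiskgrp 1 -rsize 2% -autoexpand VD_Full
lsvdisksyncprogress VD_Full
rmvdiskcopy -copy 0 VD_Full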

By using this feature, clients can free managed disk space easily and make better use of their
storage without the need to purchase any other functions for the SVC.

Volume mirroring and thin-provisioned volume functions are included in the base virtualization
license. Clients with thin-provisioned storage on an existing storage system can migrate their
data under SVC management by using thin-provisioned volumes without having to allocate
more storage space.

Zero detect works only if the disk contains zeros. An uninitialized disk can contain anything,
unless the disk is formatted (for example, by using the -fmtdisk flag on the mkvdisk
command).

Figure 6-107 shows the thin-provisioned volume zero detect concept.


Figure 6-107 Thin-provisioned volume zero detect feature

Figure 6-108 shows the thin-provisioned volume organization.

Figure 6-108 Thin-provisioned volume organization


As shown in Figure 6-108 on page 351, a thin-provisioned volume features the following
components:
򐂰 Used capacity
This term specifies the portion of real capacity that is used to store data. For
non-thin-provisioned copies, this value is the same as the volume capacity. If the volume
copy is thin-provisioned, the value increases from zero to the real capacity value as more
of the volume is written to.
򐂰 Real capacity
This capacity is the real allocated space in the storage pool. In a thin-provisioned volume,
this value can differ from the total capacity.
򐂰 Free capacity
This value is the difference between the real capacity and the used capacity. If the
volume copy is configured with the -autoexpand option, the SVC automatically expands
the real capacity as the used capacity grows, so that this amount of free (contingency)
capacity is maintained.
򐂰 Grains
This value is the smallest unit into which the allocated space can be divided.
򐂰 Metadata
This value is allocated in the real capacity, and it tracks the used capacity, real capacity,
and free capacity.

6.14.2 Volume mirroring with thin-provisioned volumes


The following example shows the use of the volume mirror feature with thin-provisioned
volumes:
1. We create a fully allocated volume of 15 GiB named VD_Full, as shown in Example 6-72.

Example 6-72 VD_Full creation example


IBM_2145:ITSO-CLS2:ITSO_admin>svctask mkvdisk -mdiskgrp 0 -iogrp 0 -mdisk
0:1:2:3:4:5 -node 1 -vtype striped -size 15 -unit gb -fmtdisk -name VD_Full

Virtual Disk, id [2], successfully created

======================================================================

IBM_2145:ITSO-CLS2:ITSO_admin>svcinfo lsvdisk VD_Full

id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status offline
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
capacity 15.00GB
type striped
formatted yes
.
.
vdisk_UID 60050768018401BF280000000000000B


mdisk_grp_name MDG_DS47
used_capacity 15.00GB
real_capacity 15.00GB
free_capacity 0.00MB
overallocation 100

2. We add a thin-provisioned volume copy with the volume mirroring option by using the
addvdiskcopy command and the autoexpand parameter, as shown in Example 6-73.

Example 6-73 The addvdiskcopy command


IBM_2145:ITSO-CLS2:ITSO_admin>svctask addvdiskcopy -mdiskgrp 1 -mdisk 6:7:8:9
-vtype striped -rsize 2% -autoexpand -grainsize 32 -unit gb VD_Full

VDisk [2] copy [1] successfully created

======================================================================

IBM_2145:ITSO-CLS2:ITSO_admin>svcinfo lsvdisk VD_Full

id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id many
mdisk_grp_name many
capacity 15.00GB
type many
formatted yes
mdisk_id many
mdisk_name many
vdisk_UID 60050768018401BF280000000000000B
sync_rate 50
copy_count 2
copy_id 0
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
used_capacity 15.00GB
real_capacity 15.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
copy_id 1
status online
sync no
primary no
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name


fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32

As you can see in Example 6-73, VD_Full now has a copy with copy_id 1 whose
used_capacity is 0.41 MB, which is equal to the metadata, because only zeros exist on the
disk. The real_capacity is 323.57 MB, which corresponds to the -rsize 2% value that is
specified in the addvdiskcopy command. The free_capacity is 323.17 MB, which is the real
capacity minus the used capacity.
Because only zeros are written on the disk, the thin-provisioned volume does not use space.
Example 6-74 shows that the thin-provisioned volume still does not use space even when the
copies are fully synchronized.

Example 6-74 Thin-provisioned volume display


IBM_2145:ITSO-CLS2:ITSO_admin>svcinfo lsvdisksyncprogress 2

vdisk_id vdisk_name copy_id progress


estimated_completion_time
2 VD_Full 0 100
2 VD_Full 1 100

======================================================================
IBM_2145:ITSO-CLS2:ITSO_admin>svcinfo lsvdisk VD_Full

id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id many
mdisk_grp_name many
capacity 15.00GB
type many
formatted yes
mdisk_id many
mdisk_name many
vdisk_UID 60050768018401BF280000000000000B
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 2
copy_id 0
status online
sync yes
primary yes


mdisk_grp_id 0
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 15.00GB
real_capacity 15.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
copy_id 1
status online
sync yes
primary no
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32

3. We can now either split the volume mirror or remove one of the copies, keeping the
thin-provisioned copy as our valid copy, by using the splitvdiskcopy command or the
rmvdiskcopy command.
If you need your copy as a thin-provisioned clone, we suggest that you use the
splitvdiskcopy command, because that command generates a new volume that you can
map to any server that you want.
If you are migrating from a fully allocated volume to a thin-provisioned volume without any
effect on the server operations, we suggest that you use the rmvdiskcopy command. In this
case, the original volume name is kept and the volume remains mapped to the same server.
Example 6-75 shows the splitvdiskcopy command.

Example 6-75 The splitvdiskcopy command


IBM_2145:ITSO-CLS2:ITSO_admin>svctask splitvdiskcopy -copy 1 -name VD_TPV
VD_Full

Virtual Disk, id [7], successfully created

======================================================================
IBM_2145:ITSO-CLS2:ITSO_admin>svcinfo lsvdisk -filtervalue name VD*

id name IO_group_id IO_group_name status


mdisk_grp_id mdisk_grp_name capacity type FC_id


FC_name RC_id RC_name vdisk_UID


fc_map_count copy_count fast_write_state
2 VD_Full 0 io_grp0 online
0 MDG_DS47 15.00GB striped
60050768018401BF280000000000000B 0 1 empty
7 VD_TPV 0 io_grp0 online
1 MDG_DS83 15.00GB striped
60050768018401BF280000000000000D 0 1 empty

======================================================================

IBM_2145:ITSO-CLS2:ITSO_admin>svcinfo lsvdisk VD_TPV

id 7
name VD_TPV
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
capacity 15.00GB
type striped
formatted no
vdisk_UID 60050768018401BF280000000000000D
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32

Example 6-76 shows the rmvdiskcopy command.

Example 6-76 The rmvdiskcopy command


IBM_2145:ITSO-CLS2:ITSO_admin>svctask rmvdiskcopy -copy 0 VD_Full


IBM_2145:ITSO-CLS2:ITSO_admin>svcinfo lsvdisk -filtervalue name=VD*

id name IO_group_id IO_group_name status


mdisk_grp_id mdisk_grp_name capacity type FC_id
FC_name RC_id RC_name vdisk_UID
fc_map_count copy_count fast_write_state
2 VD_Full 0 io_grp0 online
1 MDG_DS83 15.00GB striped
60050768018401BF280000000000000B 0 1 empty

IBM_2145:ITSO-CLS2:ITSO_admin>svcinfo lsvdisk 2

id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
capacity 15.00GB
type striped
formatted no
vdisk_UID 60050768018401BF280000000000000B
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 1
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32


Chapter 7. Volume creation and provisioning

This chapter describes how to use Spectrum Virtualize V7.6 code with SVC hardware to
create a volume and map it to a host. A volume is a logical disk provisioned out of a storage
pool that is recognized by a host by its unique identifier (UID) and a set of parameters.

The first part of this chapter provides a brief overview of Spectrum Virtualize volumes, the
classes of volumes available, and the topologies with which they are associated. It also
provides an overview of the advanced customization that is available.

The second part describes how to create volumes by using the GUI’s Quick and Advanced
volume creation menus and how to map these volumes to defined hosts.

The third part provides an introduction to the new volume manipulation commands, which are
designed to facilitate the creation and administration of volumes used for HyperSwap and
Enhanced Stretched Cluster topologies.

Note: Advanced host and volume administration, such as volume migration, creating
volume copies, and so on, is described in Chapter 9, “Advanced Copy Services” on
page 475.


7.1 An Introduction to Volumes


A volume is a logical disk that the system presents to attached hosts. For an SVC cluster, the
volume that is presented is a virtual disk (VDisk): a discrete area of usable storage that has
been virtualized, using Spectrum Virtualize V7.6 code, from SAN storage that is managed by
the SVC cluster. The term virtual is used because the volume presented does not necessarily
exist on a single physical entity.

A basic volume simply presents an area of usable storage that the host can access to perform
I/O. However, additional advanced features can be used to customize the properties of a basic
volume to provide capacity savings (thin-provisioning and compression) and enhanced
availability using mirroring. A mirrored volume is a volume with two physical copies; each
volume copy belongs to a different storage pool, and each copy has the same virtual capacity
as the volume.

With Spectrum Virtualize V7.5 code, we expanded the advanced features of SVC volumes to
support High Availability (HA) using HyperSwap. Specific configuration requirements are
needed to support this volume class, and in the V7.6 GUI we have introduced assisted
configuration, using GUI wizards, that simplifies their creation. These wizards simplify
creation by ensuring that only valid, site-specific configuration options are presented.

Note: SVC hardware continues to support HA volumes in an Enhanced Stretched Cluster


topology, by physically “splitting” the nodes within an I/O group into two separate site
locations and using mirrored volume copies at these sites.

Stretched cluster solutions cannot be created using Storwize hardware.

Spectrum Virtualize V7.6 code also incorporates a class of volume designed to provide
enhanced support with VMware, by supporting VMware vSphere Virtual Volumes, sometimes
referred to as VVols, which allow VMware vCenter to manage system objects like volumes
and pools.

The SVC volume presented is derived from a virtual disk created from managed virtualized
storage (MDisks). Application servers access volumes, not MDisks or drives, and each
volume copy is created from a set of MDisk extents managed in a storage pool.

Note: A managed disk (MDisk) is a logical unit of physical storage. MDisks are either
arrays (RAID) from internal storage or external physical disks that are presented as a
single logical disk on the storage area network (SAN). Each MDisk is divided into a number
of extents, which are numbered sequentially from 0, from the start to the end of the MDisk.
The extent size is a property of the storage pool that the MDisk is added to.

Attention: MDisks are not visible to host systems.

The type attribute of a volume defines the allocation of extents that make up the volume copy
(a CLI sketch of the three types follows this list):
򐂰 A striped volume contains a volume copy that has one extent allocated in turn from each
MDisk that is in the storage pool. This is the default option, but you can also supply a list of
MDisks to use as the stripe set (Figure 7-1).


Attention: By default, striped volume copies are striped across all MDisks in the storage
pool. If some of the MDisks are smaller than others, the extents on the smaller MDisks are
used up before the larger MDisks run out of extents. Manually specifying the stripe set in
this case might result in the volume copy not being created.

If you are unsure if there is sufficient free space to create a striped volume copy, select one
of the following options:
򐂰 Check the free space on each MDisk in the storage pool using the lsfreeextents
command.
򐂰 Let the system automatically create the volume copy by not supplying a specific stripe
set.

Figure 7-1 Striped extent allocation

򐂰 A sequential volume contains a volume copy that has extents allocated sequentially on one
MDisk.
򐂰 Image-mode volumes are a special type of volume that have a direct relationship with one
MDisk. They are used when you have an MDisk that contains data that you want to merge
into the clustered system.
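The following commands are a minimal CLI sketch of the three volume types; the pool,
MDisk, and volume names are hypothetical, and only the parameters relevant to the extent
allocation are shown.

# Striped volume with an explicit stripe set of two MDisks
svctask mkvdisk -mdiskgrp Pool0 -iogrp 0 -vtype striped -mdisk mdisk0:mdisk1 -size 10 -unit gb -name Striped_VD
# Sequential volume with extents allocated from a single MDisk
svctask mkvdisk -mdiskgrp Pool0 -iogrp 0 -vtype seq -mdisk mdisk2 -size 10 -unit gb -name Seq_VD
# Image mode volume that preserves the existing data on an unmanaged MDisk
svctask mkvdisk -mdiskgrp Pool0 -iogrp 0 -vtype image -mdisk mdisk3 -name Image_VD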

It is also possible to define the cache characteristics of a volume:


򐂰 readwrite — All read and write I/O operations that are performed on the volume are stored
in cache. This is the default cache mode for all volumes.
򐂰 readonly — All read I/O operations that are performed on the volume are stored in cache.
򐂰 none — Read and write I/O operations that are performed on the volume are not stored
in cache.

A Basic volume is the simplest form of volume. It consists of a single volume copy, made up of
extents striped across all MDisks in a storage pool. It services I/O using readwrite cache and
is classified as fully allocated, that is, the reported real capacity and virtual capacity are equal.

You can create other forms of volumes, depending on the type of topology that is configured
on your system.
򐂰 With standard topology, which is a single-site configuration, you can create a basic volume
or a mirrored volume.
– By using volume mirroring, a volume can have two physical copies. Each volume copy
can belong to a different pool, and each copy has the same virtual capacity as the
volume. In the management GUI, an asterisk indicates the primary copy of the mirrored
volume. The primary copy indicates the preferred volume for read requests.

򐂰 With HyperSwap topology, which is a three-site high availability configuration, you can
create a basic volume or a HyperSwap volume.
– HyperSwap volumes create copies on separate sites for systems that are configured
with HyperSwap topology. Data that is written to a HyperSwap volume is automatically
sent to both copies so that either site can provide access to the volume if the other site
becomes unavailable.
򐂰 With Stretched topology, which is a three-site disaster-resilient configuration, you can
create a basic volume or a Stretched volume.

There is also a custom volume class. This class is available across all topologies and enables
additional customization, specific to the topology the volume is created in, including the
method of capacity savings.
Thin-provisioned When you create a volume, you can designate it as thin-provisioned. A
thin-provisioned volume has a virtual capacity and a real capacity.
Virtual capacity is the volume storage capacity that is available to a
host. Real capacity is the storage capacity that is allocated to a
volume copy from a storage pool. In a fully allocated volume, the
virtual capacity and real capacity are the same. In a thin-provisioned
volume, however, the virtual capacity can be much larger than the real
capacity. Finally, Thin-Mirrored volumes combine the characteristics
of mirrored and thin-provisioned volumes.
Compressed This is a special type of volume where data is compressed as it is
written to disk, saving additional space. To use the compression
function, you must obtain the IBM Real-time Compression license.

V7.6 code also introduces Virtual Volumes. These are available in a system configuration
that supports VMware vSphere Virtual Volumes, sometimes referred to as VVols, which
allow VMware vCenter to manage system objects like volumes and pools. SVC administrators
can create volume objects of this class and assign ownership to VMware administrators to
simplify management.

Note: From V7.4 it is possible to prevent the accidental deletion of volumes if they have
recently performed any I/O operations. This feature, called volume protection, prevents
active volumes or host mappings from being deleted inadvertently. It is controlled by a
global system setting. For more information, see the “Enabling volume protection” topic in
the IBM Knowledge Center:

http://www.ibm.com/support/knowledgecenter/STPVGU_7.4.0
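As a hedged illustration only, volume protection is enabled system-wide with the chsystem
command; the parameter names below are as documented for recent code levels, and the
15-minute window is an arbitrary example value.

# Enable volume protection; volumes with I/O in the last 15 minutes cannot be deleted
svctask chsystem -vdiskprotectionenabled yes -vdiskprotectiontime 15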


7.2 Create Volumes menu


The GUI is the simplest means of volume creation and presents different options in the
Create Volumes menu depending on the topology of the system.

To start the process of creating a new volume, click the dynamic menu (function icons), open
the Volumes menu, and click the Volumes option of the IBM SVC graphical user interface
(Figure 7-2).

Figure 7-2 Volumes menu

A list of existing volumes, their state, capacity and associated storage pools is displayed.

To define a new Basic volume, click Create Volumes option on the tab header (Figure 7-3).

Figure 7-3 Create Volume window

The Create Volumes tab opens the Create Volumes menu, which displays two creation
methods, Quick Volume Creation and Advanced, and the available volume classes.

Note: The volume classes that are displayed on the Create Volumes menu depend on the
topology of the system.


Volumes can be created using the Quick Volume Creation submenu or the Advanced
submenu, as shown in Figure 7-4.

Figure 7-4 Quick and Advanced Volume Creation options

In the example above, the Quick Volume Creation submenu shows icons that enable the quick
creation of Basic and Mirrored volumes (in standard topology), and the Advanced submenu
shows a Custom icon that can be used to customize the parameters of volumes. Custom
volumes are discussed in more detail later in this section.
򐂰 For a HyperSwap topology, the Create Volumes menu is displayed as shown in Figure 7-5.

Figure 7-5 Create Volumes menu with HyperSwap Topology

򐂰 For a Stretched topology, the Create Volumes menu is displayed as shown in Figure 7-6.

Figure 7-6 Create Volumes with a Stretched Topology

Independent of the topology of the system, the Create Volumes menu always displays a Basic
volume icon in the Quick Volume Creation submenu and a Custom volume icon in the
Advanced submenu.


Clicking any of the three icons in the Create Volumes window opens a drop-down window
where volume details can be entered. The example in Figure 7-7 uses a Basic volume to
demonstrate this.

Figure 7-7 Quick Volume Creation submenu

Notes:
򐂰 A Basic volume is a volume whose data is striped across all available managed disks
(MDisks) in one storage pool.
򐂰 A Mirrored volume is a volume with two physical copies, where each volume copy can
belong to a different storage pool.
򐂰 A Custom volume, in the context of this menu, is either a Basic or Mirrored volume whose
default parameters have been customized.

Quick Volume Creation also provides, using the Capacity Savings parameter, the ability
to change the default provisioning of a Basic or Mirrored volume to Thin-provisioned or
Compressed. See “Quick Volume Creation - with Capacity Saving options” on page 371.

Note: Advanced host and volume administration, such as volume migration, creating
volume copies, and so on, is described in Chapter 9, “Advanced Copy Services” on
page 475.


7.3 Creating volumes using the Quick Volume Creation


In this section, we focus on using the Quick Volume Creation menu to create Basic and
Mirrored volumes in a system with standard topology, and on creating host-to-volume
mappings. As previously stated, Quick Volume Creation is available for four different volume
classes:
򐂰 Basic
򐂰 Mirrored
򐂰 HyperSwap
򐂰 Stretched

Note: The ability to create HyperSwap volumes using the GUI is a new feature with
Spectrum Virtualize V7.6 code and significantly simplifies their creation and configuration.
This simplification is enhanced by the GUI using a new command, mkvolume.

7.3.1 Creating Basic volumes using Quick Volume Creation


The most commonly used type of volume is the Basic volume. This type of volume is fully
provisioned, with the entire size dedicated to the defined volume. The host and the SVC see
the fully allocated space.

Create a Basic volume by clicking the Basic icon (Figure 7-4). This opens an additional input
window where you can define:
򐂰 Pool: The Pool in which the volume will be created (drop-down)
򐂰 Quantity: Number of volumes to be created (numeric up/down)
򐂰 Capacity: Size of the volume
– Units (drop-down)
򐂰 Capacity Savings: (drop-down)
– None
– Thin-provisioned
– Compressed
򐂰 Name: Name of the Volume (cannot start with a numeric)
򐂰 I/O group

Figure 7-8 shows the creation of a Basic volume.


Figure 7-8 Creating Basic volume

We suggest using an appropriate naming convention for volumes to help you easily identify
the associated host or group of hosts. At a minimum, the name should contain the name of
the pool, or some tag that identifies the underlying storage subsystem. It can also contain the
name of the host that the volume will be mapped to, or perhaps an indication of the content of
the volume, for example, the name of the application to be installed.

Once all the characteristics of the Basic volume have been defined, it can be created by
selecting one of the following options:
򐂰 Create
򐂰 Create and Map to Host

In this example, we chose the Create option (the volume-to-host mapping can be performed
later). Once selected, you should see the confirmation window shown in Figure 7-9.

Note: The “+” icon highlighted in green can be used to create additional volumes in the
same instance of the volume creation wizard.


Figure 7-9 Create Volume Task Completion window: Success

Success will also be indicated by the state of the Basic volume being reported as formatting in
the Volumes screen (Figure 7-10).

Figure 7-10 Basic Volume Fast-Format (introduced in 7.5 code)

Notes:
򐂰 The V7.5 release changed the default behavior of volume creation to introduce a new
feature that fills a fully allocated volume with zeros as a background task. That is,
Basic volumes are automatically formatted through the quick initialization process. This
process makes fully allocated volumes available for use immediately.
򐂰 Quick initialization requires a small amount of I/O to complete and limits the number of
volumes that can be initialized at the same time. Some volume actions such as moving,
expanding, shrinking, or adding a volume copy are disabled when the specified volume
is initializing. Those actions are available after the initialization process completes.
򐂰 The quick initialization process can be disabled in circumstances where it is not
necessary. For example, if the volume is the target of a Copy Services function, the
Copy Services operation formats the volume.The quick initialization process can also
be disabled for performance testing so that the measurements of the raw system
capabilities can take place without waiting for the process to complete.

Reference: IBM Knowledge Center, Fully allocated volumes: https://ibm.biz/BdHv8w
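For reference, the GUI performs this quick creation with a single CLI command. The
following line is a minimal sketch of a roughly equivalent mkvdisk invocation, assuming a
hypothetical pool name Pool0 and volume name Basic_VD01; the GUI may add further
parameters.

# Create a fully allocated 100 GiB basic volume in Pool0, owned by I/O group 0
svctask mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -name Basic_VD01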


7.3.2 Creating Mirrored volumes using Quick Volume Creation


IBM SVC offers the capability to mirror volumes, which means that a single volume, presented
to a host, can have two physical copies. Each volume copy can belong to a different pool, and
each copy has the same virtual capacity as the volume. When a server writes to a mirrored
volume, the system writes the data to both copies; when it reads a mirrored volume, the
system picks one of the copies to read. Normally, this is the primary copy (indicated in the
management GUI by an asterisk (*)). If one of the mirrored volume copies is temporarily
unavailable, for example, because the storage system that provides the pool is unavailable,
the volume remains accessible to servers. The system remembers which areas of the volume
are written to and resynchronizes these areas when both copies are available.

Mirrored volumes can be used for:


򐂰 Improving availability of volumes by protecting them from a single storage system failure.
򐂰 Providing concurrent maintenance of a storage system that does not natively support
concurrent maintenance.
򐂰 Providing an alternative method of data migration with better availability characteristics.
򐂰 Converting between fully allocated volumes and thin-provisioned volumes.

Note: Volume mirroring is not a true disaster recovery solution, because both copies are
accessed by the same node pair and addressable by only a single cluster, but it can
improve availability.

To create a mirrored volume, complete the following steps:


1. In the Create Volumes window, click Mirrored (Figure 7-11 on page 370) and enter the
Volume Details: Quantity, Capacity, Capacity savings, and Name. Next, in the
Mirrored copies subsection, choose the pool of Copy1 and Copy2 using the drop-down
menus. Although the mirrored volume can be created in the same pool, this is not typical.
We suggest that you keep mirrored volumes on separate sets of physical disks (pools).
Leave the I/O group option at its default setting of Automatic (Figure 7-11).


Figure 7-11 Mirrored Volume creation

2. Click Create (or Create and Map to Host).

3. Next, the GUI displays the underlying CLI commands being executed to create the
mirrored volume and indicates completion (Figure 7-12).


Figure 7-12 Task complete - created Mirrored Volume

Note: When creating a Mirrored volume using this menu, you are not required to specify
the Mirror sync rate; it defaults to 16 MiB. This synchronization rate can be customized
using the Advanced - Custom menu.
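For reference, a roughly equivalent CLI invocation is sketched below; the pool and volume
names are hypothetical, and -copies 2 with two pools produces the two mirrored copies.

# Create a 100 GiB volume with two synchronized copies, one in Pool0 and one in Pool1
svctask mkvdisk -mdiskgrp Pool0:Pool1 -iogrp 0 -copies 2 -size 100 -unit gb -name Mirrored_VD01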

Quick Volume Creation - with Capacity Saving options


The Quick Volume Creation menu also provides, using the Capacity Savings parameter, the
ability to alter the provisioning of a Basic or Mirrored volume to Thin-provisioned or
Compressed. This is achieved by selecting either Thin-provisioned or Compressed from the
drop-down menu (Figure 7-13 on page 372).


Figure 7-13 Quick Volume Creation - with Capacity Saving option set to Compressed

Alternatively, select Thin-provisioned from the drop-down menu to define a
Thin-provisioned volume.

7.4 Mapping a volume to the host


Once created, a volume can be mapped to a host. From the Volumes menu, highlight the
volume that you want to create a mapping for and then select Actions from the menu bar.

Tip: An alternative way of opening the Actions menu is to highlight (select) a volume and
then use the right mouse button.

1. From the Actions menu select the Map to Host option. (Figure 7-14).


Figure 7-14 Map to Host

2. This opens the Map to Host window. In this window, use the Select the Host drop-down to
select a host to map the volume to (Figure 7-15).

Figure 7-15 Mapping a Volume to Host

3. Select the host from the drop-down, tick the selection, and then click Map (Figure 7-16 on page 374).


Figure 7-16 Selected host from Drop-down.

4. The Modify Mappings window displays the command details and then a Task completed
message (Figure 7-17).

Figure 7-17 Successful completion of Host to Volume mapping
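For reference, a host mapping can also be created and verified from the CLI; a minimal
sketch follows, assuming a hypothetical host Win2008_FC and volume Basic_VD01.

# Map the volume to the host (the SCSI ID is assigned automatically if -scsi is omitted)
svctask mkvdiskhostmap -host Win2008_FC Basic_VD01
# List the volumes that are mapped to the host
svcinfo lshostvdiskmap Win2008_FC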


7.5 Creating Custom volumes using Advanced


The Advanced menu of the Create Volumes window enables Custom volume creation. It
provides an alternative method of defining the Capacity savings options, that is,
Thin-provisioning and/or Compression, and also expands on the base-level default options for
Basic and Mirrored volumes. A Custom volume can be customized with respect to Mirror sync
rate, Cache mode, and Fast-Format.

The Advanced menu consists of a number of submenus:

򐂰 Volume Details (mandatory; defines the Capacity savings option)
򐂰 Volume Location (mandatory; defines the pool or pools to be used)
򐂰 Thin Provisioning
򐂰 Compression
򐂰 General (for changing the default options for Cache mode and/or Formatting)
򐂰 Summary

Work through these submenus to customize your Custom volume as desired, and then commit
these changes using Create (Figure 7-18). Additional subsections must be completed if the
corresponding options are selected.

Figure 7-18 Customization Submenus

7.5.1 Creating a Custom Thin-provisioned volume


A thin-provisioned volume can be defined and created using the Advanced menu. With
respect to application reads and writes, thin-provisioned volumes behave as though they
were fully allocated. When creating a thin-provisioned volume, you may specify two
capacities:

Chapter 7. Volume creation and provisioning 375


7933 05A VOLUME CONFIGURATION JON.fm Draft Document for Review February 4, 2016 8:01 am

򐂰 The real physical capacity allocated to the volume from the storage pool. The real capacity
determines the quantity of extents that are initially allocated to the volume.
򐂰 Its virtual capacity available to the host. The virtual capacity is the capacity of the volume
that is reported to all other components (for example, FlashCopy, cache, and remote copy)
and to the hosts.

To create a thin-provisioned volume, complete the following steps:


1. From the Create Volumes window, select the Advanced option. This opens the
Volume Details subsection, where the Quantity, Capacity (virtual), Capacity Savings
(choose Thin-provisioned from the drop-down), and Name of the volume being created
can be entered (Figure 7-19).

Figure 7-19 Create a thin-provisioned volume

2. Next, click the Volume Location subsection to define the pool in which the volume will be
created. Use the drop-down in the Pool option to choose the pool. All other options,
Volume copy type, Caching I/O grp, Preferred node, and Accessible I/O groups, can
be left at their default settings (Figure 7-20 on page 376).

Figure 7-20 Volume Location



3. Next, click the Thin Provisioning subsection to manage the real and virtual capacity, the
expansion criteria, and the grain size (Figure 7-21 on page 377).

Figure 7-21 Thin Provisioning

The Thin Provisioning options are as follows (defaults are shown in parentheses); a CLI
sketch of these parameters follows the Important note below:

– Real capacity: (2%). Specify the size of the real capacity space used during creation.
– Automatically Expand: (Enabled). This option enables the automatic expansion of
real capacity if additional capacity must be allocated.
– Warning threshold: (Enabled). Enter a threshold for receiving capacity alerts.
– Grain Size: (32 KiB). Specify the grain size for real capacity. This describes the size of
the chunk of storage that is added to used capacity. For example, when the host writes 1
MB of new data, the capacity is increased by adding four chunks of 256 KiB each.

Important: If you do not use the autoexpand feature, the volume will go offline after
reaching its real capacity.

The GUI also defaults to a grain size of 32 KiB; however, 256 KiB is the default when
using the CLI. The optimum choice of grain size depends on the volume use
type. See: http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003982
򐂰 If you are not going to use the thin-provisioned volume as a FlashCopy® source
or target volume, use 256 KiB to maximize performance.
򐂰 If you are going to use the thin-provisioned volume as a FlashCopy source or
target volume, specify the same grain size for the volume and for the FlashCopy
function.
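As a hedged illustration of how these options map to the CLI, the following line is a
minimal sketch only; the pool and volume names are hypothetical and other parameters are
left at their defaults.

# 100 GiB virtual capacity, 2% real capacity, autoexpand on, warning at 80%, 256 KiB grain
svctask mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -rsize 2% -autoexpand -warning 80% -grainsize 256 -name Thin_VD01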


4. Apply all required changes and click Create to define the volume (Figure 7-22).

Figure 7-22 Task complete: thin-provisioned volume created

5. Again, you can start the wizard for mapping this volume to a host directly by clicking
Create and Map to Host.

7.5.2 Creating Custom Compressed volumes


The configuration of compressed volumes is similar to that of thin-provisioned volumes. To
create a compressed volume, complete the following steps:
1. From the Create Volumes window, select the Advanced option. This opens the
Volume Details subsection, where the Quantity, Capacity (virtual), Capacity Savings
(choose Compressed from the drop-down), and Name can be entered (Figure 7-23 on
page 378).

Figure 7-23 Defining a volume as compressed using the Capacity Savings option


2. Open the Volume Location subsection and select, from the drop-down menu, the pool in
which the compressed volume will be created (use the defaults for all other parameters).
3. Open the Compression subsection and check that Real Capacity is set to a minimum of
the default value of 2% (use the defaults for all other parameters). See Figure 7-24 on
page 379.

Figure 7-24 Checking Compressed volume Custom / Advanced settings

4. Confirm and commit the selection by clicking Create.
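As with thin-provisioned volumes, a rough CLI equivalent can be sketched; the names below
are hypothetical, and Real-time Compression must be licensed on the system.

# Create a 100 GiB compressed volume; real capacity starts at 2% and expands automatically
svctask mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -rsize 2% -autoexpand -compressed -name Comp_VD01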

7.5.3 Custom Mirrored Volumes


The Advanced option in the Create Volumes window is used to customize volume creation.
Using this feature, the default options can be overridden and volume creation can be tailored
to the specifics of the client's environment.

Modifying the Mirror sync rate


The Mirror sync rate can be changed from its default setting using the Volume Location
subsection of the Advanced option in the Create Volumes window. This option sets the
priority of copy synchronization progress, allowing a preferential rate to be set for more
important volumes (Figure 7-25 on page 380).


Figure 7-25 Customization of Mirrored Sync rate

The progress of formatting and synchronization of a newly created mirrored volume can be
checked from the Running Tasks menu. This menu reports the progress of all currently
running tasks, including Volume Format and Volume Synchronization (Figure 7-26 on
page 381).


Figure 7-26 Running Tasks: volume format and synchronization progress

Creating a Custom Thin-provisioned Mirrored volume


The summary shows you the capacity information and the allocated space. You can click
Advanced and customize the thin-provisioning settings or the mirror synchronization rate.
After you create the volume, the confirmation window opens (Figure 7-27 on page 382).


Figure 7-27 Confirmation window

The initial synchronization of thin-mirrored volumes is fast when a small amount of real and
virtual capacity is used.

7.6 Stretched Volumes


Use the Modify System Topology wizard to set up your SVC cluster in a stretched topology:

System → Actions → Modify System Topology (Figure 7-28 on page 383)

To complete the switch to a stretched topology, you need to assign site awareness to the
following cluster objects:
򐂰 Host(s)
򐂰 Controller(s) (External Storage)
򐂰 Nodes

Site awareness must be defined for each of these object classes before the new stretched
topology can be set.


Figure 7-28 Modify Topology Wizard

Assign the site names. All three fields must be assigned before proceeding with Next
(Figure 7-29 on page 383).

Figure 7-29 Assign Site Names


Next, hosts and storage must be assigned sites. Each host must be assigned to a site; see
Figure 7-30 for an example host.

Figure 7-30 Assign Hosts to a Site

Next, controllers, including the quorum controller, must be assigned a site (Figure 7-31).

Figure 7-31 Assign External Storage to a Site.

The next objects to be set with site awareness are the nodes (Figure 7-32 on page 385).


Figure 7-32 Assign Nodes to Sites

After completing the site awareness assignments for the Host, Controller, and Node objects,
you can now proceed to change the topology to Stretched.

Note: The above example shows a two-node cluster, which is an unsupported stretched
configuration. A more accurate representation is shown in Table 7-1 on page 386.


Table 7-1 4 x Node Cluster Stretched example

A summary of the new topology configuration is displayed before the change is committed.
For our two-node example, this is shown in Figure 7-33.

Figure 7-33 Summary of Node Site assignments before change committed

Select Finish to commit the changes (Figure 7-34 on page 387).


Figure 7-34 Stretched topology now set
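For reference, the same site assignments and topology change can be sketched with CLI
commands; the site, host, controller, and node names below are hypothetical, and this is an
outline only, not a complete procedure.

# Name the sites (site IDs 1 and 2 hold data, site 3 is the quorum site)
svctask chsite -name ITSO_Site1 1
svctask chsite -name ITSO_Site2 2
# Assign hosts, external storage controllers, and nodes to sites
svctask chhost -site 1 Win2008_FC
svctask chcontroller -site 1 controller0
svctask chnode -site 1 node1
# Once every object has a site assignment, switch the system topology
svctask chsystem -topology stretched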

With the topology now set, the Stretched icon is visible in the Quick Volume Creation menu.
Select it and define the volume's attributes. Note that the creation options are restricted
based on the site awareness attributes of the controllers (Figure 7-35 on page 388).


Figure 7-35 Creating Stretched Volume

The bottom left section of the example (Figure 7-35) summarizes the volume creation
activities about to be performed, that is, one volume with two copies at sites "Site1" and "Site2".

7.7 HyperSwap and the mkvolume command


HyperSwap volume configuration is not possible until site awareness has been configured.

In this section we discuss the new mkvolume command and how the GUI uses this command,
when HyperSwap topology has been configured, instead of the “traditional” mkvdisk
command. The GUI continues to use mkvdisk when all other classes of volumes are created.

Note: It is still possible to create HyperSwap volumes as in the V7.5 release, as described
in the reference white paper:

https://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102538

Or in the IBM Redbook:

https://ibm.biz/BdHvgN


HyperSwap volumes are a new type of HA volume supported by the SVC. They supplement
the existing Enhanced Stretched Cluster topology solution and are built from two existing
SVC technologies: Metro Mirror and (VDisk) Volume Mirroring.

These technologies have been combined in an active-active configuration deployed using
Change Volumes (as used in Global Mirror with Change Volumes) to create, from a host
perspective, a single volume in an HA form. The volume presented is a combination of four
"traditional" volumes, but it is a single entity from a host (and administrative) perspective
(Figure 7-36).

Figure 7-36 What makes up a HyperSwap Volume

The GUI simplifies the complexity of HyperSwap volume creation, by only presenting the
volume class of HyperSwap as a Quick Volume Creation option after HyperSwap topology
has been configured.

In the following example HyperSwap topology has been configured and the Quick Volume
Creation window is being used to define a HyperSwap Volume. (Figure 7-37 on page 390).

The capacity and name characteristics are defined as for a Basic volume - highlighted in blue
in the example - and the mirroring characteristics are defined by the Site parameters -
highlighted in red.


Figure 7-37 HyperSwap Volume creation - with Summary of actions

The drop-downs assist in creation, and the summary (bottom left of the creation window)
indicates the actions that will be carried out once the Create option is selected. In the
example above (Figure 7-37), a single volume will be created, with volume copies in site1 and
site2. This volume will be in an active-active (Metro Mirror) relationship with additional
resilience provided by two change volumes.

The command executed to create this volume is shown in Figure 7-38 on page 391 and can
be summarized as:

svctask mkvolume -name <name_of_volume> -pool <X:Y> -size <Size_of_volume> -unit <units>


Figure 7-38 Example mkvolume command
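For illustration only, a hypothetical invocation with example values might look like the
following; the volume name, pool IDs, and size are not taken from the lab configuration.

# Create a HyperSwap volume with one copy in pool 0 (site1) and one in pool 1 (site2)
svctask mkvolume -name HS_Vol01 -pool 0:1 -size 100 -unit gb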

With a single mkvolume command we achieve the creation of a HyperSwap volume. Previously
(using the Spectrum Virtualize V7.5 release), this was only possible with careful planning and
by issuing multiple commands:
򐂰 mkvdisk master_vdisk
򐂰 mkvdisk aux_vdisk
򐂰 mkvdisk master_change_volume
򐂰 mkvdisk aux_change_volume
򐂰 mkrcrelationship -activeactive
򐂰 chrcrelationship -masterchange
򐂰 chrcrelationship -auxchange
򐂰 addvdiskaccess

7.7.1 Volume manipulation commands


Five new CLI commands for administering volumes are released in Spectrum Virtualize
V7.6, but the GUI continues to use the legacy commands for all volume administration, with
the exception of HyperSwap volume creation (mkvolume) and deletion (rmvolume).

The five new CLI commands for administering Volumes are:


򐂰 mkvolume
򐂰 mkimagevolume
򐂰 addvolumecopy*
򐂰 rmvolumecopy*
򐂰 rmvolume

Commands marked with * will not be available in the GA release of V7.6.

Also, lsvdisk now includes volume_id, volume_name, and function fields to easily identify the
individual VDisks that make up a HyperSwap volume. These views are "rolled up" in the GUI
to provide views that reflect the client's view of the HyperSwap volume and its site-dependent
copies, as opposed to the "low-level" VDisks and VDisk change volumes. For example, the
Volumes → Volumes view below (Figure 7-39 on page 392) shows the HyperSwap volume
"My hs volume" with an expanded view opened, using "+", to reveal the two volume copies
"My hs volume (London)" (Master VDisk) and "My hs volume (Hursley)" (Aux VDisk); that is,
we do not show the VDisk change volumes.

Figure 7-39 Hidden Change Volumes

Likewise, the status of the HyperSwap volume is reported at the "parent" level; that is, if one of
the copies is syncing or offline, the "parent" HyperSwap volume reflects this state (Figure 7-40).

Figure 7-40 Parent Volume reflects state of copy volume

The individual commands are briefly discussed in the next section; refer to the IBM Knowledge
Center for full details, and the current support status, of these new commands.
򐂰 mkvolume


Create a new empty volume using storage from existing storage pools. The type of volume
created is determined by the system topology and the number of storage pools specified.
Volume is always formatted (zeroed). This command can be used to create:
– Basic volume- any topology
– Mirrored volume- standard topology
– Stretched volume- stretched topology
– HyperSwap volume- hyperswap topology
򐂰 rmvolume
Remove a volume. For a HyperSwap volume this includes deleting the active-active
relationship and the change volumes.
The -force parameter of rmvdisk is replaced by individual override parameters, making
it clearer to the user exactly what protection they are bypassing.
򐂰 mkimagevolume
Create a new image mode volume. This command can be used to import a volume,
preserving existing data. It is implemented as a separate command to provide greater
differentiation between the action of creating a new empty volume and creating a volume
by importing data on an existing MDisk.
򐂰 addvolumecopy
Add a new copy to an existing volume. The new copy will always be synchronized from the
existing copy. For stretched and hyperswap topology systems this creates a highly
available volume. This command can be used to create:
– Mirrored volume- standard topology
– Stretched volume- stretched topology
– HyperSwap volume- hyperswap topology
򐂰 rmvolumecopy
Remove a copy of a volume. Leaves the volume intact. Converts a Mirrored, Stretched or
HyperSwap volume into a basic volume. For a HyperSwap volume this includes deleting
the active-active relationship and the change volumes.
Allows a copy to be identified simply by its site.
The –force parameter with rmvdiskcopy is replaced by individual override parameters,
making it clearer to the user exactly what protection they are bypassing.

7.8 Mapping Volumes to Host


You can map the newly created volume to the host at creation time or map it later. If you did
not click Create and Map to Host when you created the volume, follow the steps in 7.8.1,
“Mapping newly created volumes to the host using the wizard” on page 394.

7.8.1 Mapping newly created volumes to the host using the wizard
We continue to map the volume that was created in 7.3, “Creating volumes using the Quick
Volume Creation” on page 366. We assume that you followed that procedure and clicked
Continue as, for example, shown in Figure 7-14 on page 373.

To map the volumes, complete the following steps:


1. Select a host to which the new volume should be attached (Figure 7-41).

Figure 7-41 Choose a host


2. The Modify Host Mappings window opens, and your host and the newly created volume
are already selected. Click Map Volumes to map the volume to the host (Figure 7-42).

Figure 7-42 Modify mappings

3. The confirmation window shows the result of mapping volume task (Figure 7-43).

Figure 7-43 Confirmation of volume to host mapping

4. After the task completes, the wizard returns to the Volumes window. By double-clicking the
volume, you can see the host maps (Figure 7-44).


Figure 7-44 Host maps

The host is now able to access the volumes and store data on them. See 7.9, “Discovering
volumes on hosts and multipathing” on page 396 for information about discovering the
volumes on the host and making additional host settings, if required.

You can also create multiple volumes in preparation for discovering them later, and customize
mappings.

7.9 Discovering volumes on hosts and multipathing


This section explains how to discover the volumes that were created and mapped in 7.3,
“Creating volumes using the Quick Volume Creation” on page 366 and 7.8, “Mapping
Volumes to Host” on page 393, and how to configure additional multipath settings, if required.

We assume that you have completed all previous steps in this book so that the hosts and the
IBM SVC are prepared:
򐂰 Prepare your operating systems for attachment (Chapter 4, “Initial configuration” on
page 133).
򐂰 Create hosts using the GUI (Chapter 4, “Initial configuration” on page 133).
򐂰 Perform basic volume configuration and host mapping.

Our examples illustrate how to discover Fibre Channel and Internet Small Computer System
Interface (iSCSI) volumes on Microsoft Windows 2008 and VMware ESX 4.x hosts.

From the dynamic menu of the IBM SVC GUI, click the Hosts icon to open the Hosts menu,
and click the Hosts option (Figure 7-45).


Figure 7-45 Navigate to hosts menu

An overview of the configured and mapped hosts is displayed (Figure 7-46).

Figure 7-46 The existing hosts

7.9.1 Windows 2008 Fibre Channel volume attachment


To complete Fibre Channel volume attachment in Windows 2008, use the following steps:
1. Right-click your Windows 2008 Fibre Channel host in the Hosts view (Figure 7-47) and
select Properties.

Figure 7-47 Host properties


Navigate to the Mapped Volumes tab (Figure 7-48).

Figure 7-48 Mapped volumes to a host

The host details show you which volumes are currently mapped to the host, and you also
see the volume UID and the SCSI ID. In our example, four volumes with SCSI ID 0-3 are
mapped to the host.
2. Log on to your Microsoft host and click Start → All Programs → Subsystem Device
Driver DSM → Subsystem Device Driver DSM. A command-line interface opens. Type
the datapath query device command and press Enter to display IBM SVC disks that are
connected to this host (Example 7-1).

Example 7-1 datapath query device


C:\Program Files\IBM\SDDDSM>datapath query device

Total Devices : 4

DEV#: 0 DEVICE NAME: Disk5 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 60050764008200083800000000000019 LUN SIZE: 50.0GB
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 * Scsi Port2 Bus0/Disk5 Part0 OPEN NORMAL 0 0
1 * Scsi Port2 Bus0/Disk5 Part0 OPEN NORMAL 0 0
2 Scsi Port2 Bus0/Disk5 Part0 OPEN NORMAL 7554 0
3 Scsi Port2 Bus0/Disk5 Part0 OPEN NORMAL 7860 0
4 * Scsi Port3 Bus0/Disk5 Part0 OPEN NORMAL 0 0
5 * Scsi Port3 Bus0/Disk5 Part0 OPEN NORMAL 0 0
6 Scsi Port3 Bus0/Disk5 Part0 OPEN NORMAL 7693 0

DEV#: 1 DEVICE NAME: Disk6 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 6005076400820008380000000000001A LUN SIZE: 10.0GB
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk6 Part0 OPEN NORMAL 5571 0
1 Scsi Port2 Bus0/Disk6 Part0 OPEN NORMAL 5700 0
2 * Scsi Port2 Bus0/Disk6 Part0 OPEN NORMAL 0 0
3 Scsi Port3 Bus0/Disk6 Part0 OPEN NORMAL 5508 0
4 * Scsi Port2 Bus0/Disk6 Part0 OPEN NORMAL 0 0
5 Scsi Port3 Bus0/Disk6 Part0 OPEN NORMAL 5601 0
6 * Scsi Port3 Bus0/Disk6 Part0 OPEN NORMAL 0 0

DEV#: 2 DEVICE NAME: Disk7 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 6005076400820008380000000000001B LUN SIZE: 20.0GB


============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 * Scsi Port3 Bus0/Disk7 Part0 OPEN NORMAL 12 0
1 * Scsi Port2 Bus0/Disk7 Part0 OPEN NORMAL 17 0
2 * Scsi Port2 Bus0/Disk7 Part0 OPEN NORMAL 0 0
3 Scsi Port2 Bus0/Disk7 Part0 OPEN NORMAL 7355 0
4 Scsi Port2 Bus0/Disk7 Part0 OPEN NORMAL 7546 0
5 * Scsi Port3 Bus0/Disk7 Part0 OPEN NORMAL 0 0
6 Scsi Port3 Bus0/Disk7 Part0 OPEN NORMAL 7450 0

DEV#: 3 DEVICE NAME: Disk8 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 60050764008200083800000000000024 LUN SIZE: 1.0GB
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port3 Bus0/Disk8 Part0 OPEN NORMAL 59 0
1 Scsi Port2 Bus0/Disk8 Part0 OPEN NORMAL 47 0
2 Scsi Port2 Bus0/Disk8 Part0 OPEN NORMAL 46 0
3 * Scsi Port2 Bus0/Disk8 Part0 OPEN NORMAL 0 0
4 * Scsi Port2 Bus0/Disk8 Part0 OPEN NORMAL 0 0
5 Scsi Port3 Bus0/Disk8 Part0 OPEN NORMAL 59 0
6 * Scsi Port3 Bus0/Disk8 Part0 OPEN NORMAL 0 0

The output provides information about the mapped volumes. In our example, four disks are
connected (Disk5, Disk6, Disk7, and Disk8), and eight paths to the disks are available (the
State column indicates OPEN).
3. Open the Windows Disk Management window (Figure 7-49 on page 400) by clicking
Start → Run, and then type diskmgmt.msc, and click OK.


Figure 7-49 Windows Disk Management

Windows device discovery: Usually, Windows discovers new devices, such as disks,
by itself (the Plug and Play function). If you completed all the steps but do not see any disks,
click Action → Rescan Disks in Disk Management to discover the new volumes.

In our example, three of four disks are already initialized. We will use the fourth, unknown,
1 GB disk as an example for the next initialization steps.
4. Right-click the disk in the left pane and select Online (Figure 7-50).

Figure 7-50 Place a disk online


5. Right-click the disk again, select Initialize Disk (Figure 7-51), and click OK.

Figure 7-51 Initialize Disk menu

6. Right-click in the right pane and select New Simple Volume (Figure 7-52).

Figure 7-52 New Simple Volume

7. Follow the wizard and the volume is ready to use from your Windows host (Figure 7-53 on
page 402).


Figure 7-53 Volume is ready to use

The basic setup is now complete: the IBM SVC is configured, and the host is prepared to
access the volumes over several paths and can store data on the storage subsystem.
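
If you prefer to script the host-side disk preparation instead of using the Disk Management GUI,
the Windows diskpart utility can perform the same online, initialize, and format steps. The
following is a minimal sketch only; the disk number, volume label, and drive letter are
assumptions for our example and must be adapted to the numbers that the list disk command
shows on your host:

rem Save as svc_disk.txt and run with: diskpart /s svc_disk.txt
rem The disk number below is an assumption; confirm it with "list disk" first
list disk
select disk 5
online disk
attributes disk clear readonly
convert gpt
create partition primary
format fs=ntfs quick label="SVC_VOL"
assign letter=S

The same sequence can be reused for every additional SVC volume that is mapped to the host,
which is convenient when many volumes are provisioned at the same time.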

7.9.2 Windows 2008 iSCSI volume attachment


To perform iSCSI volume attachment in Windows 2008, complete the following steps:
1. Right-click your Windows 2008 iSCSI host in the Hosts view, click Properties, and click
the Port Definitions tab to see the defined host iSCSI address (Figure 7-54).

Figure 7-54 iSCSI host address


Clicking the Mapped Volumes tab shows you which volumes are currently mapped to the
host, and you also see the volume UID and the SCSI ID. In our example, there are no
mapped volumes so far (Figure 7-55).

Figure 7-55 Volumes mapped to iSCSI host

2. Log on to your Windows 2008 host and click Start → Administrative Tools → iSCSI
Initiator to open the iSCSI Configuration tab (Figure 7-56).

Figure 7-56 Windows iSCSI Configuration tab


3. Enter the IP address of one of the IBM SVC iSCSI ports and click Quick Connect
(Figure 7-57).

iSCSI IP addresses: The iSCSI IP addresses are different from the cluster and
canister IP addresses, and they are configured in Chapter 4, “Initial configuration” on
page 133.

Figure 7-57 iSCSI Quick Connect

The IBM SVC target is discovered and connected (Figure 7-58).

Figure 7-58 iSCSI Initiator target is connected


Now you have completed the steps to connect the storage disk to your iSCSI host, but you
are using only a single path at the moment. To enable multipathing for iSCSI targets, more
actions are required. Complete the following steps:
1. Click Start → Run and type cmd to open a command prompt. Run the following command
and press Enter (Example 7-2):
ServerManagerCMD.exe -install Multipath-IO

Example 7-2 Installing MPIO


C:\Users\Administrator>ServerManagerCmd.exe -Install Multipath-IO

Start Installation...
[Installation] Succeeded: [Multipath I/O] Multipath I/O.
<100/100>

Success: Installation succeeded.

2. Click Start → Administrative Tools → MPIO, click the Discover Multi-Paths tab, and
select the Add support for iSCSI devices check box (Figure 7-59).

Figure 7-59 Enable iSCSI MPIO

3. Click Add and at the prompt, confirm to reboot your host.


4. After reboot, select Start → Administrative Tools → iSCSI Initiator to open the iSCSI
Initiator Properties window (Configuration tab). Click the Discovery tab (Figure 7-60).

Figure 7-60 iSCSI Properties Discovery tab

5. Click Discover Portal, enter the IP address of another IBM SVC iSCSI port (Figure 7-61),
and click OK.

Figure 7-61 Discover Target Portal window


6. Return to the Targets tab (Figure 7-62); the new connection is listed there as Inactive.

Figure 7-62 Inactive target ports

7. Highlight the inactive port and click Connect. The Connect to Target window opens
(Figure 7-63).

Figure 7-63 Connect to a target


8. Select the Enable Multipath check box and click OK. The status of the second port now
indicates Connected (Figure 7-64).

Figure 7-64 Second target port connected

Repeat this step for each IBM SVC port you want to use for iSCSI traffic. You may have up
to four port paths to the system.


9. Click Devices → MPIO to ensure that the multipath policy for Windows 2008 is set to the
default, which is Round Robin with Subset (Figure 7-65), and click OK to close this view.

Figure 7-65 Round Robin with Subset

10.Map a volume to the iSCSI host if you have not done so already. In our example, we use a
2 GB disk.
11.Open the Windows Disk Management window (Figure 7-66 on page 410) by clicking
Start → Run, and then type diskmgmt.msc, and click OK.


Figure 7-66 Windows Disk Management window

12.Set the disk online, initialize it, create a file system on it, and then it is ready to use. The
detailed steps of this process are the same as described in 7.9.1, “Windows 2008 Fibre
Channel volume attachment” on page 397.

Now the storage disk is ready for use (Figure 7-67). In our example, we mapped a 2 GB disk
from an IBM SVC Generation 2 system to a Windows 2008 host by using the iSCSI protocol.

Figure 7-67 Windows Disk Management: Disk is ready to use
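
The portal discovery and verification steps can also be scripted with the Windows iscsicli
utility instead of the iSCSI Initiator GUI. The following is a minimal sketch only; the portal IP
address is a placeholder for one of the SVC iSCSI port addresses in your configuration:

C:\>iscsicli QAddTargetPortal <SVC_iSCSI_port_IP>
C:\>iscsicli ListTargetPortals
C:\>iscsicli ListTargets

After MPIO is installed and all portals are added, the mpclaim -s -d command lists the MPIO
disks and can be used to confirm that multiple paths are present for each SVC volume.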


7.9.3 VMware ESX Fibre Channel attachment


To do the VMware ESX Fibre Channel attachment, complete the following steps:
1. Right-click your VMware ESX Fibre Channel host in the Hosts view, select Properties,
and then click the Mapped Volumes tab (Figure 7-68).

Figure 7-68 Mapped volume to ESX Fibre Channel (FC) host

The Host Details window shows that one volume is connected to the ESX FC host using
SCSI ID 0. The UID of the volume is also displayed.
2. Connect to your VMware ESX Server using the vSphere client, navigate to the
Configuration tab, and select Storage Adapters or Storage view (Figure 7-69).

Figure 7-69 vSphere Client: Storage adapters

3. Select Rescan All and click OK (Figure 7-70 on page 412) to scan for new storage
devices.


Figure 7-70 Rescan devices

4. Select Storage and click Add Storage (Figure 7-71).

Figure 7-71 vSphere Client: Storage

5. The Add Storage wizard opens. Click Select Disk/LUN and click Next. The IBM SVC disk
is displayed (Figure 7-72 on page 413). Select it and click Next.


Figure 7-72 Select Disk/LUN menu

6. Follow the wizard to complete the attachment of the disk. After you click Finish, the wizard
closes and you return to the storage view.
Figure 7-73 shows that the new volume is added to the configuration.

Figure 7-73 Add Storage task complete

7. Highlight the new data store and click Properties to see the details of it (Figure 7-74 on
page 414).


Figure 7-74 Data store properties

8. Click Manage Paths to customize the multipath settings. Select Round Robin
(Figure 7-75) and click Change.

Figure 7-75 Select a data store multipath setting


The storage disk is available and ready to use for your VMware ESX server using Fibre
Channel attachment.
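
The multipath policy can also be set from the ESX command line instead of the vSphere client.
The following is a minimal sketch for ESX 4.x only; the naa identifier is a placeholder for the
volume UID that the SVC reports for the mapped volume, and the esxcli namespace changed in
later ESXi releases, so verify the exact syntax for your version:

# esxcli nmp device list
# esxcli nmp device setpolicy --device naa.<volume_UID> --psp VMW_PSP_RR

Listing the devices again afterward shows the path selection policy that is now in effect for the
SVC volume.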

7.9.4 VMware ESX iSCSI attachment


To do a VMware ESX iSCSI attachment, complete the following steps:
1. Right-click your VMware ESX iSCSI host in the Hosts view and select Properties.
Click the Mapped Volumes tab (Figure 7-76).

Figure 7-76 iSCSI ESX host properties

The Host Details window shows that one volume is connected to the ESX iSCSI host
using SCSI ID 0. The UID of the volume is also displayed.
2. Connect to your VMware ESX Server using the vSphere Client, click the Configuration
tab (Figure 7-77), and select Storage Adapters.

Figure 7-77 vSphere Client: Storage

3. Select iSCSI Software Adapter and click Properties. The iSCSI initiator properties
window opens. Select the Dynamic Discovery tab (Figure 7-78 on page 416) and click
Add.


Figure 7-78 iSCSI Initiator Properties window

4. To add a target, enter the target IP address (Figure 7-79). The target IP address is the IP
address of a node canister in the I/O group from which you are mapping the iSCSI volume.
Keep the IP port number at the default value of 3260, and click OK. The connection
between the initiator and target is established.

Figure 7-79 Enter a target IP address

Repeat this step for each IBM SVC iSCSI port that you want to use for iSCSI connections.


iSCSI IP addresses: The iSCSI IP addresses are different from the cluster and
canister IP addresses. They have been configured in Chapter 4, “Initial configuration”
on page 133.

5. After you have added all the ports required, close the iSCSI Initiator properties by clicking
Close (Figure 7-78 on page 416).
You are prompted to rescan for new storage devices. Confirm the scan by clicking Yes
(Figure 7-80).

Figure 7-80 Confirm the rescan

6. Go to the storage view shown in Figure 7-81 and click Add Storage.

Figure 7-81 Click the Add Storage menu


7. The Add Storage wizard opens (Figure 7-82). Select Disk/LUN and click Next.

Figure 7-82 Select Disk/LUN menu

8. The new iSCSI logical unit number (LUN) displays. Select it and click Next (Figure 7-83 on
page 419).


Figure 7-83 Select iSCSI LUN menu

9. Select the file system version (Figure 7-84).

Figure 7-84 Select VMFS version


10.Review the disk layout and click Next (Figure 7-85).

Figure 7-85 Current Disk Layout

11.Enter a name for the data store and click Next (Figure 7-86).

Figure 7-86 Enter a data store name

12.Select the maximum file system size and click Next (Figure 7-87).

Figure 7-87 Select maximum file system size


13.Review your selections and click Finish (Figure 7-88).

Figure 7-88 Finish the wizard

The new iSCSI LUN is now in the process of being added. After the task completes, the
new data store is listed in the storage view (Figure 7-89).

Figure 7-89 New data store available


14.Highlight the new data store and click Properties to open and review the data store
settings (Figure 7-90).

Figure 7-90 iSCSI data store properties


15.Click Manage Paths, select Round Robin as the multipath policy (Figure 7-91), and then
click Change.

Figure 7-91 Change the multipath policy

Click Close twice to return to the storage view, and now the storage disk is available and
ready to use for your VMware ESX server using an iSCSI attachment.


Chapter 8. Advanced features for storage efficiency
In this chapter, we introduce the basic concepts of dynamic data relocation and storage
optimization features. IBM Spectrum Virtualize running inside IBM System Storage SAN
Volume Controller (SVC) offers the following functions for storage efficiency:
򐂰 Easy Tier
򐂰 Thin provisioning
򐂰 Real-time Compression (RtC)

We provide a basic technical overview and the benefits of each feature. For more information
about planning and configuration, see the following IBM Redbooks publications:
򐂰 Easy Tier:
– Implementing IBM Easy Tier with IBM Real-time Compression, TIPS1072
– IBM System Storage SAN Volume Controller Best Practices and Performance
Guidelines, SG24-7521
– IBM DS8000 Easy Tier, REDP-4667 (This concept is similar to SVC Easy Tier.)
򐂰 Thin provisioning:
– Thin Provisioning in an IBM SAN or IP SAN Enterprise Environment, REDP-4265
– DS8000 Thin Provisioning, REDP-4554 (similar concept to IBM SAN Volume Controller
thin provisioning)
򐂰 RtC:
– Real-time Compression in SAN Volume Controller and Storwize V7000, REDP-4859
– Implementing IBM Real-time Compression in SAN Volume Controller and IBM
Storwize V7000, TIPS1083
– Implementing IBM Easy Tier with IBM Real-time Compression, TIPS1072

This chapter includes the following topics:


򐂰 Introduction
򐂰 Easy Tier


򐂰 Thin provisioning
򐂰 Real-time Compression Software


8.1 Introduction
In modern and complex application environments, the increasing and often unpredictable
demands for storage capacity and performance lead to issues of planning and optimization of
storage resources.

Consider the following typical storage management issues:


򐂰 Usually when a storage system is implemented, only a portion of the configurable physical
capacity is deployed. When the storage system runs out of the installed capacity and more
capacity is requested, a hardware upgrade is implemented to add physical resources to
the storage system. It is difficult to configure this new physical capacity to maintain an even
spread of data across all of the storage resources. Typically, the new capacity is allocated to fulfill
only new storage requests. The existing storage allocations do not benefit from the new
physical resources. Similarly, the new storage requests do not benefit from the existing
resources; only new resources are used.
򐂰 In a complex production environment, it is not always possible to optimize storage
allocation for performance. The unpredictable rate of storage growth and the fluctuations
in throughput requirements, measured in I/O operations per second (IOPS), often lead to inadequate
performance. Furthermore, the tendency to use even larger volumes to simplify storage
management works against the granularity of storage allocation, and a cost-efficient
storage tiering solution becomes difficult to achieve. With the introduction of high
performing technologies, such as solid-state drives (SSD) or all flash arrays, this challenge
becomes even more important.
򐂰 The move to larger and larger physical disk drive capacities means that previous access
densities that were achieved with low-capacity drives can no longer be sustained.
򐂰 Any business has applications that are more critical than others, and a need exists for
specific application optimization. Therefore, the ability to relocate specific application data
to a faster storage media is needed.
򐂰 Although more servers are purchased with local SSDs attached for better application
response time, the data distribution across these direct-attached SSDs and external
storage arrays must be carefully addressed. An integrated and automated approach is
crucial to achieve performance improvement without compromise to data consistency,
especially in a disaster recovery situation.

All of these issues deal with data placement and relocation capabilities or data volume
reduction. Most of these challenges can be managed by having spare resources available
and by moving data, and by the use of data mobility tools or operating systems features (such
as host level mirroring) to optimize storage configurations. However, all of these corrective
actions are expensive in terms of hardware resources, labor, and service availability.
The ability to relocate data among the physical storage resources dynamically, or to effectively
reduce the amount of stored data, transparently to the attached host systems, is therefore
becoming increasingly important.


8.2 Easy Tier


In today’s storage market, SSDs and flash arrays are emerging as an attractive alternative to
hard disk drives (HDDs). Because of their low response times, high throughput, and
IOPS-energy-efficient characteristics, SSDs and flash arrays have the potential to allow your
storage infrastructure to achieve significant savings in operational costs. However, the current
acquisition cost per GiB for SSDs or flash arrays is higher than for HDDs. SSD and flash array
performance depends greatly on workload characteristics; therefore, they should be used
together with HDDs for optimal performance.

Choosing the correct mix of drives and the correct data placement is critical to achieve
optimal performance at low cost. Maximum value can be derived by placing “hot” data with
high I/O density and low response time requirements on SSDs or flash arrays, and targeting
HDDs for “cooler” data that is accessed more sequentially and at lower rates.

Easy Tier automates the placement of data among different storage tiers and it can be
enabled for internal and external storage. This IBM Spectrum Virtualize feature boosts your
storage infrastructure performance to achieve optimal performance through a software,
server, and storage solution. Additionally, the new, no charge feature called storage pool
balancing, introduced in the V7.3 IBM Spectrum Virtualize software version, automatically
moves extents within the same storage tier, from overloaded to less loaded managed disks
(MDisks). Storage pool balancing ensures that your data is optimally placed among all disks
within storage pools.

8.2.1 Easy Tier concepts


IBM Spectrum Virtualize implements Easy Tier enterprise storage functions, which were
originally available on IBM DS8000 and IBM XIV enterprise class storage systems. It enables
automated subvolume data placement throughout different or within the same storage tiers to
intelligently align the system with current workload requirements and to optimize the usage of
SSDs or flash arrays. This functionality includes the ability to automatically and
non-disruptively relocate data (at the extent level) from one tier to another tier or even within
the same tier, in either direction to achieve the best available storage performance for your
workload in your environment. Easy Tier reduces the I/O latency for hot spots, but it does not
replace the storage cache. Both Easy Tier and the storage cache address a similar access latency
problem, but the two methods weigh their decisions differently in algorithms that are based on
“locality of reference”, recency, and frequency. Because Easy Tier monitors
I/O performance from the device end (after cache), it can pick up the performance issues that
cache cannot solve and complement the overall storage system performance. Figure 8-1
shows placement of the Easy Tier engine within the IBM Spectrum Virtualize software stack.


Figure 8-1 Easy Tier in the IBM Spectrum Virtualize software stack

Traditionally, I/O in a storage environment is monitored at the volume level, and the entire volume
is placed in one appropriate storage tier. Determining the amount of I/O on parts of a volume,
moving those parts to an appropriate storage tier, and reacting to workload changes are too
complex for manual operation. This is where the Easy Tier feature can be used.

Easy Tier is a performance optimization function because it automatically migrates (or moves)
extents that belong to a volume between different storage tiers (Figure 8-2) or the same
storage tier. Because this migration works at the extent level, it is often referred to as
sub-LUN migration. The movement of the extents is online and unnoticed from the host’s
point of view. As a result of extent movement, the volume no longer has all its data in one tier
but rather in two or three tiers. Figure 8-2 shows the basic Easy Tier principle of operation.


Figure 8-2 Easy Tier

You can enable Easy Tier on a volume basis. It monitors the I/O activity and latency of the
extents on all Easy Tier enabled volumes over a 24-hour period. Based on the performance
log, Easy Tier creates an extent migration plan and dynamically moves (promotes) high
activity or hot extents to a higher disk tier within the same storage pool. It also moves
(demotes) extents whose activity dropped off, or cooled, from a higher disk tier MDisk back to
a lower tier MDisk. When Easy Tier runs in a storage pool rebalance mode, it moves extents
from busy MDisks to less busy MDisks of the same type.

8.2.2 SSD arrays and flash MDisks


The SSDs or flash arrays are treated no differently by the IBM SAN Volume Controller than
normal HDDs regarding RAID arrays or MDisks.

The individual SSDs in the storage that is managed by the SVC are combined into an array,
usually in RAID 10 or RAID 5 format. It is unlikely that RAID 6 SSD arrays are used because
of the double parity overhead, with two logical SSDs used for parity only. A LUN is created on
the array and then presented to the SVC as a normal MDisk.


As is the case for HDDs, the SSD RAID array format helps to protect against individual SSD
failures. Depending on your requirements, you can achieve more high availability protection
above the RAID level by using volume mirroring.

The internal storage configuration of flash arrays can differ depending on an array vendor. But
regardless of the methods that are used to configure flash-based storage, the flash system
maps a volume to a host, in this case, to the SVC. From the SVC perspective, a volume that is
presented from flash storage is also seen as a normal managed disk.

Starting with SVC 2145-DH8 nodes and software version 7.3, up to two expansion drawers can
be connected to one SVC I/O group. Each drawer can hold up to 24 drives, and only SSDs are
supported. The SSDs are then gathered together to form RAID arrays in the same way that
RAID arrays are formed in the IBM Storwize systems.

After creation, an SSD RAID array appears as an MDisk with a tier of flash, which differs from
MDisks presented from external storage systems. Because the SVC does not know what kind of
physical disks external MDisks are formed from, the default tier that the SVC assigns to each
external MDisk is enterprise. It is up to the user or administrator to change the tier of those
MDisks to flash, enterprise, or nearline (NL).

To change a tier of an MDisk in the CLI, use the chmdisk command, as in Example 8-1.

Example 8-1 Changing the MDisk tier


IBM_2145:ITSO_SVC2:superuser>lsmdisk -delim " "
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID tier encrypt
0 mdisk0 online unmanaged 100.0GB 0000000000000000 controller0
6005076400820008380000000000000000000000000000000000000000000000 enterprise no
1 mdisk1 online unmanaged 100.0GB 0000000000000001 controller0
6005076400820008380000000000000100000000000000000000000000000000 enterprise no

IBM_2145:ITSO_SVC2:superuser>chmdisk -tier nearline mdisk0

IBM_2145:ITSO_SVC2:superuser>lsmdisk -delim " "


id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID tier encrypt
0 mdisk0 online unmanaged 100.0GB 0000000000000000 controller0
6005076400820008380000000000000000000000000000000000000000000000 nearline no
1 mdisk1 online unmanaged 100.0GB 0000000000000001 controller0
6005076400820008380000000000000100000000000000000000000000000000 enterprise no

It is also possible to change the MDisk tier from the GUI, but this applies only to external
MDisks. To change the tier, go to Pools → External Storage and click the plus sign (“+”) next to
the controller that owns the MDisks whose tier you want to change. Then right-click the desired
MDisk and select Modify Tier (Figure 8-3).


Figure 8-3 Change the MDisk tier

The new window opens with options to change the tier (Figure 8-4).

Figure 8-4 Select desired MDisk tier

This change happens online and has no impact on hosts or availability of the volumes.

If you do not see the Tier column, right-click the blue title row and select the Tier check box, as
shown in Figure 8-5.

Figure 8-5 Customizing title row to show tier column

8.2.3 Disk tiers


The MDisks (LUNs) that are presented to the SVC cluster are likely to have different
performance attributes because of the type of disk or RAID array on which they reside. The
MDisks can be created on 15K revolutions per minute (RPM) Fibre Channel (FC) or


serial-attached SCSI (SAS) disks, nearline SAS or Serial Advanced Technology Attachment
(SATA), or even SSDs or flash storage systems.

The SVC does not automatically detect the type of MDisks, except for MDisks that are formed
of SSD drives from attached expansion drawers. Instead, all external MDisks are initially put
into the enterprise tier, by default. Then, the administrator must manually change the tier of
MDisks and add them to storage pools. Depending on what type of disks are gathered to form
a storage pool, we distinguish two types of storage pools: single-tier and multi-tier.

Single-tier storage pools


Figure 8-6 shows a scenario in which a single storage pool is populated with MDisks that are
presented by an external storage controller. In this solution, the striped volumes can be
measured by Easy Tier and can benefit from storage pool balancing mode, which moves
extents between MDisks of the same type.

Figure 8-6 Single tier storage pool with striped volume

MDisks that are used in a single-tier storage pool should have the same hardware
characteristics, for example, the same RAID type, RAID array size, disk type, disk revolutions
per minute (RPM), and controller performance characteristics.

Multitier storage pools


A multitier storage pool has a mix of MDisks with more than one type of disk tier attribute, for
example, a storage pool that contains a mix of enterprise and SSD MDisks or enterprise and
NL-SAS MDisks. Figure 8-7 shows a scenario in which a storage pool is populated with three
different MDisk types: one belonging to an SSD array, one belonging to a SAS HDD array, and
one belonging to an NL-SAS HDD array. Although this example shows RAID 5 arrays, other
RAID types can be used as well.


Figure 8-7 Multi tier storage pool with striped volumes

Adding SSDs to the pool also means that more space is now available for new volumes or
volume expansion.

Note: Image mode and sequential volumes are not candidates for Easy Tier automatic
data placement because all extents for those types of volumes must reside on one, specific
MDisk and cannot be moved.

The Easy Tier setting can be changed on a storage pool and volume level. Depending on the
Easy Tier setting and the number of tiers in the storage pool, Easy Tier services may function
in a different way. Table 8-1 on page 434 shows possible combinations of Easy Tier settings.

Table 8-1 Easy Tier settings

Storage pool          Number of tiers in     Volume copy           Volume copy
Easy Tier setting     the storage pool       Easy Tier setting     Easy Tier status
Off                   One                    Off                   inactive (see note 2)
Off                   One                    On                    inactive (see note 2)
Off                   Two or three           Off                   inactive (see note 2)
Off                   Two or three           On                    inactive (see note 2)
Measure               One                    Off                   measured (see note 3)
Measure               One                    On                    measured (see note 3)
Measure               Two or three           Off                   measured (see note 3)
Measure               Two or three           On                    measured (see note 3)
Auto                  One                    Off                   measured (see note 3)
Auto                  One                    On                    balanced (see note 4)
Auto                  Two or three           Off                   measured (see note 3)
Auto                  Two or three           On                    active (see note 5)
On                    One                    Off                   measured (see note 3)
On                    One                    On                    balanced (see note 4)
On                    Two or three           Off                   measured (see note 3)
On                    Two or three           On                    active (see note 5)

Table notes:
1. If the volume copy is in image or sequential mode or is being migrated, the volume copy
Easy Tier status is measured instead of active.
2. When the volume copy status is inactive, no Easy Tier functions are enabled for that
volume copy.
3. When the volume copy status is measured, the Easy Tier function collects usage
statistics for the volume but automatic data placement is not active.
4. When the volume copy status is balanced, the Easy Tier function enables
performance-based pool balancing for that volume copy.
5. When the volume copy status is active, the Easy Tier function operates in automatic
data placement mode for that volume.
6. The default Easy Tier setting for a storage pool is Auto, and the default Easy Tier setting
for a volume copy is On. Therefore, Easy Tier functions, except pool performance
balancing, are disabled for storage pools with a single tier. Automatic data placement
mode is enabled by default for all striped volume copies in a storage pool with two or
more tiers.

Figure 8-8 shows the naming convention and all supported combinations of storage tiering
that are used by Easy Tier.


Figure 8-8 Easy Tier supported storage pools

8.2.4 Easy Tier process


The Easy Tier function includes the following four main processes:
򐂰 I/O Monitoring
This process operates continuously and monitors volumes for host I/O activity. It collects
performance statistics for each extent and derives averages for a rolling 24-hour period of
I/O activity.
Easy Tier makes allowances for large block I/Os; therefore, it considers only I/Os of up
to 64 KiB as migration candidates.
This process is efficient and adds negligible processing overhead to the SVC nodes.
򐂰 Data Placement Advisor
The Data Placement Advisor uses workload statistics to make a cost benefit decision as to
which extents are to be candidates for migration to a higher performance tier.
This process also identifies extents that must be migrated back to a lower tier.
򐂰 Data Migration Planner (DMP)
By using the extents that were previously identified, the DMP builds the extent migration
plans for the storage pool. The DMP builds two plans:
– Automatic Data Relocation (ADR mode) plan to migrate extents across adjacent tiers
– Rebalance (RB mode) plan to migrate extents within the same tier
򐂰 Data Migrator
This process involves the actual movement or migration of the volume’s extents up to, or
down from, the higher disk tier. The extent migration rate is capped so that a maximum of
up to 30 MBps is migrated, which equates to approximately 3 TB per day that is migrated
between disk tiers.


When enabled, Easy Tier performs the following actions between the three tiers presented in
Figure 8-8:
򐂰 Promote
Moves the relevant hot extents to a higher-performing tier.
򐂰 Swap
Exchanges a cold extent in an upper tier with a hot extent in a lower tier.
򐂰 Warm demote:
– Prevents performance overload of a tier by demoting a warm extent to the lower tier.
– Triggered when bandwidth or IOPS exceeds a predefined threshold.
򐂰 Demote or cold demote
Moves the coldest data to a lower HDD tier. Cold demote is supported only between HDD tiers.
򐂰 Expanded cold demote
Demotes appropriate sequential workloads to the lowest tier to better utilize nearline disk
bandwidth.
򐂰 Storage pool balancing:
– Redistributes extents within a tier to balance utilization across MDisks for maximum
performance.
– Moves hot extents from highly utilized MDisks to less utilized MDisks.
– Exchanges extents between highly utilized MDisks and less utilized MDisks.
򐂰 Easy Tier attempts to migrate the most active volume extents up to SSD first.
򐂰 A previous migration plan and any queued extents that are not yet relocated are
abandoned.

Note: Extent migration occurs only between adjacent tiers. In a three-tiered storage pool,
Easy Tier will not move extents from SSDs directly to NL-SAS and vice versa without
moving them first to SAS drives.

Easy Tier extent migration types are presented in Figure 8-9.


Figure 8-9 Easy Tier extent migration types

8.2.5 Easy Tier operating modes


Easy Tier includes the following main operating modes:
򐂰 Off
򐂰 Evaluation or measurement only
򐂰 Automatic data placement or extent migration
򐂰 Storage pool balancing

Easy Tier off mode


With Easy Tier turned off, no statistics are recorded and no cross-tier extent migration occurs.

Evaluation or measurement only mode


Easy Tier evaluation or measurement only mode collects usage statistics for each extent in a
single-tier storage pool where the Easy Tier value is set to On for both the volume and the
pool. This collection is typically done for a single-tier pool that contains only HDDs so that the
benefits of adding SSDs to the pool can be evaluated before any major hardware acquisition.

A dpa_heat.nodeid.yymmdd.hhmmss.data statistics summary file is created in the /dumps


directory of the SVC nodes. This file can be offloaded from the SVC nodes with the PuTTY
Secure Copy Client (PSCP) pscp -load command or by using the GUI, as described in IBM
System Storage SAN Volume Controller Best Practices and Performance Guidelines,
SG24-7521. A web browser is used to view the report that is created by the tool.
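
The following is a minimal sketch of offloading such a statistics file with PSCP from a Windows
workstation; the saved PuTTY session name, cluster IP address, and heat file name are
placeholders, and the exact file name can be listed beforehand with the lsdumps CLI command:

C:\>pscp -load <saved_session> superuser@<cluster_ip>:/dumps/dpa_heat.<node>.<yymmdd>.<hhmmss>.data C:\EasyTier\

The offloaded file is then used as input to the IBM Storage Tier Advisor Tool, which is described
in 8.2.8, “Monitoring tools”.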


Automatic data placement or extent migration mode


In automatic data placement or extent migration operating mode, the storage pool parameter
-easytier on or auto must be set, and the volumes in the pool must have -easytier on. The
storage pool must also contain MDisks with different disk tiers, which makes it a multitier
storage pool.

Dynamic data movement is not apparent to the host server and application users of the data,
other than providing improved performance. Extents are automatically migrated, as explained
in “Implementation rules” on page 439.

The statistic summary file is also created in this mode. This file can be offloaded for input to
the advisor tool. The tool produces a report on the extents that are moved to a higher tier and
a prediction of performance improvement that can be gained if more higher tier disks are
available.

Options: The Easy Tier function can be turned on or off at the storage pool level and at the
volume level.

Storage pool balancing


Although storage pool balancing is associated with Easy Tier, it operates independently of
Easy Tier and does not require an Easy Tier license. This feature assesses the extents that
are written in a pool and balances them automatically across all MDisks within the pool. This
process works along with Easy Tier when multiple classes of disks exist in a single pool. In
such a case, Easy Tier moves extents between the different tiers, and storage pool balancing
moves extents within the same tier to better utilize the MDisks.

The process automatically balances existing data when new MDisks are added into an
existing pool, even if the pool contains only a single type of drive. This does not mean that the
process migrates extents from existing MDisks to achieve an even extent distribution among
all, old and new, MDisks in the storage pool. The Easy Tier rebalance (RB) migration plan within
a tier is based on the performance, and not on the capacity, of the underlying MDisks.

Note: Storage pool balancing can be used to balance extents when mixing different size
disks of the same performance tier. For example, when adding larger capacity drives to a
pool with smaller capacity drives of the same class, storage pool balancing redistributes
the extents to take advantage of the additional performance of the new MDisks.

8.2.6 Implementation considerations


Easy Tier is a licensed feature, except for storage pool balancing, which is a no-charge
feature that is enabled by default. Easy Tier comes as part of the IBM Spectrum Virtualize
code. For Easy Tier to migrate extents between different tiers of disks, you must have disk
storage available that offers different tiers, for example, a mix of SSDs and HDDs. If you have
only a single-tier pool, Easy Tier uses storage pool balancing only.

Implementation rules
Remember the following implementation and operational rules when you use the IBM System
Storage Easy Tier function on the SVC:
򐂰 Easy Tier automatic data placement is not supported on image mode or sequential
volumes. I/O monitoring for such volumes is supported, but you cannot migrate extents on
these volumes unless you convert image or sequential volume copies to striped volumes.


򐂰 Automatic data placement and extent I/O activity monitors are supported on each copy of
a mirrored volume. Easy Tier works with each copy independently of the other copy.

Volume mirroring consideration: Volume mirroring can have different workload


characteristics on each copy of the data because reads are normally directed to the
primary copy and writes occur to both copies. Therefore, the number of extents that
Easy Tier migrates between the tiers might differ for each copy.

򐂰 If possible, the SVC creates volumes or expands volumes by using extents from MDisks
from the HDD tier. However, it uses extents from MDisks from the SSD tier, if necessary.

When a volume is migrated out of a storage pool that is managed with Easy Tier, Easy Tier
automatic data placement mode is no longer active on that volume. Automatic data
placement is also turned off while a volume is being migrated, even when it is between
pools that both have Easy Tier automatic data placement enabled. Automatic data placement
for the volume is re-enabled when the migration is complete.

Limitations
When you use Easy Tier on the SVC, keep in mind the following limitations:
򐂰 Removing an MDisk by using the -force parameter
When an MDisk is deleted from a storage pool with the -force parameter, extents in use
are migrated to MDisks in the same tier as the MDisk that is being removed, if possible. If
insufficient extents exist in that tier, extents from the other tier are used.
򐂰 Migrating extents
When Easy Tier automatic data placement is enabled for a volume, you cannot use the
svctask migrateexts CLI command on that volume.
򐂰 Migrating a volume to another storage pool
When the SVC migrates a volume to a new storage pool, Easy Tier automatic data
placement between the two tiers is temporarily suspended. After the volume is migrated to
its new storage pool, Easy Tier automatic data placement between the generic SSD tier
and the generic HDD tier resumes for the moved volume, if appropriate.
When the SVC migrates a volume from one storage pool to another, it attempts to migrate
each extent to an extent in the new storage pool from the same tier as the original extent.
In several cases, such as where a target tier is unavailable, the other tier is used. For
example, the generic SSD tier might be unavailable in the new storage pool.
򐂰 Migrating a volume to an image mode
Easy Tier automatic data placement does not support image mode. When a volume with
active Easy Tier automatic data placement mode is migrated to an image mode, Easy Tier
automatic data placement mode is no longer active on that volume.
򐂰 Image mode and sequential volumes cannot be candidates for automatic data placement;
however, Easy Tier supports evaluation mode for image mode volumes.

8.2.7 Modifying the Easy Tier setting


The Easy Tier setting for storage pools and volumes can be changed only via the command
line. Use the chvdisk command to turn off or turn on Easy Tier on selected volumes. Use the
chmdiskgrp command to change the status of Easy Tier on selected storage pools, as shown
in Example 8-2 on page 441.


Example 8-2 Changing the EasyTier setting


IBM_2145:ITSO SVC DH8:superuser>lsvdisk test
id 1
name test
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name Pool0_Site1
capacity 1.00GB
type striped
formatted yes
formatting no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801FF00840800000000000002
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0
filesystem
mirror_write_priority latency
RC_change no
compressed_copy_count 0
access_IO_group_count 1
last_access_time
parent_mdisk_grp_id 0
parent_mdisk_grp_name Pool0_Site1
owner_type none
owner_id
owner_name
encrypt no
volume_id 1
volume_name test
function

copy_id 0
status online
sync yes
auto_delete no
primary yes
mdisk_grp_id 0
mdisk_grp_name Pool0_Site1
type striped
mdisk_id
mdisk_name


fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier off
easy_tier_status measured
tier ssd
tier_capacity 0.00MB
tier enterprise
tier_capacity 1.00GB
tier nearline
tier_capacity 0.00MB
compressed_copy no
uncompressed_used_capacity 1.00GB
parent_mdisk_grp_id 0
parent_mdisk_grp_name Pool0_Site1
encrypt no

IBM_2145:ITSO SVC DH8:superuser>chvdisk -easytier on test

IBM_2145:ITSO SVC DH8:superuser>lsvdisk test


id 1
name test
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name Pool0_Site1
capacity 1.00GB
type striped
formatted yes
formatting no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801FF00840800000000000002
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0
filesystem
mirror_write_priority latency


RC_change no
compressed_copy_count 0
access_IO_group_count 1
last_access_time
parent_mdisk_grp_id 0
parent_mdisk_grp_name Pool0_Site1
owner_type none
owner_id
owner_name
encrypt no
volume_id 1
volume_name test
function

copy_id 0
status online
sync yes
auto_delete no
primary yes
mdisk_grp_id 0
mdisk_grp_name Pool0_Site1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status balanced
tier ssd
tier_capacity 0.00MB
tier enterprise
tier_capacity 1.00GB
tier nearline
tier_capacity 0.00MB
compressed_copy no
uncompressed_used_capacity 1.00GB
parent_mdisk_grp_id 0
parent_mdisk_grp_name Pool0_Site1
encrypt no

IBM_2145:ITSO SVC DH8:superuser>lsmdiskgrp Pool0_Site1


id 0
name Pool0_Site1
status online
mdisk_count 4
vdisk_count 12
capacity 1.95TB
extent_size 1024


free_capacity 1.93TB
virtual_capacity 22.00GB
used_capacity 22.00GB
real_capacity 22.00GB
overallocation 1
warning 80
easy_tier auto
easy_tier_status balanced
tier ssd
tier_mdisk_count 0
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier enterprise
tier_mdisk_count 4
tier_capacity 1.95TB
tier_free_capacity 1.93TB
tier nearline
tier_mdisk_count 0
tier_capacity 0.00MB
tier_free_capacity 0.00MB
compression_active no
compression_virtual_capacity 0.00MB
compression_compressed_capacity 0.00MB
compression_uncompressed_capacity 0.00MB
site_id 1
site_name ITSO_DC1
parent_mdisk_grp_id 0
parent_mdisk_grp_name Pool0_Site1
child_mdisk_grp_count 0
child_mdisk_grp_capacity 0.00MB
type parent
encrypt no
owner_type none
owner_id
owner_name

IBM_2145:ITSO SVC DH8:superuser>chmdiskgrp -easytier off Pool0_Site1

IBM_2145:ITSO SVC DH8:superuser>lsmdiskgrp Pool0_Site1


id 0
name Pool0_Site1
status online
mdisk_count 4
vdisk_count 12
capacity 1.95TB
extent_size 1024
free_capacity 1.93TB
virtual_capacity 22.00GB
used_capacity 22.00GB
real_capacity 22.00GB
overallocation 1
warning 80
easy_tier off
easy_tier_status inactive
tier ssd


tier_mdisk_count 0
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier enterprise
tier_mdisk_count 4
tier_capacity 1.95TB
tier_free_capacity 1.93TB
tier nearline
tier_mdisk_count 0
tier_capacity 0.00MB
tier_free_capacity 0.00MB
compression_active no
compression_virtual_capacity 0.00MB
compression_compressed_capacity 0.00MB
compression_uncompressed_capacity 0.00MB
site_id 1
site_name ITSO_DC1
parent_mdisk_grp_id 0
parent_mdisk_grp_name Pool0_Site1
child_mdisk_grp_count 0
child_mdisk_grp_capacity 0.00MB
type parent
encrypt no
owner_type none
owner_id
owner_name

Tuning Easy Tier


It is also possible to change more advanced parameters of Easy Tier. Those should be used
with caution because changing the default values can impact system performance.

The first setting we discuss is called Easy Tier acceleration. This is a system-wide setting and
is disabled by default. Turning this setting on makes Easy Tier move extents up to four times
faster than the default setting. In accelerate mode Easy Tier can move up to 48 GiB per 5
minutes, while in normal mode it moves up to 12 GiB. Enabling Easy Tier acceleration is
advised only during periods of low system activity. The two most probable use cases for
acceleration are:
򐂰 When adding new capacity to the pool, accelerating Easy Tier can quickly spread existing
volumes onto the new MDisks.
򐂰 Migrating the volumes between the storage pools when target storage pool has more tiers
than the source storage pool so Easy Tier can quickly promote or demote extents in the
target pool.

This setting can be changed online, without any impact on host or data availability. To turn
Easy Tier acceleration mode on or off, use the chsystem command, as shown in Example 8-3.

Example 8-3 chsystem


IBM_2145:ITSO SVC DH8:superuser>lssystem
id 000002007FC02102
name ITSO SVC DH8
location local
partnership
total_mdisk_capacity 11.7TB


space_in_mdisk_grps 3.9TB
space_allocated_to_vdisks 522.00GB
total_free_space 11.2TB
total_vdiskcopy_capacity 522.00GB
total_used_capacity 522.00GB
total_overallocation 4
total_vdisk_capacity 522.00GB
total_allocated_extent_capacity 525.00GB
statistics_status on
statistics_frequency 15
cluster_locale en_US
time_zone 520 US/Pacific
code_level 7.6.0.0 (build 121.17.1510192058000)
console_IP 10.18.228.140:443
id_alias 000002007FC02102
gm_link_tolerance 300
gm_inter_cluster_delay_simulation 0
gm_intra_cluster_delay_simulation 0
gm_max_host_delay 5
email_reply [email protected]
email_contact no
email_contact_primary 1234567
email_contact_alternate
email_contact_location ff
email_contact2
email_contact2_primary
email_contact2_alternate
email_state stopped
inventory_mail_interval 7
cluster_ntp_IP_address
cluster_isns_IP_address
iscsi_auth_method none
iscsi_chap_secret 1010
auth_service_configured no
auth_service_enabled no
auth_service_url
auth_service_user_name
auth_service_pwd_set no
auth_service_cert_set no
auth_service_type tip
relationship_bandwidth_limit 25
tier ssd
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier enterprise
tier_capacity 3.90TB
tier_free_capacity 3.39TB
tier nearline
tier_capacity 0.00MB
tier_free_capacity 0.00MB
easy_tier_acceleration off
has_nas_key no
layer replication
rc_buffer_size 48
compression_active no


compression_virtual_capacity 0.00MB
compression_compressed_capacity 0.00MB
compression_uncompressed_capacity 0.00MB
cache_prefetch on
email_organization ff
email_machine_address ff
email_machine_city ff
email_machine_state XX
email_machine_zip 12345
email_machine_country US
total_drive_raw_capacity 0
compression_destage_mode off
local_fc_port_mask
1111111111111111111111111111111111111111111111111111111111111111
partner_fc_port_mask
1111111111111111111111111111111111111111111111111111111111111111
high_temp_mode off
topology standard
topology_status
rc_auth_method chap
vdisk_protection_time 15
vdisk_protection_enabled no
product_name IBM SAN Volume Controller
odx off
max_replication_delay 0

IBM_2145:ITSO SVC DH8:superuser>chsystem -easytieracceleration on

IBM_2145:ITSO SVC DH8:superuser>lssystem
id 000002007FC02102
name ITSO SVC DH8
location local
partnership
total_mdisk_capacity 11.7TB
space_in_mdisk_grps 3.9TB
space_allocated_to_vdisks 522.00GB
total_free_space 11.2TB
total_vdiskcopy_capacity 522.00GB
total_used_capacity 522.00GB
total_overallocation 4
total_vdisk_capacity 522.00GB
total_allocated_extent_capacity 525.00GB
statistics_status on
statistics_frequency 15
cluster_locale en_US
time_zone 520 US/Pacific
code_level 7.6.0.0 (build 121.17.1510192058000)
console_IP 10.18.228.140:443
id_alias 000002007FC02102
gm_link_tolerance 300
gm_inter_cluster_delay_simulation 0
gm_intra_cluster_delay_simulation 0
gm_max_host_delay 5
email_reply [email protected]
email_contact no
email_contact_primary 1234567


email_contact_alternate
email_contact_location ff
email_contact2
email_contact2_primary
email_contact2_alternate
email_state stopped
inventory_mail_interval 7
cluster_ntp_IP_address
cluster_isns_IP_address
iscsi_auth_method none
iscsi_chap_secret 1010
auth_service_configured no
auth_service_enabled no
auth_service_url
auth_service_user_name
auth_service_pwd_set no
auth_service_cert_set no
auth_service_type tip
relationship_bandwidth_limit 25
tier ssd
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier enterprise
tier_capacity 3.90TB
tier_free_capacity 3.39TB
tier nearline
tier_capacity 0.00MB
tier_free_capacity 0.00MB
easy_tier_acceleration on
has_nas_key no
layer replication
rc_buffer_size 48
compression_active no
compression_virtual_capacity 0.00MB
compression_compressed_capacity 0.00MB
compression_uncompressed_capacity 0.00MB
cache_prefetch on
email_organization ff
email_machine_address ff
email_machine_city ff
email_machine_state XX
email_machine_zip 12345
email_machine_country US
total_drive_raw_capacity 0
compression_destage_mode off
local_fc_port_mask
1111111111111111111111111111111111111111111111111111111111111111
partner_fc_port_mask
1111111111111111111111111111111111111111111111111111111111111111
high_temp_mode off
topology standard
topology_status
rc_auth_method chap
vdisk_protection_time 15
vdisk_protection_enabled no


product_name IBM SAN Volume Controller


odx off
max_replication_delay 0

The second setting is the MDisk Easy Tier load. This setting is applied on a per-MDisk basis
and indicates how much load Easy Tier can put on a particular MDisk. Five different values
can be set for each MDisk: default, low, medium, high, and very high.

The system chooses the default setting based on the storage system from which the MDisk is
presented. Change the default setting to any other value only when you are certain that a
particular MDisk is underutilized and can handle more load, or that the MDisk is overutilized
and the load should be lowered. Change this setting to very high only for SSD and flash
MDisks.

The setting can be changed online, without any impact on the hosts or data availability. To change
this setting, use the chmdisk command, as shown in Example 8-4.

Example 8-4 chmdisk


IBM_2145:ITSO SVC DH8:superuser>lsmdisk mdisk0
id 0
name mdisk0
status online
mode managed
mdisk_grp_id 0
mdisk_grp_name Pool0_Site1
capacity 500.0GB
quorum_index
block_size 512
controller_name controller0
ctrl_type 4
ctrl_WWNN 50050768020000EF
controller_id 0
path_count 4
max_path_count 4
ctrl_LUN_# 0000000000000000
UID 60050768028a8002680000000000000000000000000000000000000000000000
preferred_WWPN 50050768021000F0
active_WWPN many
fast_write_state empty
raid_status
raid_level
redundancy
strip_size
spare_goal
spare_protection_min
balanced
tier enterprise
slow_write_priority
fabric_type fc
site_id 1
site_name ITSO_DC1
easy_tier_load high
encrypt no
distributed no
drive_class_id


drive_count 0
stripe_width 0
rebuild_areas_total
rebuild_areas_available
rebuild_areas_goal

IBM_2145:ITSO SVC DH8:superuser>chmdisk -easytierload medium mdisk0

IBM_2145:ITSO SVC DH8:superuser>lsmdisk mdisk0


id 0
name mdisk0
status online
mode managed
mdisk_grp_id 0
mdisk_grp_name Pool0_Site1
capacity 500.0GB
quorum_index
block_size 512
controller_name controller0
ctrl_type 4
ctrl_WWNN 50050768020000EF
controller_id 0
path_count 4
max_path_count 4
ctrl_LUN_# 0000000000000000
UID 60050768028a8002680000000000000000000000000000000000000000000000
preferred_WWPN 50050768021000F0
active_WWPN many
fast_write_state empty
raid_status
raid_level
redundancy
strip_size
spare_goal
spare_protection_min
balanced
tier nearline
slow_write_priority
fabric_type fc
site_id 1
site_name ITSO_DC1
easy_tier_load medium
encrypt no
distributed no
drive_class_id
drive_count 0
stripe_width 0
rebuild_areas_total
rebuild_areas_available
rebuild_areas_goal


8.2.8 Monitoring tools


The IBM Storage Tier Advisor Tool (STAT) is a Windows console application that analyzes
heat data files produced by Easy Tier. STAT creates a graphical display of the amount of “hot”
data per volume and predicts, per storage pool, how adding flash (SSD) capacity, enterprise
drives, or nearline drives might improve the performance of the system.

Heat data files are produced approximately once a day (that is, every 24 hours) when Easy
Tier is active on one or more storage pools, and each file summarizes the activity per volume since the
prior heat data file was produced. On the SVC, the heat data file is located in the /dumps directory on
the configuration node and is named dpa_heat.node_name.time_stamp.data.

Any existing heat data file is erased after seven days. The file must be offloaded by the user
and STAT must be invoked from a Windows command prompt console with the file specified
as a parameter. The user can also specify the output directory. STAT creates a set of HTML
files and the user can then open the index.html in a browser to view the results.
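
As a hedged illustration only (the cluster IP address, node name, time stamp, and directory names are hypothetical, and the availability of the pscp tool and of a STAT -o output-directory option are assumptions about your environment, not statements from this book), the heat data file might be offloaded and analyzed from a Windows command prompt as follows:

C:\STAT> pscp -unsafe superuser@<cluster_ip>:/dumps/dpa_heat.*.data .
C:\STAT> STAT -o C:\STAT\output dpa_heat.node1.151012.data
C:\STAT> start C:\STAT\output\index.html

The resulting index.html summarizes the hot data per volume, and the CSV files in the Data_files subdirectory can be used with the STAT Charting Utility that is described later in this section.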

Updates to STAT for V7.3 introduced an additional reporting capability. As a
result, when the STAT tool is run on a heat map file, three additional CSV files are created
and placed in the Data_files directory.

IBM STAT can be downloaded from the IBM Support website:


http://www.ibm.com/support/docview.wss?uid=ssg1S4000935

Figure 8-10 shows the CSV files highlighted in the Data_files directory after running the STAT
tool over an SVC heat map.

Figure 8-10 CSV files created by STAT for Easy Tier

In addition to the STAT tool, an extra utility is available for IBM Spectrum Virtualize: the IBM
STAT Charting Utility, a Microsoft Excel file that creates additional graphical reports of the
workload that Easy Tier performs. It takes the output of the three CSV files and turns them into graphs
for simple reporting.


The three new graphs display the following information:


򐂰 Workload Categorization report
New workload visuals help you compare activity across tiers within and across pools to
help determine the optimal drive mix for the current workloads. The output is illustrated in
Figure 8-11.

Figure 8-11 STAT Charting Utility Workload Categorization report

򐂰 Daily Movement report


A new Easy Tier summary report every 24 hours illustrating data migration activity
(5-minute intervals) can help visualize migration types and patterns for current workloads.
The output is illustrated in Figure 8-12.


Figure 8-12 STAT Charting Utility Daily Summary report

򐂰 Workload Skew report


This report shows the skew of all workloads across the system in a graph to help you
visualize and accurately tier configurations when you add capacity or a new system. The
output is illustrated in Figure 8-13.

Figure 8-13 STAT Charting Utility Workload Skew report

The STAT Charting Utility can be downloaded from the IBM Support website:
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS5251


8.2.9 More information


For more information about planning and configuration considerations, best practices, and
monitoring and measurement tools, see IBM System Storage SAN Volume Controller Best
Practices and Performance Guidelines, SG24-7521, and Implementing IBM Easy Tier with
IBM Real-time Compression, TIPS1072.

8.3 Thin provisioning


In a shared storage environment, thin provisioning is a method for optimizing the usage of
available storage. It relies on the allocation of blocks of data on demand versus the traditional
method of allocating all of the blocks up front. This methodology eliminates almost all white
space, which helps avoid the poor usage rates (often as low as 10%) that occur in the
traditional storage allocation method where large pools of storage capacity are allocated to
individual servers but remain unused (not written to).

Thin provisioning presents more storage space to the hosts or servers that are connected to
the storage system than is available on the storage system. The IBM SVC has this capability
for Fibre Channel and iSCSI provisioned volumes.

An example of thin provisioning is when a storage system contains 5000 GiB of usable
storage capacity, but the storage administrator mapped volumes of 500 GiB each to 15 hosts.
In this example, the storage administrator makes 7500 GiB of storage space visible to the
hosts, even though the storage system has only 5000 GiB of usable space, as shown in
Figure 8-14. In this case, all 15 hosts cannot immediately use all 500 GiBs that are
provisioned to them. The storage administrator must monitor the system and add storage, as
needed.

Figure 8-14 Concept of thin provisioning


You can think of thin provisioning as being similar to the way airlines sell more tickets for a
flight than there are physical seats, assuming that some passengers do not appear at
check-in. They do not assign actual seats at the time of sale, which avoids each client having
a claim on a specific seat number. The same concept applies here: thin provisioning is the airline,
the SVC is the plane, and its volumes are the seats. The storage administrator (the airline ticketing system) must
closely monitor the allocation process and set proper thresholds.

8.3.1 Configuring a thin-provisioned volume


Volumes can be configured as thin-provisioned or fully allocated. Thin-provisioned volumes
are created with real and virtual capacities. You can still create volumes by using a striped,
sequential, or image mode virtualization policy, as you can with any other volume.

Real capacity defines how much disk space is allocated to a volume. Virtual capacity is the
capacity of the volume that is reported to other SVC components (such as FlashCopy or
remote copy) and to the hosts. For example, you can create a volume with a real capacity of
only 100 GiB but a virtual capacity of 1 TiB. The actual space that is used by the volume on
the SVC will be 100 GiB but hosts will see a 1 TiB volume.

A directory maps the virtual address space to the real address space. The directory and the
user data share the real capacity.

Thin-provisioned volumes are available in two operating modes: autoexpand and
non-autoexpand. You can switch the mode at any time. If you select the autoexpand feature,
the SVC automatically adds a fixed amount of more real capacity to the thin volume, as
required. Therefore, the autoexpand feature attempts to maintain a fixed amount of unused
real capacity for the volume. This amount is known as the contingency capacity. The
contingency capacity is initially set to the real capacity that is assigned when the volume is
created. If the user modifies the real capacity, the contingency capacity is reset to be the
difference between the used capacity and real capacity.

A volume that is created without the autoexpand feature, and therefore has a zero
contingency capacity, goes offline when the real capacity is used and the volume must
expand.

Warning threshold: Enable the warning threshold, by using email or a Simple Network
Management Protocol (SNMP) trap, when you work with thin-provisioned volumes. You
can enable the warning threshold on the volume, and on the storage pool side, especially
when you do not use the autoexpand mode. Otherwise, the thin volume goes offline if it
runs out of space.

Autoexpand mode does not cause real capacity to grow much beyond the virtual capacity.
The real capacity can be manually expanded to more than the maximum that is required by
the current virtual capacity, and the contingency capacity is recalculated.

A thin-provisioned volume can be converted non-disruptively to a fully allocated volume, or
vice versa, by using the volume mirroring function. For example, you can add a
thin-provisioned copy to a fully allocated primary volume and then remove the fully allocated
copy from the volume after they are synchronized.

The fully allocated to thin-provisioned migration procedure uses a zero-detection algorithm so
that grains that contain all zeros do not cause any real capacity to be used. Usually, if the
SVC is to detect zeros on the volume, you must use software on the host side to write zeros to
all unused space on the disk or file system.
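
The same conversion can also be performed from the CLI by using volume mirroring. The following is a minimal sketch, assuming a hypothetical volume named FULL_VOL01 and pool Pool0_Site1; the -rsize and -warning values are examples only. The fully allocated copy (copy 0 in this sketch) is removed only after lsvdisksyncprogress reports that the new copy is synchronized:

IBM_2145:ITSO SVC DH8:superuser>addvdiskcopy -mdiskgrp Pool0_Site1 -rsize 2% -autoexpand -warning 80% FULL_VOL01
IBM_2145:ITSO SVC DH8:superuser>lsvdisksyncprogress FULL_VOL01
IBM_2145:ITSO SVC DH8:superuser>rmvdiskcopy -copy 0 FULL_VOL01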


Tip: Consider the use of thin-provisioned volumes as targets in FlashCopy mappings.

Space allocation
When a thin-provisioned volume is created, a small amount of the real capacity is used for
initial metadata. Write I/Os to the grains of the thin volume (that were not previously written to)
cause grains of the real capacity to be used to store metadata and user data. Write I/Os to the
grains (that were previously written to) update the grain where data was previously written.

Grain definition: The grain is defined when the volume is created and can be 32 KiB,
64 KiB, 128 KiB, or 256 KiB.

Smaller granularities can save more space, but they have larger directories. When you use
thin-provisioning with FlashCopy, specify the same grain size for the thin-provisioned volume
and FlashCopy.

To create a thin-provisioned volume from the dynamic menu, go to Volumes → Volumes →
Create Volumes and select Advanced, as shown in Figure 8-15. Enter the required capacity
and volume name.

Figure 8-15 Creating thin provisioned volume

Select the volume size and name, and choose Thin-provisioned from the Capacity savings drop-down
menu. If you want to create more volumes, click the “+” sign next to the volume name. Click the
Volume Location tab and select the storage pool for the volume. If you have more than one
I/O group, you can also select the caching I/O group and preferred node here, as shown in
Figure 8-16.


Figure 8-16 Choosing the thin provisioned volume location

Next, go to the Thin Provisioning tab and enter the thin-provisioning parameters, such as real
capacity, warning threshold, and whether autoexpand is enabled or disabled, as shown in Figure 8-17.

Figure 8-17 Choosing the thin provisioned parameters

Check your selections in the General tab and click the Create button, as shown in Figure 8-18.


Figure 8-18 Creating thin provisioned volume summary
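
The same thin-provisioned volume can also be created from the CLI with the mkvdisk command. The following is a minimal sketch; the pool, volume name, size, returned volume ID, real capacity (-rsize), grain size, and warning threshold are hypothetical example values only:

IBM_2145:ITSO SVC DH8:superuser>mkvdisk -mdiskgrp Pool0_Site1 -iogrp 0 -name THIN_VOL01 -size 100 -unit gb -rsize 10% -autoexpand -grainsize 256 -warning 80%
Virtual Disk, id [10], successfully created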

8.3.2 Performance considerations


Thin-provisioned volumes save capacity only if the host server does not write to whole
volumes. Whether the thin-provisioned volume works well partly depends on how the file
system allocates the space.

Some file systems (for example, New Technology File System [NTFS]) write to the whole
volume before overwriting deleted files. Other file systems reuse space in preference to
allocating new space.

File system problems can be moderated by tools, such as “defrag,” or by managing storage by
using host Logical Volume Managers (LVMs).

The thin-provisioned volume also depends on how applications use the file system. For
example, some applications delete log files only when the file system is nearly full.

Important: Do not use defrag on thin-provisioned volumes. The defragmentation process
can write data to different areas of a volume, which can cause a thin-provisioned volume to
grow up to its virtual size.

There is no single performance recommendation for thin-provisioned volumes. As explained previously, the
performance of thin-provisioned volumes depends on what is used in the particular
environment. For the best performance, use fully allocated volumes instead of
thin-provisioned volumes.

Note: Starting with V7.3 the cache subsystem architecture was redesigned. Now,
thin-provisioned volumes can benefit from lower cache functions (such as coalescing
writes or prefetching), which greatly improve performance.


8.3.3 Limitations of virtual capacity


A few factors (extent and grain size) limit the virtual capacity of thin-provisioned volumes
beyond the factors that limit the capacity of regular volumes. Table 8-2 shows the maximum
thin provisioned volume virtual capacities for an extent size.

Table 8-2 Maximum thin-provisioned volume virtual capacities for an extent size
Extent size in MB    Maximum volume real capacity in GB    Maximum thin-provisioned volume virtual capacity in GB
16                   2,048                                 2,000
32                   4,096                                 4,000
64                   8,192                                 8,000
128                  16,384                                16,000
256                  32,768                                32,000
512                  65,536                                65,000
1024                 131,072                               130,000
2048                 262,144                               260,000
4096                 262,144                               262,144
8192                 262,144                               262,144

Table 8-3 on page 459 shows the maximum thin-provisioned volume virtual capacities for a
grain size.

Table 8-3 Maximum thin-provisioned volume virtual capacities for a grain size
Grain size in KiB    Maximum thin-provisioned volume virtual capacity in GiB
32                   260,000
64                   520,000
128                  1,040,000
256                  2,080,000

For more information and detailed performance considerations for configuring thin
provisioning, see IBM System Storage SAN Volume Controller Best Practices and
Performance Guidelines, SG24-7521.

8.4 Real-time Compression Software


The IBM Real-time Compression (RtC) Software that is embedded in IBM Spectrum
Virtualize addresses the requirements of primary storage data reduction, including
performance. It does so by using a purpose-built technology called the
Random Access Compression Engine (RACE). It offers the following benefits:


򐂰 Compression for active primary data


IBM RtC can be used with active primary data. Therefore, it supports workloads that are
not candidates for compression in other solutions. The solution supports online
compression of existing data. Storage administrators can regain free disk space in an
existing storage system without requiring administrators and users to clean up or archive
data. This configuration significantly enhances the value of existing storage assets, and
the benefits to the business are immediate. The capital expense of upgrading or
expanding the storage system is delayed.
򐂰 Compression for replicated or mirrored data
Remote volume copies can be compressed in addition to the volumes at the primary
storage tier. This process reduces storage requirements in Metro Mirror and Global Mirror
destination volumes, as well.
򐂰 No changes to the existing environment required
IBM RtC is part of the storage system. It was designed with the goal of transparency so
that it can be implemented without changes to applications, hosts, networks, fabrics, or
external storage systems. The solution is not apparent to hosts, so users and applications
continue to work as-is. Compression occurs within the SVC system.
򐂰 Overall savings in operational expenses
More data is stored in a rack space, so fewer storage expansion enclosures are required
to store a data set. This reduced rack space has the following benefits:
– Reduced power and cooling requirements. More data is stored in a system, which
requires less power and cooling per gigabyte or used capacity.
– Reduced software licensing for more functions in the system. More data that is stored
per enclosure reduces the overall spending on licensing.

Tip: Implementing compression in IBM Spectrum Virtualize provides the same benefits
to internal SSDs and externally virtualized storage systems.

򐂰 Disk space savings are immediate


The space reduction occurs when the host writes the data. This process is unlike other
compression solutions in which some or all of the reduction is realized only after a
post-process compression batch job is run.

8.4.1 Common use cases


This section addresses the most common use cases for implementing compression:
򐂰 General-purpose volumes
򐂰 Databases
򐂰 Virtualized infrastructures
򐂰 Log server datastores

General-purpose volumes
Most general-purpose volumes are used for highly compressible data types, such as home
directories, CAD/CAM, oil and gas geoseismic data, and log data. Storing such types of data
in compressed volumes provides immediate capacity reduction to the overall consumed
space. More space can be provided to users without any change to the environment.


Many file types can be stored in general-purpose servers. However, for practical information,
the estimated compression ratios are based on actual field experience. Expected
compression ratios are 50% - 60%.

File systems that contain audio, video files, and compressed files are not good candidates for
compression. The overall capacity savings on these file types are minimal.

Databases
Database information is stored in table space files. High compression ratios are common in
database volumes. Examples of databases that can greatly benefit from RtC are IBM DB2®,
Oracle, and Microsoft SQL Server. Expected compression ratios are 50% - 80%.

Important: Certain databases offer optional built-in compression. Generally, do not
compress already compressed database files.

Virtualized infrastructures
The proliferation of open systems virtualization in the market has increased the use of storage
space, with more virtual server images and backups kept online. The use of compression
reduces the storage requirements at the source.

Examples of virtualization solutions that can greatly benefit from RtC are VMware, Microsoft
Hyper-V, and kernel-based virtual machine (KVM). Expected compression ratios are 45% -
75%.

Tip: Virtual machines (VMs) with file systems that contain compressed files are not good
compression candidates.

Log server datastores


Logs are a critical part of any IT department in any organization. Log aggregation or syslog
servers are a central point for administrators, and immediate access and a smooth work
process are necessary. Log server datastores are very good candidates for Real-time
Compression. Expected compression ratios are up to 90%.

8.4.2 Real-time Compression concepts


The Random Access Compression Engine (RACE) technology is based on over 50 patents
that are not primarily about compression. Instead, they define how to make industry standard
Lempel-Ziv (LZ) compression of primary storage operate in real time and allow random
access. The primary intellectual property behind this technology is the RACE component.

At a high level, the IBM RACE component compresses data that is written into the storage
system dynamically. This compression occurs transparently, so Fibre Channel and iSCSI
connected hosts are not aware of the compression. RACE is an inline compression
technology, which means that each host write is compressed as it passes through IBM
Spectrum Virtualize to the disks. This technology has a clear benefit over other compression
technologies that are post-processing based. These technologies do not provide immediate
capacity savings; therefore, they are not a good fit for primary storage workloads, such as
databases and active data set applications.


RACE is based on the Lempel-Ziv lossless data compression algorithm and operates in a
real-time method. When a host sends a write request, the request is acknowledged by the
write cache of the system, and then staged to the storage pool. As part of its staging, the
write request passes through the compression engine and is then stored in compressed
format onto the storage pool. Therefore, writes are acknowledged immediately after they are
received by the write cache with compression occurring as part of the staging to internal or
external physical storage.

Capacity is saved when the data is written by the host because the host writes are smaller
when they are written to the storage pool.

IBM RtC is a self-tuning solution, which is similar to the SVC system. It is adapting to the
workload that runs on the system at any particular moment.

8.4.3 Random Access Compression Engine


To understand why RACE is unique, you need to review the traditional compression
techniques. This description is not about the compression algorithm itself, that is, how the
data structure is reduced in size mathematically. Rather, the description is about how the data
is laid out within the resulting compressed output.

Compression utilities
Compression is probably most known to users because of the widespread use of
compression utilities, such as the zip and gzip utilities. At a high level, these utilities take a file
as their input, and parse the data by using a sliding window technique. Repetitions of data are
detected within the sliding window history, most often 32 KiB. Repetitions outside of the
window cannot be referenced. Therefore, the file cannot be reduced in size unless data is
repeated when the window “slides” to the next 32 KiB slot.

Figure 8-19 shows compression that uses a sliding window, where the first two repetitions of
the string “ABCD” fall within the same compression window, and can therefore be
compressed by using the same dictionary. The third repetition of the string falls outside of this
window and therefore cannot be compressed by using the same compression dictionary as
the first two repetitions, reducing the overall achieved compression ratio (Figure 8-19).

Figure 8-19 Compression that uses a sliding window


Traditional data compression in storage systems


The traditional approach that is taken to implement data compression in storage systems is
an extension of how compression works in the compression utilities previously mentioned.
Similar to compression utilities, the incoming data is broken into fixed chunks, and then each
chunk is compressed and extracted independently.

However, drawbacks exist to this approach. An update to a chunk requires a read of the
chunk followed by a recompression of the chunk to include the update. The larger the chunk
size chosen, the heavier the I/O penalty to recompress the chunk. If a small chunk size is
chosen, the compression ratio is reduced because the repetition detection potential is
reduced.

Figure 8-20 shows an example of how the data is broken into fixed size chunks (in the
upper-left corner of the figure). It also shows how each chunk gets compressed
independently into variable length compressed chunks (in the upper-right side of the figure).
The resulting compressed chunks are stored sequentially in the compressed output.

Although this approach is an evolution from compression utilities, it is limited to low
performance use cases, mainly because this approach does not provide real random access
to the data (Figure 8-20).

Figure 8-20 Traditional data compression in storage systems

Random Access Compression Engine


The IBM patented Random Access Compression Engine implements an inverted approach
when compared to traditional approaches to compression. RACE uses variable-size chunks
for the input, and produces fixed-size chunks for the output.

This method enables an efficient and consistent method to index the compressed data
because the data is stored in fixed-size containers.

Figure 8-21 shows Random Access Compression.


Figure 8-21 Random Access Compression

Location-based compression
Both compression utilities and traditional storage systems compression compress data by
finding repetitions of bytes within the chunk that is being compressed. The compression ratio
of this chunk depends on how many repetitions can be detected within the chunk. The
number of repetitions is affected by how much the bytes stored in the chunk are related to
each other. The relationship between bytes is driven by the format of the object. For example,
an office document might contain textual information, and an embedded drawing, such as this
page. Because the chunking of the file is arbitrary, it has no notion of how the data is laid out
within the document. Therefore, a compressed chunk can be a mixture of the textual
information and part of the drawing. This process yields a lower compression ratio because
the different data types mixed together cause a suboptimal dictionary of repetitions. That is,
fewer repetitions can be detected because a repetition of bytes in a text object is unlikely to be
found in a drawing.

This traditional approach to data compression is also called location-based compression. The
data repetition detection is based on the location of data within the same chunk.

This challenge was also addressed with the predecide mechanism that was introduced in
version 7.1.

Predecide mechanism
Certain data chunks have a higher compression ratio than others. Compressing some of the
chunks saves little space but still requires resources, such as CPU and memory. To avoid
spending resources on uncompressible data, and to provide the ability to use a different,
more effective (in this particular case) compression algorithm, IBM invented a predecide
mechanism that was first introduced in version 7.1.

The chunks that are below a certain compression ratio are skipped by the compression
engine, which saves CPU time and memory processing. Chunks that are not compressed
with the main compression algorithm, but that can still be compressed well with another
algorithm, are marked and processed accordingly. The result can vary because predecide
does not check the entire block, only a sample of it.


Figure 8-22 shows how the detection mechanism works.

Figure 8-22 Detection mechanism

Temporal compression
RACE offers a technology leap, which is called temporal compression, beyond location-based
compression.

When host writes arrive to RACE, they are compressed and fill fixed size chunks that are also
called compressed blocks. Multiple compressed writes can be aggregated into a single
compressed block. A dictionary of the detected repetitions is stored within the compressed
block. When applications write new data or update existing data, the data is typically sent
from the host to the storage system as a series of writes. Because these writes are likely to
originate from the same application and be from the same data type, more repetitions are
usually detected by the compression algorithm.

This type of data compression is called temporal compression because the data repetition
detection is based on the time that the data was written into the same compressed block.
Temporal compression adds the time dimension that is not available to other compression
algorithms. Temporal compression offers a higher compression ratio because the
compressed data in a block represents a more homogeneous set of input data.

Figure 8-23 shows how three writes sent one after the other by a host end up in different
chunks. They get compressed in different chunks because their locations in the volume are not
adjacent. This approach yields a lower compression ratio because the same data must be
compressed by using three separate dictionaries.


Figure 8-23 Location-based compression

When the same three writes, as shown on Figure 8-24, are sent through RACE, the writes are
compressed together by using a single dictionary. This approach yields a higher compression
ratio than location-based compression.

Figure 8-24 Temporal compression

8.4.4 Dual RACE component


In SVC V7.4, the compression code was enhanced by the addition of a second RACE
component per SVC node. This feature takes advantage of the multi-core controller architecture
and uses the compression accelerator cards more effectively. The second RACE instance
works in parallel with the first instance, as shown in
Figure 8-25.


Figure 8-25 Dual RACE architecture

With the dual RACE enhancement, compression performance can be boosted up to two times
for compressed workloads when compared to previous versions.

To take advantage of dual RACE, several software and hardware requirements must be met:
򐂰 The software must be at level V7.4.
򐂰 Only 2145-DH8 nodes are supported.
򐂰 A second eight-core CPU must be installed per SVC node.
򐂰 An additional 32 GB of memory must be installed per SVC node.
򐂰 At least one Coleto Creek acceleration card must be installed per SVC node. The second
acceleration card is not required.

Note: We recommend using two acceleration cards for the best performance.

When using the dual RACE feature, the acceleration cards are shared between the RACE
components, which means that they are used simultaneously by both components. The rest
of the resources, such as CPU cores and RAM, are evenly divided between the RACE components.
You do not need to enable dual RACE manually; it is triggered automatically when all minimum
software and hardware requirements are met. If the SVC is compression capable but the minimum
requirements for dual RACE are not met, only one RACE instance is used (as in previous
versions of the code).

8.4.5 Random Access Compression Engine in the IBM Spectrum Virtualize software stack


It is important to understand where the RACE technology is implemented in the IBM
Spectrum Virtualize software stack. This location determines how it applies to the SVC
components.


RACE technology is implemented into the IBM Spectrum Virtualize thin provisioning layer,
and it is an organic part of the stack. The IBM Spectrum Virtualize software stack is shown in
Figure 8-26. Compression is transparently integrated with existing system management
design. All of the IBM Spectrum Virtualize advanced features are supported on compressed
volumes. You can create, delete, migrate, map (assign), and unmap (unassign) a compressed
volume as though it were a fully allocated volume. In addition, you can use RtC with Easy Tier
on the same volumes. This compression method provides nondisruptive conversion between
compressed and decompressed volumes. This conversion provides a uniform user
experience and eliminates the need for special procedures when dealing with compressed
volumes.

Figure 8-26 RACE integration within IBM Spectrum Virtualize stack

8.4.6 Data write flow


When a host sends a write request to the SVC, it reaches the upper-cache layer. The host is
immediately sent an acknowledgment of its I/Os.

When the upper cache layer destages, the I/Os are sent to the thin-provisioning
layer and then to RACE, where the original host write or writes are compressed, if necessary.
The metadata that holds the index of the compressed volume is updated, if needed, and
compressed as well.

8.4.7 Data read flow


When a host sends a read request to the SVC for compressed data, the request is forwarded
directly to the RtC component:
򐂰 If the RtC component contains the requested data, the SVC cache replies to the host with
the requested data without having to read the data from the lower-level cache or disk.


򐂰 If the RtC component does not contain the requested data, the request is forwarded to the
SVC lower-level cache.
򐂰 If the lower-level cache contains the requested data, it is sent up the stack and returned to
the host without accessing the storage.
򐂰 If the lower-level cache does not contain the requested data, it sends a read request to the
storage for the requested data.

8.4.8 Compression of existing data


In addition to compressing data in real time, you can also compress existing data sets
(that is, convert a volume to compressed). To do so, change the capacity savings setting
of the volume by right-clicking a particular volume and selecting Modify Capacity Settings,
as shown in Figure 8-27.

Figure 8-27 Modifying Capacity Settings

In the drop-down menu select Compression as the Capacity Savings option as shown in
Figure 8-28.

Figure 8-28 Selecting Capacity Setting

After the copies are fully synchronized, the original volume copy will be deleted automatically.

As a result, you have compressed data on the existing volume. This process is nondisruptive,
so the data remains online and accessible by applications and users.
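
The same conversion can also be scripted from the CLI with volume mirroring, in the same way as the thin-provisioning conversion shown earlier in this chapter. This is a sketch only; the volume and pool names are hypothetical, and it assumes that the pool has enough free capacity to hold both copies while they coexist:

IBM_2145:ITSO SVC DH8:superuser>addvdiskcopy -mdiskgrp Pool0_Site1 -rsize 2% -autoexpand -compressed VOL_TO_COMPRESS
IBM_2145:ITSO SVC DH8:superuser>lsvdisksyncprogress VOL_TO_COMPRESS
IBM_2145:ITSO SVC DH8:superuser>rmvdiskcopy -copy 0 VOL_TO_COMPRESS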


This capability enables clients to regain space from the storage pool, which can then be
reused for other applications.

With the virtualization of external storage systems, the ability to compress already stored data
significantly enhances and accelerates the benefit to users. This capability allows them to see
a tremendous return on their SVC investment. On the initial purchase of an SVC with RtC,
clients can defer their purchase of new storage. When new storage must eventually be acquired,
IT purchases less capacity than would have been required before compression.

Important: The SVC reserves some of its resources, such as CPU cores and RAM
memory, after you create one compressed volume or volume copy. This reserve might
affect your system performance if you do not plan for the reserve in advance.

8.4.9 Creating compressed volumes


Licensing is required to use compression on the SVC. With the SVC, RtC is licensed by
capacity, per terabyte of virtual data.

There are two ways to create a compressed volume: Basic and Advanced.

To create a compressed volume by using the Basic option, from the top bar under the Volumes
menu, choose Create Volumes and select Basic in the Quick Volume Creation section, as shown
in Figure 8-29.

Figure 8-29 Creating Basic compressed volume


To create a compressed volume by using the Advanced option, from the top bar under the
Volumes menu, choose Create Volumes and select Custom in the Advanced section, as shown
in Figure 8-30.

Figure 8-30 Creating Advanced compressed volume

In the Volume Details section, set the Capacity and the Capacity Savings type (Compression), and
give the volume a Name, as shown in Figure 8-31.

Figure 8-31 Setting up basic properties

Set the location properties in the Volume Location section, including the Pool, as shown in
Figure 8-32.


Figure 8-32 Setting up location properties

The Compressed section provides the ability to set or change the allocated (virtual) capacity, the
real capacity that data uses on this volume, autoexpand, and the warning threshold
(Figure 8-33).


Figure 8-33 Setting up capacity properties of the compressed volume

When the compressed volume is configured, you can directly map it to the host or map it later
on request.
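
A compressed volume can also be created directly from the CLI. The following is a minimal sketch; the pool, volume name, size, and returned volume ID are hypothetical, and the -rsize value is an example only:

IBM_2145:ITSO SVC DH8:superuser>mkvdisk -mdiskgrp Pool0_Site1 -iogrp 0 -name COMP_VOL01 -size 200 -unit gb -rsize 2% -autoexpand -compressed
Virtual Disk, id [11], successfully created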

8.4.10 Comprestimator
IBM Spectrum Virtualize V7.6 introduces a utility to estimate expected compression ratios on
existing volumes. V7.5 and V7.6 also include a number of reliability, availability, and serviceability
(RAS) improvements that help IBM Support troubleshoot and monitor customer environments.

The built-in Comprestimator is a command-line utility that can be used to estimate the
expected compression ratio for a given volume.

Comprestimator uses advanced mathematical and statistical algorithms to perform the
sampling and analysis process in a very short and efficient way. The utility also displays its
accuracy level by showing the maximum error range of the results achieved based on the
formulas it uses.

List of available commands:


򐂰 analyzevdisk - Provides an option to analyze a single volume.
usage: analyzevdisk <volume ID>
Example: analyzevdisk 0
The analysis can be canceled by running: analyzevdisk <volume ID> -cancel
򐂰 lsvdiskanalysis - Provides a list and the status of the volumes. Some of them might already be
analyzed and others not yet. The command can be used for all volumes
on the system, or per volume, similar to lsvdisk. See Example 8-5.


Example 8-5 shows the command run against a single volume, with ID 0.
IBM_2145:ITSO_SVC_DH8:superuser>lsvdiskanalysis 0
id 0
name SQL_Data0
state estimated
started_time 151012104343
analysis_time 151012104353
capacity 300.00GB
thin_size 290.85GB
thin_savings 9.15GB
thin_savings_ratio 3.05
compressed_size 141.58GB
compression_savings 149.26GB
compression_savings_ratio 51.32
total_savings 158.42GB
total_savings_ratio 52.80
accuracy 4.97

where state is one of the following values:
idle - Was never estimated and is not currently scheduled.
scheduled - Volume is queued for estimation and will be processed based on the lowest volume
ID first.
active - Volume is being analyzed.
canceling - An abort of an active analysis was requested but has not yet completed.
estimated - Volume was analyzed and the results show the expected savings of thin
provisioning and compression.
sparse - Volume was analyzed, but Comprestimator could not find enough non-zero
samples to establish a good estimation.
compression_savings_ratio - The estimated amount of space that can be saved on the storage
for this specific volume, expressed as a percentage.
򐂰 analyzevdiskbysystem - Provides an option to run Comprestimator on all volumes within
the system. The analysis process is nondisruptive and should not affect the system
significantly. Analysis speed can vary depending on the fullness of the volume, but should not
take more than a couple of minutes per volume.
It can be canceled by running the analyzevdiskbysystem -cancel command.
򐂰 lsvdiskanalysisprogress - Shows the progress of the Comprestimator analysis, as shown in
Example 8-6.

Example 8-6 Comprestimator progress


id vdisk_count pending_analysis estimated_completion_time
0 45 12 151012154400


Chapter 9. Advanced Copy Services


In this chapter, we describe the Advanced Copy Services functions that are enabled by IBM
Spectrum Virtualize software running inside IBM SAN Volume Controller and IBM Storwize
family products. This also includes native IP replication.

In Chapter 10, “Operations using the CLI” on page 565, we describe how to use the
command-line interface (CLI) and Advanced Copy Services.

In Chapter 11, “Operations using the GUI” on page 715, we explain how to use the GUI and
Advanced Copy Services.

This chapter includes the following topics:


򐂰 FlashCopy
򐂰 Reverse FlashCopy
򐂰 FlashCopy functional overview
򐂰 Implementing the IBM SAN Volume Controller FlashCopy
򐂰 Volume mirroring and migration options
򐂰 Native IP replication
򐂰 Remote Copy
򐂰 Remote Copy commands
򐂰 Metro Mirror process
򐂰 Metro Mirror commands
򐂰 Global Mirror process
򐂰 Global Mirror commands
򐂰 Troubleshooting remote copy


9.1 FlashCopy
By using the FlashCopy function of the IBM Spectrum Virtualize, you can perform a
point-in-time copy of one or more volumes. In this section, we describe the inner workings of
FlashCopy and provide details of its configuration and use.

You can use FlashCopy to help you solve critical and challenging business needs that require
duplication of data from your source volume. Volumes can remain online and active while you
create consistent copies of the data sets. Because the copy is performed at the block level, it
operates below the host operating system and its cache. Therefore, the copy is not apparent
to the host.

Important: Because FlashCopy operates at the block level below the host operating
system and cache, those levels do need to be flushed for consistent FlashCopies.

While the FlashCopy operation is performed, the source volume is frozen briefly to initialize
the FlashCopy bitmap and then I/O can resume. Although several FlashCopy options require
the data to be copied from the source to the target in the background, which can take time to
complete, the resulting data on the target volume is presented so that the copy appears to
complete immediately. This process is performed by using a bitmap (or bit array), which tracks
changes to the data after the FlashCopy is started, and an indirection layer, which allows data
to be read from the source volume transparently.
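
As a hedged sketch of the basic CLI flow (the volume names, mapping ID, and copy rate used here are hypothetical examples), a point-in-time copy is taken by creating a mapping, preparing it so that the cache is flushed, and then starting it:

IBM_2145:ITSO SVC DH8:superuser>mkfcmap -source VOL_PROD -target VOL_PROD_T0 -copyrate 50
FlashCopy Mapping, id [0], successfully created
IBM_2145:ITSO SVC DH8:superuser>prestartfcmap 0
IBM_2145:ITSO SVC DH8:superuser>startfcmap 0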

9.1.1 Business requirements for FlashCopy


When you are deciding whether FlashCopy addresses your needs, you must adopt a
combined business and technical view of the problems that you want to solve. First,
determine the needs from a business perspective. Then, determine whether FlashCopy can
address the technical needs of those business requirements.

The business applications for FlashCopy are wide-ranging. Common use cases for
FlashCopy include, but are not limited to, the following examples:
򐂰 Rapidly creating consistent backups of dynamically changing data
򐂰 Rapidly creating consistent copies of production data to facilitate data movement or
migration between hosts
򐂰 Rapidly creating copies of production data sets for application development and testing
򐂰 Rapidly creating copies of production data sets for auditing purposes and data mining
򐂰 Rapidly creating copies of production data sets for quality assurance

Regardless of your business needs, FlashCopy within the IBM Spectrum Virtualize is flexible
and offers a broad feature set, which makes it applicable to many scenarios.

9.1.2 Backup improvements with FlashCopy


FlashCopy does not reduce the time that it takes to perform a backup to a traditional backup
infrastructure. However, it can be used to minimize and, under certain conditions, eliminate
application downtime that is associated with performing backups. FlashCopy can also offload
the resource usage of performing intensive backups from production systems.


After the FlashCopy is performed, the resulting image of the data can be backed up to tape,
as though it were the source system. After the copy to tape is complete, the image data is
redundant and the target volumes can be discarded. For time-limited applications, such as
these examples, “no copy” or incremental FlashCopy is used most often. The use of these
methods puts less load on your infrastructure.

When FlashCopy is used for backup purposes, the target data usually is managed as
read-only at the operating system level. This approach provides extra security by ensuring
that your target data was not modified and remains true to the source.
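
As an illustration of the “no copy” and incremental options mentioned above (volume names are hypothetical), a backup mapping can be created with a background copy rate of 0, or with the -incremental flag so that restarting the mapping copies only the changed grains:

IBM_2145:ITSO SVC DH8:superuser>mkfcmap -source VOL_PROD -target VOL_BKP_NOCOPY -copyrate 0
IBM_2145:ITSO SVC DH8:superuser>mkfcmap -source VOL_PROD -target VOL_BKP_INCR -copyrate 50 -incremental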

9.1.3 Restore with FlashCopy


FlashCopy can perform a restore from any existing FlashCopy mapping. Therefore, you can
restore (or copy) from the target to the source of your regular FlashCopy relationships. (It
might be easier to think of this method as reversing the direction of the FlashCopy mappings.)
This capability has the following benefits:
򐂰 There is no need to worry about pairing mistakes; you trigger a restore.
򐂰 The process appears instantaneous.
򐂰 You can maintain a pristine image of your data while you are restoring what was the
primary data.

This approach can be used for various applications, such as recovering your production
database application after an errant batch process that caused extensive damage.

Preferred practices: Although restoring from a FlashCopy is quicker than a traditional
tape media restore, you must not use restoring from a FlashCopy as a substitute for good
archiving practices. Instead, keep one to several iterations of your FlashCopies so that you
can near-instantly recover your data from the most recent history and keep your long-term
archive as appropriate for your business.

In addition to the restore option, which copies the original blocks from the target volume to
modified blocks on the source volume, the target can be used to perform a restore of
individual files. To do that, you need to make the target available on a host. We suggest that
you do not make the target available to the source host because seeing duplicate disks
causes problems for most host operating systems. Copy the files to the source via normal
host data copy methods for your environment.

9.1.4 Moving and migrating data with FlashCopy


FlashCopy can be used to facilitate the movement or migration of data between hosts while
minimizing downtime for applications. By using FlashCopy, application data can be copied
from source volumes to new target volumes while applications remain online. After the
volumes are fully copied and synchronized, the application can be brought down and then
immediately brought back up on the new server that is accessing the new FlashCopy target
volumes.

This method differs from the other migration methods, which are described later in this
chapter. Common uses for this capability are host and back-end storage hardware refreshes.


9.1.5 Application testing with FlashCopy


It is often important to test a new version of an application or operating system that is using
actual production data. This testing ensures the highest quality possible for your environment.
FlashCopy makes this type of testing easy to accomplish without putting the production data
at risk or requiring downtime to create a consistent copy.
source and use that copy for your testing. This copy is a duplicate of your production data
down to the block level so that even physical disk identifiers are copied. Therefore, it is
impossible for your applications to tell the difference.

9.1.6 Host and application considerations to ensure FlashCopy integrity


Because FlashCopy is at the block level, it is necessary to understand the interaction
between your application and the host operating system. From a logical standpoint, it is
easiest to think of these objects as “layers” that sit on top of one another. The application is
the topmost layer, and beneath it is the operating system layer. Both of these layers have
various levels and methods of caching data to provide better speed. Because IBM SVC and
therefore, FlashCopy sit below these layers, they are unaware of the cache at the application
or operating system layers.

To ensure the integrity of the copy that is made, it is necessary to flush the host operating
system and application cache for any outstanding reads or writes before the FlashCopy
operation is performed. Failing to flush the host operating system and application cache
produces what is referred to as a crash consistent copy. The resulting copy requires the same
type of recovery procedure, such as log replay and file system checks, that is required
following a host crash. FlashCopies that are crash consistent often can be used following file
system and application recovery procedures.

Note: Although the best way to perform FlashCopy is to flush host cache first, certain
companies, such as Oracle, support using snapshots without it, as stated in Metalink note
604683.1.

Various operating systems and applications provide facilities to stop I/O operations and
ensure that all data is flushed from host cache. If these facilities are available, they can be
used to prepare for a FlashCopy operation. When this type of facility is unavailable, the host
cache must be flushed manually by quiescing the application and unmounting the file system
or drives.

Preferred practice: From a practical standpoint, when you have an application that is
backed by a database and you want to make a FlashCopy of that application’s data, it is
sufficient in most cases to use the write-suspend method that is available in most modern
databases because the database maintains strict control over I/O. This method is opposed
to flushing data from both the application and the backing database (which is the
recommended method because it is safer). However, this method can be used when
facilities do not exist or your environment includes time sensitivity.

9.1.7 FlashCopy attributes


The FlashCopy function in the IBM Spectrum Virtualize features the following attributes:
򐂰 The target is the time-zero copy of the source, which is known as the FlashCopy mapping
target.


򐂰 FlashCopy produces an exact copy of the source volume, including any metadata that was
written by the host operating system, Logical Volume Manager (LVM), and applications.
򐂰 The source volume and target volume are available (almost) immediately following the
FlashCopy operation.
򐂰 The source and target volumes must be the same “virtual” size.
򐂰 The source and target volumes must be on the same SVC clustered system.
򐂰 The source and target volumes do not need to be in the same I/O Group or storage pool.
򐂰 The storage pool extent sizes can differ between the source and target.
򐂰 The source volumes can have up to 256 target volumes (Multiple Target FlashCopy).
򐂰 The target volumes can be the source volumes for other FlashCopy relationships
(cascaded FlashCopy).
򐂰 Consistency Groups are supported to enable FlashCopy across multiple volumes at the
same time (a CLI sketch of Consistency Group usage follows this list).
򐂰 Up to 255 FlashCopy Consistency Groups are supported per system.
򐂰 Up to 512 FlashCopy mappings can be placed in one Consistency Group.
򐂰 The target volume can be updated independently of the source volume.
򐂰 Bitmaps that are governing I/O redirection (I/O indirection layer) are maintained in both
nodes of the IBM SVC I/O Group to prevent a single point of failure.
򐂰 FlashCopy mapping and Consistency Groups can be automatically withdrawn after the
completion of the background copy.
򐂰 Thin-provisioned FlashCopy (or Snapshot in the GUI) uses disk space only when updates
are made to the source or target data and not for the entire capacity of a volume copy.
򐂰 FlashCopy licensing is based on the virtual capacity of the source volumes.
򐂰 Incremental FlashCopy copies all of the data when you first start FlashCopy and then only
the changes when you stop and start FlashCopy mapping again. Incremental FlashCopy
can substantially reduce the time that is required to re-create an independent image.
򐂰 Reverse FlashCopy enables FlashCopy targets to become restore points for the source
without breaking the FlashCopy relationship and without having to wait for the original
copy operation to complete.
򐂰 The maximum number of supported FlashCopy mappings is 4096 per IBM SVC system.
򐂰 The size of the source and target volumes cannot be altered (increased or decreased)
while a FlashCopy mapping is defined.
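
As a sketch of how the Consistency Group attributes above are used from the CLI (the group and volume names are hypothetical), the mappings are created in a group and the whole group is prepared and started so that all volumes share the same point in time:

IBM_2145:ITSO SVC DH8:superuser>mkfcconsistgrp -name FC_CG_APP1
IBM_2145:ITSO SVC DH8:superuser>mkfcmap -source DB_DATA -target DB_DATA_T0 -consistgrp FC_CG_APP1
IBM_2145:ITSO SVC DH8:superuser>mkfcmap -source DB_LOGS -target DB_LOGS_T0 -consistgrp FC_CG_APP1
IBM_2145:ITSO SVC DH8:superuser>startfcconsistgrp -prep FC_CG_APP1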

9.2 Reverse FlashCopy


Reverse FlashCopy enables FlashCopy targets to become restore points for the source
without breaking the FlashCopy relationship and without having to wait for the original copy
operation to complete. It supports multiple targets (up to 256) and therefore multiple rollback
points.

A key advantage of the IBM Spectrum Virtualize Multiple Target Reverse FlashCopy function
is that the reverse FlashCopy does not destroy the original target. This characteristic allows
processes that are using the target, such as a tape backup, to continue uninterrupted.


The IBM Spectrum Virtualize also provides the ability to make an optional copy of the source
volume before the reverse copy operation starts. This ability to restore back to the original
source data can be useful for diagnostic purposes.

Complete the following steps to restore from an on-disk backup:


1. Optional: Create a target volume (volume Z) and use FlashCopy to copy the production
volume (volume X) onto the new target for later problem analysis.
2. Create a FlashCopy map with the backup to be restored (volume Y) or (volume W) as the
source volume and volume X as the target volume, if this map does not exist.
3. Start the FlashCopy map (volume Y → volume X) with the -restore option to copy the
backup data onto the production disk. If the -restore option is specified and no
FlashCopy mapping exists, the command is ignored, which preserves your data integrity.

The production disk is instantly available with the backup data. Figure 9-1 shows an example
of Reverse FlashCopy.
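As a minimal CLI sketch of these restore steps, assuming hypothetical names (volume_Y is the backup and volume_X is the production volume), the restore might be driven as follows; adapt the names and options to your environment:

mkfcmap -source volume_Y -target volume_X -name rev_map    (create the reverse mapping if it does not already exist)
prestartfcmap rev_map                                      (flush the cache and prepare the mapping)
startfcmap -restore rev_map                                (copy the backup data back onto the production volume)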

Figure 9-1 Reverse FlashCopy

Regardless of whether the initial FlashCopy map (volume X → volume Y) is incremental, the
Reverse FlashCopy operation copies the modified data only.

Consistency Groups are reversed by creating a set of new reverse FlashCopy maps and
adding them to a new reverse Consistency Group. Consistency Groups cannot contain more
than one FlashCopy map with the same target volume.

9.2.1 FlashCopy and Tivoli Storage FlashCopy Manager


The management of many large FlashCopy relationships and Consistency Groups is a
complex task without a form of automation for assistance.


IBM Tivoli Storage FlashCopy Manager provides fast application-aware backups and restores
using advanced point-in-time image technologies in the IBM Spectrum Virtualize. In addition,
it provides an optional integration with IBM Tivoli Storage Manager for the long-term storage
of snapshots. Figure 9-2 shows the integration of Tivoli Storage Manager and FlashCopy
Manager from a conceptual level.

Figure 9-2 Tivoli Storage Manager for Advanced Copy Services features

Tivoli FlashCopy Manager provides many of the features of Tivoli Storage Manager for
Advanced Copy Services without the requirement to use Tivoli Storage Manager. With Tivoli
FlashCopy Manager, you can coordinate and automate host preparation steps before you
issue FlashCopy start commands to ensure that a consistent backup of the application is
made. You can put databases into hot backup mode and flush the file system cache before
starting the FlashCopy.

FlashCopy Manager also allows for easier management of on-disk backups that use
FlashCopy, and provides a simple interface to perform the “reverse” operation.

Figure 9-3 on page 482 shows the FlashCopy Manager feature.


Figure 9-3 Tivoli Storage Manager FlashCopy Manager features

Released December 2013, IBM Tivoli FlashCopy Manager V4.1 adds support for VMware 5.5
and vSphere environments with Site Recovery Manager (SRM), with instant restore for
VMware Virtual Machine File System (VMFS) data stores. This release also integrates with
IBM Tivoli Storage Manager for Virtual Environments, and it allows backup of point-in-time
images into the Tivoli Storage Manager infrastructure for long-term storage.

In addition to VMware vSphere, FlashCopy Manager provides support and application
awareness for the following applications:
- Microsoft Exchange and Microsoft SQL Server, including SQL Server 2012 Availability Groups
- IBM DB2 and Oracle databases, for use either with or without SAP environments
- IBM General Parallel File System (GPFS) software snapshots for DB2 pureScale®
- Other applications, which are supported through script customization

For more information about IBM Tivoli FlashCopy Manager, see this website:
http://www.ibm.com/software/products/en/tivostorflasmana/

FlashCopy Manager integration with Remote Copy Services


One of the interesting features of FlashCopy Manager is the capability to create FlashCopy
backups from remote copy target volumes. Therefore, the backup does not have to be copied
from the primary site to the secondary site because it is already copied there via Metro Mirror
(MM) or Global Mirror (GM). Applications that run in the primary site can have their backups
taken in the secondary site, where the source of the backup is the remote copy target
volumes. This concept is presented in Figure 9-4 on page 483.


Figure 9-4 FlashCopy Manager integration with remote copy services

9.3 FlashCopy functional overview


FlashCopy works by defining a FlashCopy mapping that consists of one source volume with
one target volume. Multiple FlashCopy mappings (source-to-target relationships) can be
defined, and point-in-time consistency can be maintained across multiple individual mappings
by using Consistency Groups. For more information, see “Consistency Group with Multiple
Target FlashCopy” on page 487.

Before you start a FlashCopy (regardless of the type and specified options), you must issue a
prestartfcmap or prestartfcconsistgrp command, which puts the IBM SVC cache into
write-through mode and flushes the I/O that is currently bound for your volume. After FlashCopy is started,
an effective copy of a source volume to a target volume is created. The content of the source
volume is presented immediately on the target volume, and the original content of the target
volume is lost. This FlashCopy operation is also referred to as a time-zero copy (T0).

Note: Instead of using prestartfcmap or prestartfcconsistgrp, you can also use the
-prep parameter in the startfcmap or startfcconsistgrp command to prepare and start
FlashCopy in one step.
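For example, a stand-alone mapping might be prepared and started as shown in the following sketch; the mapping name fc_map_1 is an assumption:

prestartfcmap fc_map_1      (flushes the cache and places the mapping in the prepared state)
startfcmap fc_map_1         (triggers the time-zero copy)

startfcmap -prep fc_map_1   (alternative: prepare and start in a single step)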

The source and target volumes are available for use immediately after the FlashCopy
operation. The FlashCopy operation creates a bitmap that is referenced and maintained to
direct I/O requests within the source and target relationship. This bitmap is updated to reflect
the active block locations as data is copied in the background from the source to the target
and updates are made to the source.


For more information about background copy, see 9.4.5, “Grains and the FlashCopy bitmap”
on page 489. Figure 9-5 shows the redirection of the host I/O toward the source volume and
the target volume.

Figure 9-5 Redirection of host I/O

9.4 Implementing FlashCopy


In this section, we describe how FlashCopy is implemented in the IBM SAN Volume
Controller.

9.4.1 FlashCopy mappings


FlashCopy occurs between a source volume and a target volume. The source and target
volumes must be the same size. The minimum granularity that the IBM Spectrum Virtualize
supports for FlashCopy is an entire volume; it is not possible to use FlashCopy to copy only
part of a volume.

Important: As with any point-in-time copy technology, you are bound by operating system
and application requirements for interdependent data and the restriction to an entire
volume.

The source and target volumes must belong to the same IBM SVC system, but they do not
have to be in the same I/O Group or storage pool. FlashCopy associates a source volume to
a target volume through FlashCopy mapping.

To become members of a FlashCopy mapping, the source and target volumes must be the
same size. Volumes that are members of a FlashCopy mapping cannot have their size
increased or decreased while they are members of the FlashCopy mapping.

A FlashCopy mapping is the act of creating a relationship between a source volume and a
target volume. FlashCopy mappings can be stand-alone or a member of a Consistency
Group. You can perform the actions of preparing, starting, or stopping FlashCopy on either a
stand-alone mapping or a Consistency Group.

Figure 9-6 shows the concept of FlashCopy mapping.

Figure 9-6 FlashCopy mapping

9.4.2 Multiple Target FlashCopy


The IBM SVC supports up to 256 target volumes from a single source volume. Each copy is
managed by a unique mapping. Figure 9-7 shows the Multiple Target FlashCopy
implementation.

Figure 9-7 Multiple Target FlashCopy implementation

Figure 9-7 also shows four targets and mappings that are taken from a single source, with
their interdependencies. In this example, Target 1 is the oldest (as measured from the time
that it was started) through to Target 4, which is the newest. The ordering is important
because of how the data is copied when multiple target volumes are defined and because of
the dependency chain that results.

A write to the source volume does not cause its data to be copied to all of the targets. Instead,
it is copied to the newest target volume only (Target 4 in Figure 9-7). The older targets refer to
newer targets first before referring to the source.

From the point of view of an intermediate target disk (neither the oldest nor the newest), it
treats the set of newer target volumes and the true source volume as a type of composite
source.


It treats all older volumes as a kind of target (and behaves like a source to them). If the
mapping for an intermediate target volume shows 100% progress, its target volume contains
a complete set of data. In this case, mappings treat the set of newer target volumes (up to and
including the 100% progress target) as a form of composite source. A dependency
relationship exists between a particular target and all newer targets (up to and including a
target that shows 100% progress) that share the source until all data is copied to this target
and all older targets.

For more information about Multiple Target FlashCopy, see 9.4.6, “Interaction and
dependency between multiple target FlashCopy mappings” on page 490.

9.4.3 Consistency Groups


Consistency Groups address the requirement to preserve point-in-time data consistency
across multiple volumes for applications that include related data that spans multiple
volumes. For these volumes, Consistency Groups maintain the integrity of the FlashCopy by
ensuring that “dependent writes” are run in the application’s intended sequence.

When Consistency Groups are used, the FlashCopy commands are issued to the FlashCopy
Consistency Group, which performs the operation on all FlashCopy mappings that are
contained within the Consistency Group at the same time.

Figure 9-8 shows a Consistency Group that includes two FlashCopy mappings.

Figure 9-8 FlashCopy Consistency Group

Important: After an individual FlashCopy mapping is added to a Consistency Group, it can
be managed as part of the group only. Operations, such as prepare, start, and stop, are no
longer allowed on the individual mapping.


Dependent writes
To show why it is crucial to use Consistency Groups when a data set spans multiple volumes,
consider the following typical sequence of writes for a database update transaction:
1. A write is run to update the database log, which indicates that a database update is about
to be performed.
2. A second write is run to perform the actual update to the database.
3. A third write is run to update the database log, which indicates that the database update
completed successfully.

The database ensures the correct ordering of these writes by waiting for each step to
complete before the next step is started. However, if the database log (updates 1 and 3) and
the database (update 2) are on separate volumes, it is possible for the FlashCopy of the
database volume to occur before the FlashCopy of the database log. This sequence can
result in the target volumes seeing writes 1 and 3 but not 2 because the FlashCopy of the
database volume occurred before the write was completed.

In this case, if the database was restarted by using the backup that was made from the
FlashCopy target volumes, the database log indicates that the transaction completed
successfully. In fact, it did not complete successfully because the FlashCopy of the volume
with the database file was started (the bitmap was created) before the write completed to the
volume. Therefore, the transaction is lost and the integrity of the database is in question.

To overcome the issue of dependent writes across volumes and to create a consistent image
of the client data, a FlashCopy operation must be performed on multiple volumes as an
atomic operation. To accomplish this method, the IBM Spectrum Virtualize supports the
concept of Consistency Groups.

A FlashCopy Consistency Group can contain up to 512 FlashCopy mappings. The maximum
number of FlashCopy mappings that is supported per SVC system is 4,096.
FlashCopy commands can then be issued to the FlashCopy Consistency Group and,
therefore, simultaneously for all of the FlashCopy mappings that are defined in the
Consistency Group.

For example, when a FlashCopy start command is issued to the Consistency Group, all of
the FlashCopy mappings in the Consistency Group are started at the same time. This
simultaneous start results in a point-in-time copy that is consistent across all of the FlashCopy
mappings that are contained in the Consistency Group.
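The following CLI sketch shows how a Consistency Group might be created, populated, and started; the group, volume, and mapping names are assumptions:

mkfcconsistgrp -name db_cg                                      (create the Consistency Group)
mkfcmap -source db_data -target db_data_t0 -consistgrp db_cg   (add a mapping for the database volume)
mkfcmap -source db_log -target db_log_t0 -consistgrp db_cg     (add a mapping for the log volume)
startfcconsistgrp -prep db_cg                                   (prepare and start all mappings at the same point in time)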

Consistency Group with Multiple Target FlashCopy


A Consistency Group aggregates FlashCopy mappings, not volumes. Therefore, where a
source volume has multiple FlashCopy mappings, they can be in the same or separate
Consistency Groups.

If a particular volume is the source volume for multiple FlashCopy mappings, you might want
to create separate Consistency Groups to separate each mapping of the same source
volume. Regardless of whether the source volume with multiple target volumes is in the same
Consistency Group or in separate Consistency Groups, the resulting FlashCopy produces
multiple identical copies of the source data.

Maximum configurations
Table 9-1 on page 488 lists the FlashCopy properties and maximum configurations.


Table 9-1 FlashCopy properties and maximum configurations


- FlashCopy targets per source: 256. This maximum is the number of FlashCopy mappings that can exist with the same source volume.
- FlashCopy mappings per system: 4,096. The number of mappings is no longer limited by the number of volumes in the system, so the FlashCopy component limit applies.
- FlashCopy Consistency Groups per system: 255. This maximum is an arbitrary limit that is policed by the software.
- FlashCopy volume capacity per I/O Group: 4 PiB. This maximum is a limit on the quantity of FlashCopy mappings that can use bitmap space from this I/O Group. This maximum configuration uses all 4 GiB of bitmap space for the I/O Group and allows no Metro Mirror or Global Mirror bitmap space. The default is 40 TiB.
- FlashCopy mappings per Consistency Group: 512. This limit exists because of the time that is taken to prepare a Consistency Group with many mappings.
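If the default allocation is not sufficient, the FlashCopy bitmap memory of an I/O Group can be changed with the chiogrp command. The following sketch assumes io_grp0 and an example value of 64 MiB; verify the supported values for your code level before changing this setting:

lsiogrp io_grp0                           (displays the current FlashCopy bitmap memory allocation)
chiogrp -feature flash -size 64 io_grp0   (increases the FlashCopy bitmap memory to 64 MiB)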

9.4.4 FlashCopy indirection layer


The FlashCopy indirection layer governs the I/O to the source and target volumes when a
FlashCopy mapping is started, which is done by using a FlashCopy bitmap. The purpose of
the FlashCopy indirection layer is to enable the source and target volumes for read and write
I/O immediately after the FlashCopy is started.

To show how the FlashCopy indirection layer works, we examine what happens when a
FlashCopy mapping is prepared and then started.

When a FlashCopy mapping is prepared and started, the following sequence is applied:
1. Flush the write cache to the source volume or volumes that are part of a Consistency
Group.
2. Put cache into write-through mode on the source volumes.
3. Discard cache for the target volumes.
4. Establish a sync point on all of the source volumes in the Consistency Group (which
creates the FlashCopy bitmap).
5. Ensure that the indirection layer governs all of the I/O to the source volumes and target
volumes.
6. Enable cache on the source volumes and target volumes.

FlashCopy provides the semantics of a point-in-time copy by using the indirection layer, which
intercepts I/O that is directed at the source or target volumes. The act of starting a FlashCopy
mapping causes this indirection layer to become active in the I/O path, which occurs
automatically across all FlashCopy mappings in the Consistency Group. The indirection layer
then determines how each I/O is to be routed, which is based on the following factors:
- The volume and the logical block address (LBA) to which the I/O is addressed
- Its direction (read or write)
- The state of an internal data structure, the FlashCopy bitmap


The indirection layer allows the I/O to go through to the underlying volume, redirects the I/O
from the target volume to the source volume, or queues the I/O while it arranges for data to be
copied from the source volume to the target volume. To explain in more detail which action is
applied for each I/O, we first look at the FlashCopy bitmap.

9.4.5 Grains and the FlashCopy bitmap


When data is copied between volumes, it is copied in units of address space that are known
as grains. Grains are units of data that are grouped to optimize the use of the bitmap that
tracks changes to the data between the source and target volume. You can use 64 KiB or
256 KiB grain sizes (256 KiB is the default). The FlashCopy bitmap contains 1 bit for each
grain and is used to track whether the source grain was copied to the target. The 64 KiB grain
size uses bitmap space at a rate of four times the default 256 KiB size.
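The grain size is selected when the mapping is created and cannot be changed afterward. As a sketch, a mapping with the smaller 64 KiB grain size might be created as follows; the volume names and option values are assumptions:

mkfcmap -source vol_prod -target vol_snap -grainsize 64 -copyrate 0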

The FlashCopy bitmap dictates read and write behavior for the source and target volumes.

Source reads
Reads are performed from the source volume, which is the same as for non-FlashCopy
volumes.

Source writes
Writes to the source cause the following behavior:
- If the grain was not yet copied to the target, the grain is copied to the target before the actual write is performed to the source, and the bitmap is updated to indicate that this grain was copied to the target.
- If the grain was already copied, the write is performed to the source as usual.

Target reads
Reads are performed from the target if the grain was copied. Otherwise, the read is
performed from the source and no copy is performed.

Target writes
Writes to the target cause the following behavior:
- If the grain was not yet copied from the source to the target and the write does not update the entire grain, the grain is first copied from the source to the target, the bitmap is updated to indicate that this grain was copied, and then the write is performed to the target.
- If the entire grain is being updated on the target, the target grain is marked as split with the source (if there is no I/O error during the write) and the write goes directly to the target.
- If the grain in question was already copied from the source to the target, the write goes directly to the target.

The FlashCopy indirection layer algorithm


Imagine the FlashCopy indirection layer as the I/O traffic director when a FlashCopy mapping
is active. The I/O is intercepted and handled according to whether it is directed at the source
volume or at the target volume, depending on the nature of the I/O (read or write) and the
state of the grain (whether it was copied).

Figure 9-9 on page 490 shows how the background copy runs while I/Os are handled
according to the indirection layer algorithm.


Figure 9-9 I/O processing with FlashCopy

9.4.6 Interaction and dependency between multiple target FlashCopy mappings

Figure 9-10 shows a set of four FlashCopy mappings that share a common source. The
FlashCopy mappings target volumes Target 0, Target 1, Target 2, and Target 3.

Figure 9-10 Interactions among multiple target FlashCopy mappings


Target 0 is not dependent on a source because it completed copying. Target 0 has two
dependent mappings (Target 1 and Target 2).

Target 1 depends on Target 0. It remains dependent until all of Target 1 is copied. Target 2
depends on it because Target 2 is 20% copy complete. After all of Target 1 is copied, it can
then move to the idle_copied state.

Target 2 is dependent upon Target 0 and Target 1 and remains dependent until all of Target 2
is copied. No target depends on Target 2; therefore, when all of the data is copied to Target 2,
it can move to the idle_copied state.

Target 3 completed copying, so it is not dependent on any other maps.
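The copy progress and the dependencies between mappings can be checked from the CLI, as in the following sketch; the mapping name is an assumption:

lsfcmap                                  (lists all mappings with their status and progress)
lsfcmapprogress fcmap_target1            (shows the copy progress of a single mapping)
lsfcmapdependentmaps fcmap_target1       (lists the mappings that depend on this mapping)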

Target writes with multiple target FlashCopy


A write to an intermediate or newest target volume must consider the state of the grain within
its own mapping, and the state of the grain of the next oldest mapping.

If the grain of the next oldest mapping is not yet copied, it must be copied before the write can
proceed to preserve the contents of the next oldest mapping. The data that is written to the
next oldest mapping comes from a target or source.

If the grain in the target that is being written is not yet copied, the grain is copied from the
oldest copied grain in the mappings that are newer than the target or the source if none are
copied. After this copy is done, the write can be applied to the target.

Target reads with multiple target FlashCopy


If the grain being read is copied from the source to the target, the read returns data from the
target that is being read. If the grain is not yet copied, each of the newer mappings is
examined in turn and the read is performed from the first copy that is found. If none are found,
the read is performed from the source.

Stopping the copy process


When a stop command is issued to a mapping that contains a target that has dependent
mappings, the mapping enters the stopping state and begins copying all grains that are
uniquely held on the target volume of the mapping that is being stopped to the next oldest
mapping that is in the copying state. The mapping remains in the stopping state until all grains
are copied and then enters the stopped state.

Note: The stopping copy process can be ongoing for several mappings that share the
source at the same time. At the completion of this process, the mapping automatically
makes an asynchronous state transition to the stopped state or the idle_copied state if the
mapping was in the copying state with progress = 100%.

For example, if the mapping that is associated with Target 0 was issued a stopfcmap or
stopfcconsistgrp command, Target 0 enters the stopping state while a process copies the
data of Target 0 to Target 1. After all of the data is copied, Target 0 enters the stopped state
and Target 1 is no longer dependent upon Target 0; however, Target 1 remains dependent on
Target 2.
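As a sketch, a single mapping or a whole Consistency Group is stopped with the corresponding command; the names are assumptions:

stopfcmap fcmap_target0          (stops one mapping; dependent data is first copied to the next oldest mapping)
stopfcconsistgrp db_cg           (stops all mappings in the Consistency Group)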

9.4.7 Summary of the FlashCopy indirection layer algorithm


Table 9-2 on page 492 summarizes the indirection layer algorithm.


Table 9-2 Summary table of the FlashCopy indirection layer algorithm


- Source volume, grain not yet copied:
  Read: read from the source volume.
  Write: copy the grain to the most recently started target for this source, then write to the source.
- Source volume, grain already copied:
  Read: read from the source volume.
  Write: write to the source volume.
- Target volume, grain not yet copied:
  Read: if any newer targets exist for this source in which this grain was copied, read from the oldest of these targets; otherwise, read from the source.
  Write: hold the write. Check the dependency target volumes to see whether the grain was copied. If the grain is not copied to the next oldest target for this source, copy the grain to the next oldest target. Then, write to the target.
- Target volume, grain already copied:
  Read: read from the target volume.
  Write: write to the target volume.

9.4.8 Interaction with the cache


Starting with V7.3, the entire cache subsystem was redesigned. The cache is now divided into
an upper cache and a lower cache. The upper cache serves mostly as a write cache and hides
the write latency from the hosts and applications. The lower cache is a read/write cache and
optimizes I/O to and from the disks. Figure 9-11 shows the new IBM Spectrum Virtualize cache
architecture.

Figure 9-11 New cache architecture


This copy-on-write process introduces significant latency into write operations. To isolate the
active application from this additional latency, the FlashCopy indirection layer is placed
logically between upper and lower cache. Therefore, the additional latency that is introduced
by the copy-on-write process is encountered only by the internal cache operations and not by
the application.

The logical placement of the FlashCopy indirection layer is shown in Figure 9-12.

Figure 9-12 Logical placement of the FlashCopy indirection layer

The introduction of the two-level cache provides additional performance improvements to the
FlashCopy mechanism. Because the FlashCopy layer is now above the lower cache in the
software stack, it can benefit from read prefetching and from the coalescing of writes to
back-end storage. Preparing a FlashCopy is also much faster because the upper cache write
data does not have to be destaged directly to back-end storage, only to the lower cache layer.
Additionally, in Multiple Target FlashCopy the target volumes of the same image share cache
data. This design is in contrast to previous IBM Spectrum Virtualize code versions, in which
each volume had its own copy of cached data.

9.4.9 FlashCopy and image mode volumes


FlashCopy can be used with image mode volumes. Because the source and target volumes
must be the same size, you must create a target volume with the same size as the image
mode volume when you are creating a FlashCopy mapping. To accomplish this task, use the
Storwizeinfo lsvdisk -bytes volumeName command. The size in bytes is then used to
create the volume that is used in the FlashCopy mapping. This method provides an exact
number of bytes because image mode volumes might not line up one-to-one on other
measurement unit boundaries. In Example 9-1 on page 494, we list the size of the
ds_3400_img_vol volume. The ds_3400_img_vol_copy volume is then created, which
specifies the same size.


Example 9-1 Listing the size of a volume in bytes and creating a volume of equal size
IBM_2145:ITSO SVC DH8:superuser>lsvdisk -bytes test_image_vol_1
id 12
name test_image_vol_1
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 3
mdisk_grp_name temp_migration_pool
capacity 21474836480
type image
formatted no
formatting no
mdisk_id 5
mdisk_name mdisk3
FC_id
FC_name
RC_id
RC_name
vdisk_UID 600507680283818B300000000000000E
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0
filesystem
mirror_write_priority latency
RC_change no
compressed_copy_count 0
access_IO_group_count 1
last_access_time
parent_mdisk_grp_id 3
parent_mdisk_grp_name temp_migration_pool
owner_type none
owner_id
owner_name
encrypt no
volume_id 12
volume_name test_image_vol_1
function

copy_id 0
status online
sync yes
auto_delete no
primary yes
mdisk_grp_id 3
mdisk_grp_name temp_migration_pool
type image
mdisk_id 5
mdisk_name mdisk3


fast_write_state empty
used_capacity 21474836480
real_capacity 21474836480
free_capacity 0
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status measured
tier ssd
tier_capacity 0
tier enterprise
tier_capacity 21474836480
tier nearline
tier_capacity 0
compressed_copy no
uncompressed_used_capacity 21474836480
parent_mdisk_grp_id 3
parent_mdisk_grp_name temp_migration_pool
encrypt no

IBM_2145:ITSO SVC DH8:superuser>mkvdisk -mdiskgrp test_pool_1 -iogrp 0 -size 21474836480 -unit b -name test_image_vol_copy_1
Virtual Disk, id [13], successfully created

IBM_2145:ITSO SVC DH8:superuser>lsvdisk -delim " "


12 test_image_vol_1 0 io_grp0 online 3 temp_migration_pool 20.00GB image
600507680283818B300000000000000E 0 1 empty 0 no 0 3 temp_migration_pool no no 12
test_image_vol_1
13 test_image_vol_copy_1 0 io_grp0 online 0 test_pool_1 20.00GB striped
600507680283818B300000000000000F 0 1 not_empty 0 no 0 0 test_pool_1 yes no 13
test_image_vol_copy_1

Tip: Alternatively, you can use the expandvolumesize and shrinkvolumesize volume
commands to modify the size of the volume.

These actions must be performed before a mapping is created.

You can use an image mode volume as a FlashCopy source volume or target volume.

9.4.10 FlashCopy mapping events


In this section, we describe the events that modify the states of a FlashCopy. We also
describe the mapping events that are listed in Table 9-3.


Overview of a FlashCopy sequence of events: The following tasks show the FlashCopy
sequence:
1. Associate the source data set with a target location (one or more source and target
volumes).
2. Create a FlashCopy mapping for each source volume to the corresponding target
volume. The target volume must be equal in size to the source volume.
3. Discontinue access to the target (application dependent).
4. Prepare (pre-trigger) the FlashCopy:
a. Flush the cache for the source.
b. Discard the cache for the target.
5. Start (trigger) the FlashCopy:
a. Pause I/O (briefly) on the source.
b. Resume I/O on the source.
c. Start I/O on the target.

Table 9-3 Mapping events


Create: A FlashCopy mapping is created between the specified source volume and the specified target volume. The operation fails if any one of the following conditions is true:
- The source volume is already a member of 256 FlashCopy mappings.
- The node has insufficient bitmap memory.
- The source and target volumes are different sizes.

Prepare: The prestartfcmap or prestartfcconsistgrp command is directed to a Consistency Group for FlashCopy mappings that are members of a normal Consistency Group, or to the mapping name for FlashCopy mappings that are stand-alone mappings. The prestartfcmap or prestartfcconsistgrp command places the FlashCopy mapping into the preparing state. The prestartfcmap or prestartfcconsistgrp command can corrupt any data that was on the target volume because cached writes are discarded. Even if the FlashCopy mapping is never started, the data from the target might be changed logically during the act of preparing to start the FlashCopy mapping.

Flush done: The FlashCopy mapping automatically moves from the preparing state to the prepared state after all cached data for the source is flushed and all cached data for the target is no longer valid.

Start: When all of the FlashCopy mappings in a Consistency Group are in the prepared state, the FlashCopy mappings can be started. To preserve the cross-volume Consistency Group, the start of all of the FlashCopy mappings in the Consistency Group must be synchronized correctly concerning I/Os that are directed at the volumes by using the startfcmap or startfcconsistgrp command. The following actions occur during the running of the startfcmap command or the startfcconsistgrp command:
- New reads and writes to all source volumes in the Consistency Group are paused in the cache layer until all ongoing reads and writes beneath the cache layer are completed.
- After all FlashCopy mappings in the Consistency Group are paused, the internal cluster state is set to allow FlashCopy operations.
- After the cluster state is set for all FlashCopy mappings in the Consistency Group, read and write operations continue on the source volumes.
- The target volumes are brought online.
As part of the startfcmap or startfcconsistgrp command, read and write caching is enabled for the source and target volumes.

Modify: The following FlashCopy mapping properties can be modified:
- FlashCopy mapping name
- Clean rate
- Consistency Group
- Copy rate (for background copy or stopping copy priority)
- Automatic deletion of the mapping when the background copy is complete

Stop: The following separate mechanisms can be used to stop a FlashCopy mapping:
- Issue a command.
- An I/O error occurred.

Delete: This command requests that the specified FlashCopy mapping is deleted. If the FlashCopy mapping is in the copying state, the force flag must be used.

Flush failed: If the flush of data from the cache cannot be completed, the FlashCopy mapping enters the stopped state.

Copy complete: After all of the source data is copied to the target and no dependent mappings exist, the state is set to copied. If the option to automatically delete the mapping after the background copy completes is specified, the FlashCopy mapping is deleted automatically. If this option is not specified, the FlashCopy mapping is not deleted automatically and can be reactivated by preparing and starting again.

Bitmap online/offline: The node failed.

9.4.11 FlashCopy mapping states


In this section, we describe the states of a FlashCopy mapping.

Idle_or_copied
The source and target volumes act as independent volumes even if a mapping exists between
the two. Read and write caching is enabled for the source and the target volumes.

If the mapping is incremental and the background copy is complete, the mapping records the
differences between the source and target volumes only. If the connection to both nodes in
the I/O group that the mapping is assigned to is lost, the source and target volumes are
offline.

Copying
The copy is in progress. Read and write caching is enabled on the source and the target
volumes.

Prepared
The mapping is ready to start. The target volume is online, but is not accessible. The target
volume cannot perform read or write caching. Read and write caching is failed by the Small
Computer System Interface (SCSI) front end as a hardware error. If the mapping is
incremental and a previous mapping is completed, the mapping records the differences
between the source and target volumes only. If the connection to both nodes in the I/O Group
that the mapping is assigned to is lost, the source and target volumes go offline.

Preparing
The target volume is online, but not accessible. The target volume cannot perform read or
write caching. Read and write caching is failed by the SCSI front end as a hardware error. Any
changed write data for the source volume is flushed from the cache. Any read or write data for
the target volume is discarded from the cache. If the mapping is incremental and a previous
mapping is completed, the mapping records the differences between the source and target
volumes only. If the connection to both nodes in the I/O Group that the mapping is assigned to
is lost, the source and target volumes go offline.

Performing the cache flush that is required as part of the startfcmap or startfcconsistgrp
command causes I/Os to be delayed while they are waiting for the cache flush to complete. To
overcome this problem, FlashCopy supports the prestartfcmap or prestartfcconsistgrp
command, which prepares for a FlashCopy start while still allowing I/Os to continue to the
source volume.

In the preparing state, the FlashCopy mapping is prepared by completing the following steps:
1. Flushing any modified write data that is associated with the source volume from the cache.
Read data for the source is left in the cache.
2. Placing the cache for the source volume into write-through mode so that subsequent
writes wait until data is written to disk before the write command that is received from the
host is complete.
3. Discarding any read or write data that is associated with the target volume from the cache.


Stopped
The mapping is stopped because you issued a stop command or an I/O error occurred. The
target volume is offline and its data is lost. To access the target volume, you must restart or
delete the mapping. The source volume is accessible and the read and write cache is
enabled. If the mapping is incremental, the mapping is recording write operations to the
source volume. If the connection to both nodes in the I/O Group that the mapping is assigned
to is lost, the source and target volumes go offline.

Stopping
The mapping is copying data to another mapping.

If the background copy process is complete, the target volume is online while the stopping
copy process completes.

If the background copy process is not complete, data is discarded from the target volume
cache. The target volume is offline while the stopping copy process runs.

The source volume is accessible for I/O operations.

Suspended
The mapping started, but it did not complete. Access to the metadata is lost, which causes
the source and target volume to go offline. When access to the metadata is restored, the
mapping returns to the copying or stopping state and the source and target volumes return
online. The background copy process resumes. Any data that was not flushed and was
written to the source or target volume before the suspension is in cache until the mapping
leaves the suspended state.

Summary of FlashCopy mapping states


Table 9-4 lists the various FlashCopy mapping states and the corresponding states of the
source and target volumes.

Table 9-4 FlashCopy mapping state summary


- Idling/Copied: source online, write-back cache; target online, write-back cache.
- Copying: source online, write-back cache; target online, write-back cache.
- Stopped: source online, write-back cache; target offline, cache not applicable.
- Stopping: source online, write-back cache; target online if the copy is complete, offline if the copy is incomplete, cache not applicable.
- Suspended: source offline, write-back cache; target offline, cache not applicable.
- Preparing: source online, write-through cache; target online but not accessible, cache not applicable.
- Prepared: source online, write-through cache; target online but not accessible, cache not applicable.


9.4.12 Thin provisioned FlashCopy


FlashCopy source and target volumes can be thin-provisioned.

Source or target thin-provisioned


The most common configuration is a fully allocated source and a thin-provisioned target. By
using this configuration, the target uses a smaller amount of real storage than the source.
With this configuration, use the NOCOPY (background copy rate = 0%) option only. Although
the COPY option is supported, this option creates a fully allocated target, which defeats the
purpose of thin provisioning.

Source and target thin-provisioned


When the source and target volumes are thin-provisioned, only the data that is allocated to
the source is copied to the target. In this configuration, the background copy option has no
effect.

Performance: The best performance is obtained when the grain size of the
thin-provisioned volume is the same as the grain size of the FlashCopy mapping.

Thin-provisioned incremental FlashCopy


The implementation of thin-provisioned volumes does not preclude the use of incremental
FlashCopy on the same volumes. It does not make sense to have a fully allocated source
volume and then use incremental FlashCopy (which is always a full copy the first time) to copy
this fully allocated source volume to a thin-provisioned target volume. However, this action is
not prohibited.

Consider the following optional configurations:

- A thin-provisioned source volume can be copied incrementally by using FlashCopy to a thin-provisioned target volume. Whenever the FlashCopy is performed, only data that was modified is recopied to the target. If space is allocated on the target because of I/O to the target volume, this space is not reclaimed with subsequent FlashCopy operations.
- A fully allocated source volume can be copied incrementally by using FlashCopy to another fully allocated volume at the same time while it is being copied to multiple thin-provisioned targets (taken at separate points in time). By using this combination, a single full backup can be kept for recovery purposes and the backup workload is separated from the production workload. At the same time, older thin-provisioned backups can be retained.
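As a sketch of the most common configuration (a source volume with a thin-provisioned target and no background copy), the target and its mapping might be created as follows; the pool, volume names, and sizing are assumptions:

mkvdisk -mdiskgrp pool1 -iogrp 0 -size 100 -unit gb -rsize 2% -autoexpand -name vol_prod_snap
mkfcmap -source vol_prod -target vol_prod_snap -copyrate 0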

9.4.13 Background copy


With FlashCopy background copy enabled, the source volume data is copied to the
corresponding target volume. With the FlashCopy background copy disabled, only data that
changed on the source volume is copied to the target volume.

The benefit of the use of a FlashCopy mapping with background copy enabled is that the
target volume becomes a real clone (independent from the source volume) of the FlashCopy
mapping source volume after the copy is complete. When the background copy function is not
performed, the target volume remains a valid copy of the source data only while the
FlashCopy mapping remains in place.


The background copy rate is a property of a FlashCopy mapping that is defined as a value
0 - 100. The background copy rate can be defined and changed dynamically for individual
FlashCopy mappings. A value of 0 disables the background copy.
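For example, the copy rate can be set when the mapping is created and changed later without stopping the mapping; the names and values in this sketch are assumptions:

mkfcmap -source vol_prod -target vol_clone -copyrate 50 -name clone_map
chfcmap -copyrate 80 clone_map            (speeds up the background copy of an existing mapping)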

Table 9-5 shows the relationship of the background copy rate value to the attempted number
of grains to be copied per second.

Table 9-5 Background copy rate


Value      Data copied per second   Grains per second (256 KiB grain)   Grains per second (64 KiB grain)

1 - 10     128 KiB                  0.5                                  2

11 - 20    256 KiB                  1                                    4

21 - 30    512 KiB                  2                                    8

31 - 40    1 MiB                    4                                    16

41 - 50    2 MiB                    8                                    32

51 - 60    4 MiB                    16                                   64

61 - 70    8 MiB                    32                                   128

71 - 80    16 MiB                   64                                   256

81 - 90    32 MiB                   128                                  512

91 - 100   64 MiB                   256                                  1024

The grains per second numbers represent the maximum number of grains that the SVC
copies per second, assuming that the bandwidth to the managed disks (MDisks) can
accommodate this rate.

If the SVC cannot achieve these copy rates because of insufficient bandwidth from the SVC
nodes to the MDisks, the background copy I/O contends for resources on an equal basis with
the I/O that is arriving from the hosts. Background copy I/O and I/O that is arriving from the
hosts may see an increase in latency. Background copy and foreground I/O continue to make
progress, and do not stop, hang, or cause the node to fail. The background copy is performed
by both nodes of the I/O Group in which the source volume is found.

9.4.14 Serialization of I/O by FlashCopy


In general, the FlashCopy function in the IBM Spectrum Virtualize introduces no explicit
serialization into the I/O path. Therefore, many concurrent I/Os are allowed to the source and
target volumes.

However, there is a lock for each grain. The lock can be in shared or exclusive mode. For
multiple targets, a common lock is shared among all of the mappings that are derived from a
particular source volume. The lock is used in the following modes under the following conditions:
- The lock is held in shared mode during a read from the target volume that touches a grain that was not copied from the source.
- The lock is held in exclusive mode while a grain is being copied from the source to the target.

If the lock is held in shared mode and another process wants to use the lock in shared mode,
this request is granted unless a process is already waiting to use the lock in exclusive mode.


If the lock is held in shared mode and it is requested to be exclusive, the requesting process
must wait until all holders of the shared lock free it.

Similarly, if the lock is held in exclusive mode, a process that is wanting to use the lock in
shared or exclusive mode must wait for it to be freed.

9.4.15 Event handling


When a FlashCopy mapping is not copying or stopping, the FlashCopy function does not
affect the handling or reporting of events for error conditions that are encountered in the I/O
path. Event handling and reporting are affected only by FlashCopy when a FlashCopy
mapping is copying or stopping, that is, actively moving data.

We describe these scenarios next.

Node failure
Normally, two copies of the FlashCopy bitmap are maintained. One copy of the FlashCopy
bitmap is on each of the two nodes that make up the I/O Group of the source volume. When a
node fails, one copy of the bitmap for all FlashCopy mappings whose source volume is a
member of the failing node’s I/O Group becomes inaccessible. FlashCopy continues with a
single copy of the FlashCopy bitmap that is stored as non-volatile in the remaining node in the
source I/O Group. The system metadata is updated to indicate that the missing node no
longer holds a current bitmap. When the failing node recovers or a replacement node is
added to the I/O Group, the bitmap redundancy is restored.

Path failure (path offline state)


In a fully functioning system, all of the nodes have a software representation of every volume
in the system within their application hierarchy.

Because the storage area network (SAN) that links the SVC nodes to each other and to the
MDisks is made up of many independent links, a subset of the nodes can be temporarily
isolated from several of the MDisks. When this situation happens, the managed disks are said
to be path offline on certain nodes.

Other nodes: Other nodes might see the managed disks as online because their
connection to the managed disks is still functioning.

When an MDisk enters the path offline state on an SVC node, all of the volumes that have
extents on the MDisk also become path offline. Again, this situation happens only on the
affected nodes. When a volume is path offline on a particular SVC node, the host access to
that volume through the node fails with the SCSI check condition indicating offline.

Path offline for the source volume


If a FlashCopy mapping is in the copying state and the source volume goes path offline, this
path offline state is propagated to all target volumes up to, but not including, the target volume
for the newest mapping that is 100% copied but remains in the copying state. If no mappings
are 100% copied, all of the target volumes are taken offline. Path offline is a state that exists
on a per-node basis. Other nodes might not be affected. If the source volume comes online,
the target and source volumes are brought back online.

Path offline for the target volume


If a target volume goes path offline but the source volume is still online and if any dependent
mappings exist, those target volumes also go path offline. The source volume remains online.


9.4.16 Asynchronous notifications


FlashCopy raises informational event log entries for certain mapping and Consistency Group
state transitions.

These state transitions occur as a result of configuration events that complete asynchronously.
The informational events can be used to generate Simple Network Management Protocol
(SNMP) traps to notify the user. Other configuration events complete synchronously, and no
informational events are logged for them. The following state transitions are logged as
informational events:
- PREPARE_COMPLETED: This state transition is logged when the FlashCopy mapping or Consistency Group enters the prepared state as a result of a user request to prepare. The user can now start (or stop) the mapping or Consistency Group.
- COPY_COMPLETED: This state transition is logged when the FlashCopy mapping or Consistency Group enters the idle_or_copied state when it was in the copying or stopping state. This state transition indicates that the target disk now contains a complete copy and no longer depends on the source.
- STOP_COMPLETED: This state transition is logged when the FlashCopy mapping or Consistency Group enters the stopped state as a result of a user request to stop. It is logged after the automatic copy process completes. This state transition includes mappings where no copying needed to be performed. This state transition differs from the event that is logged when a mapping or group enters the stopped state as a result of an I/O error.

9.4.17 Interoperation with Metro Mirror and Global Mirror


FlashCopy can work with Metro Mirror and Global Mirror to provide better protection of the
data. For example, we can perform a Metro Mirror copy to duplicate data from Site_A to
Site_B and then perform a daily FlashCopy to back up the data to another location.

Table 9-6 on page 503 lists the supported combinations of FlashCopy and remote copy. In the
table, remote copy refers to Metro Mirror and Global Mirror.

Table 9-6 FlashCopy and remote copy interaction


FlashCopy source:
- At the remote copy primary site: supported.
- At the remote copy secondary site: supported. When the FlashCopy relationship is in the preparing and prepared states, the cache at the remote copy secondary site operates in write-through mode, which adds latency to the remote copy relationship.

FlashCopy target:
- At the remote copy primary site: this combination is supported with the following restrictions:
  - Issuing a stop -force might cause the remote copy relationship to be fully resynchronized.
  - The code level must be 6.2.x or higher.
  - The I/O Group must be the same.
- At the remote copy secondary site: this combination is supported with the major restriction that the FlashCopy mapping cannot be copying, stopping, or suspended. Otherwise, the restrictions are the same as at the remote copy primary site.


9.4.18 FlashCopy presets


The SVC GUI interface provides three FlashCopy presets (snapshot, clone, and backup) to
simplify the more common FlashCopy operations.

Although these presets meet most FlashCopy requirements, they do not provide support for
all possible FlashCopy options. If more specialized options are required that are not
supported by the presets, the options must be performed by using CLI commands.

In this section, we describe the three preset options and their use cases.

Snapshot
This preset creates a copy-on-write point-in-time copy. The snapshot is not intended to be an
independent copy. Instead, the copy is used to maintain a view of the production data at the
time that the snapshot is created. Therefore, the snapshot holds only the data from regions of
the production volume that changed since the snapshot was created. Because the snapshot
preset uses thin provisioning, only the capacity that is required for the changes is used.

Snapshot uses the following preset parameters:


- Background copy: None
- Incremental: No
- Delete after completion: No
- Cleaning rate: No
- Primary copy source pool: Target pool

Use case
The user wants to produce a copy of a volume without affecting the availability of the volume.
The user does not anticipate many changes to be made to the source or target volume; a
significant proportion of the volumes remains unchanged.

By ensuring that only changes require a copy of data to be made, the total amount of disk
space that is required for the copy is reduced; therefore, many snapshot copies can be used
in the environment.

Snapshots are useful for providing protection against corruption or similar issues with the
validity of the data, but they do not provide protection from physical controller failures.
Snapshots can also provide a vehicle for performing repeatable testing (including “what-if”
modeling that is based on production data) without requiring a full copy of the data to be
provisioned.

Clone
The clone preset creates a replica of the volume, which can be changed without affecting the
original volume. After the copy completes, the mapping that was created by the preset is
automatically deleted.

Clone uses the following preset parameters:


- Background copy rate: 50
- Incremental: No
- Delete after completion: Yes
- Cleaning rate: 50
- Primary copy source pool: Target pool

Use case
Users want a copy of the volume that they can modify without affecting the original volume.
After the clone is established, there is no expectation that it is refreshed or that there is any
further need to reference the original production data again. If the source is thin-provisioned,
the target is thin-provisioned for the auto-create target.

Backup
The backup preset creates a point-in-time replica of the production data. After the copy
completes, the backup view can be refreshed from the production data, with minimal copying
of data from the production volume to the backup volume.

Backup uses the following preset parameters:


- Background copy rate: 50
- Incremental: Yes
- Delete after completion: No
- Cleaning rate: 50
- Primary copy source pool: Target pool

Use case
The user wants to create a copy of the volume that can be used as a backup if the source
becomes unavailable, for example, after a loss of the underlying physical controller. The user
plans to update the secondary copy periodically and does not want to suffer the overhead of
creating a new copy each time. (Incremental FlashCopy times are faster than a full copy, which
helps to reduce the window during which the new backup is not yet fully effective.) If the
source is thin-provisioned, the target is also thin-provisioned in this option for the auto-create
target.

Another use case, which is not supported by the name, is to create and maintain (periodically
refresh) an independent image that can be subjected to intensive I/O (for example, data
mining) without affecting the source volume’s performance.
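When the CLI is used instead of the GUI presets, mappings with roughly equivalent behavior can be created with the mkfcmap options shown in the following sketch; the volume names are assumptions and the values mirror the preset parameters that are listed above:

mkfcmap -source vol1 -target vol1_snap -copyrate 0                              (snapshot)
mkfcmap -source vol1 -target vol1_clone -copyrate 50 -cleanrate 50 -autodelete  (clone)
mkfcmap -source vol1 -target vol1_bkp -copyrate 50 -cleanrate 50 -incremental   (backup)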

9.5 Volume mirroring and migration options


Volume mirroring is a simple RAID 1-type function that allows a volume to remain online even
when the storage pool that backs it becomes inaccessible. Volume mirroring is designed to
protect the volume from storage infrastructure failures by seamless mirroring between
storage pools.

Volume mirroring is provided by a specific volume mirroring function in the I/O stack, and it
cannot be manipulated like a FlashCopy or other types of copy volumes. However, this feature
provides migration functionality, which can be obtained by splitting the mirrored copy from the
source or by using the “migrate to” function. Volume mirroring cannot control backend storage
mirroring or replication.

With volume mirroring, host I/O completes when both copies are written. Before V6.3, this
feature took a copy offline when it had an I/O timeout, and then resynchronized with the online
copy after it recovered. Starting with V6.3, this feature is enhanced with a tunable latency
tolerance. This tolerance provides an option to give preference to host I/O latency at the
expense of temporarily losing the redundancy between the two copies. The tunable setting is
either latency or redundancy.

The latency tuning option, which is set with chvdisk -mirrorwritepriority latency, is the
default. It prioritizes host I/O latency, which yields a preference to host I/O over availability.

However, you might need to give preference to redundancy in your environment when
availability is more important than I/O response time. Use the chvdisk
-mirrorwritepriority redundancy command to set the redundancy option.
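
As a minimal sketch (the volume name is hypothetical), the following commands switch an existing mirrored volume to the redundancy setting, revert it to the default latency setting, and display the detailed volume view where the current value can be checked:
   chvdisk -mirrorwritepriority redundancy VOL_DB01
   chvdisk -mirrorwritepriority latency VOL_DB01
   lsvdisk VOL_DB01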


Regardless of which option you choose, volume mirroring can provide extra protection for
your environment.

Migration offers the following options:


򐂰 Export to image mode: By using this option, you can move storage from managed mode to
image mode, which is useful if you are using the SVC as a migration device. For example,
vendor A’s product cannot communicate with vendor B’s product, but you must migrate
existing data from vendor A to vendor B. By using the Export to Image mode option, you
can migrate data by using SVC copy services functions and then return control to the
native array while maintaining access to the hosts.
򐂰 Import to image mode: By using this option, you can import an existing storage MDisk or
logical unit number (LUN) with its existing data from an external storage system without
putting metadata on it so that the existing data remains intact. After you import it, all copy
services functions can be used to migrate the storage to other locations while the data
remains accessible to your hosts.
򐂰 Volume migration by using volume mirroring and then by using split into new volume: By
using this option, you can use the available RAID 1 functionality. You create two copies of
data that initially has a set relationship (one volume with two copies - one primary and one
secondary) but then break the relationship (two volumes, both primary and no relationship
between them) to make them independent copies of data. You can use this option to
migrate data between storage pools and devices. You might use this option if you want to
move volumes to multiple storage pools. A volume can have two copies at a time, which
means that you can add only one copy to the original volume and then you have to split
those copies to create another copy of the volume.
򐂰 Volume migration by using move to another pool: By using this option, you can move any
volume between storage pools without any interruption to the host access. This option is a
quicker version of the “volume mirroring and split into new volume” option. You might use
this option if you want to move volumes in a single step or you do not have a volume mirror
copy already.

Migration: Although these migration methods do not disrupt access, you must take a brief
outage to install the host drivers for your SVC if they are not installed. For more
information, see the IBM SVC Host Attachment User’s Guide, SC26-7905. Ensure that you
consult the revision of the document that applies to your SVC.

With volume mirroring, you can move data to different MDisks within the same storage pool or
move data between different storage pools. Using volume mirroring over volume migration is
beneficial because with volume mirroring storage pools do not need to have the same extent
size as is the case with volume migration.
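
The following minimal CLI sketch illustrates a migration by using volume mirroring; the volume and pool names are hypothetical, and the copy IDs (0 and 1) are assumed, so check lsvdiskcopy for the actual IDs:
1. Add a second copy of the volume in the target storage pool:
   addvdiskcopy -mdiskgrp TargetPool VOL_APP01
2. Wait for the new copy to synchronize:
   lsvdisksyncprogress VOL_APP01
3. Either split the synchronized copy into a new, independent volume:
   splitvdiskcopy -copy 1 -name VOL_APP01_NEW VOL_APP01
   or remove the original copy to complete the migration to the target pool:
   rmvdiskcopy -copy 0 VOL_APP01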

Note: Volume mirroring does not create a second volume before you split copies. Volume
mirroring adds a second copy of the data under the same volume so the result is one
volume presented to the host with two copies of data connected to this volume. Only
splitting copies creates another volume and then both volumes have only one copy of the
data.

Starting with V7.3 and the introduction of the new cache architecture, mirrored volume
performance has been significantly improved. Now, lower cache is beneath the volume
mirroring layer, which means that both copies have their own cache. This approach helps in
cases of copies of different types, for example, generic and compressed, because each copy
uses its independent cache and performs its own read prefetch. Destaging of the cache can
now be independent for each copy, so one copy does not affect the performance of a second
copy.

Also, because the Storwize destage algorithm is MDisk aware, it can tune or adapt the
destaging process, depending on the MDisk type and utilization, for each copy independently.

9.6 Native IP replication


Before we describe the Remote Copy features that benefit from the use of multiple SVC
systems, it is important to describe the partnership option that was introduced with V7.2:
native IP replication.

9.6.1 Native IP replication technology


Remote Mirroring over IP communication is supported on the IBM SVC and Storwize Family
systems by using Ethernet communication links. The IBM Spectrum Virtualize Software IP
replication uses innovative Bridgeworks SANSlide technology to optimize network bandwidth
and utilization. This new function enables the use of a lower-speed and lower-cost networking
infrastructure for data replication. Bridgeworks’ SANSlide technology, which is integrated into
the IBM Spectrum Virtualize Software, uses artificial intelligence to help optimize network
bandwidth utilization and adapt to changing workload and network conditions. This
technology can improve remote mirroring network bandwidth utilization up to three times,
which can enable clients to deploy a less costly network infrastructure or speed up remote
replication cycles to enhance disaster recovery effectiveness.

With an Ethernet network data flow, the data transfer can slow down over time. This condition
occurs because of the latency that is caused by waiting for the acknowledgment of each set of
packets that are sent. The next packet set cannot be sent until the previous packet set is
acknowledged, as shown in Figure 9-13.

Figure 9-13 Typical Ethernet network data flow

However, by using the embedded IP replication this behavior can be eliminated with the
enhanced parallelism of the data flow by using multiple virtual connections (VC) that share IP
links and addresses. The artificial intelligence engine can dynamically adjust the number of
VCs, receive window size, and packet size as appropriate to maintain optimum performance.
While the engine is waiting for one VC’s ACK, it sends more packets across other VCs. If
packets are lost from any VC, data is automatically retransmitted, as shown in Figure 9-14.


Figure 9-14 Optimized network data flow by using Bridgeworks SANSlide technology

For more information about this technology, see IBM Storwize V7000 and SANSlide
Implementation, REDP-5023.

With native IP partnership, the following Copy Services features are supported:
򐂰 Metro Mirror (MM)
Referred to as synchronous replication, MM provides a consistent copy of a source virtual
disk on a target virtual disk. Data is written to the target virtual disk synchronously after it
is written to the source virtual disk so that the copy is continuously updated.
򐂰 Global Mirror (GM) and GM with Change Volumes
Referred to as asynchronous replication, GM provides a consistent copy of a source
virtual disk on a target virtual disk. Data is written to the target virtual disk asynchronously
so that the copy is continuously updated. However, the copy might not contain the last few
updates if a disaster recovery operation is performed. An added extension to GM is GM
with Change Volumes. GM with Change Volumes is the preferred method for use with
native IP replication.

9.6.2 IBM SVC and Storwize System Layers


An IBM Storwize family system can be in one of two layers: the replication layer or the
storage layer. The system layer affects how the system interacts with other SVC systems and
external Storwize family systems.

In the storage layer, an SVC or Storwize family system has the following characteristics and
requirements:
򐂰 The system can perform MM and GM replication with other storage-layer systems.
򐂰 The system can provide external storage for replication-layer systems or SVC.
򐂰 The system cannot use a storage-layer system as external storage.

In the replication layer, an SVC or Storwize family system has the following characteristics and
requirements:
򐂰 The system can perform MM and GM replication with other replication-layer systems or
SVC.
򐂰 The system cannot provide external storage for a replication-layer system or SVC.
򐂰 The system can use a storage-layer system as external storage.


A Storwize family system is in the storage layer by default, but the layer can be changed. For
example, you might want to change a Storwize V7000 to the replication layer if you want it to
virtualize Storwize V3700 systems.

Note: Before you change the system layer, the following conditions must be met:
򐂰 No host object can be configured with worldwide port names (WWPNs) from a Storwize
family system.
򐂰 No system partnerships can be defined.
򐂰 No Storwize family system can be visible on the SAN fabric.

The layer can be changed during normal host I/O.

Use the lssystem command to check the current system layer, as shown in Example 9-2.

Example 9-2 Output from lssystem command showing the system layer
IBM_2145:ITSO SVC DH8:superuser>lssystem
id 000002007FE02102
name ITSO SVC DH8
...
lines omitted for brevity
...
easy_tier_acceleration off
has_nas_key no
layer replication
...
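
On a Storwize family system, the layer can be changed with the chsystem command after the conditions in the previous note are met; a minimal sketch follows (verify the exact parameter for your code level):
   chsystem -layer replication
   lssystem
The lssystem output can then be checked to confirm the new value of the layer field.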

Note: Consider the following rules for creating remote partnerships between the SVC and
Storwize Family systems:
򐂰 An SVC is always in the replication layer.
򐂰 By default, a Storwize family system is in the storage layer but can be changed to the
replication layer.
򐂰 A system can form partnerships only with systems in the same layer.
򐂰 An SVC can virtualize a Storwize family system only if that system is in the storage layer.

Starting in V6.4, a Storwize family system in the replication layer can virtualize another
Storwize family system in the storage layer.

9.6.3 IP partnership limitations


The following prerequisites and assumptions must be considered before an IP partnership
between two SVC systems can be established:
򐂰 The SVC systems are successfully installed with V7.2 or later code levels.
򐂰 The SVC systems have the necessary licenses that allow remote copy partnerships to be
configured between two systems. No separate license is required to enable IP
partnership.
򐂰 The storage SANs are configured correctly and the correct infrastructure to support the
SVC systems in remote copy partnerships over IP links is in place.
򐂰 The two systems must be able to ping each other and perform the discovery.
򐂰 The maximum number of partnerships between the local and remote systems, including
both IP and FC partnerships, is limited to the current maximum that is supported, which is
three partnerships (four systems total).

򐂰 Only a single partnership over IP is supported.


򐂰 A system can have simultaneous partnerships over FC and IP but with separate systems.
The FC zones between two systems must be removed before an IP partnership is
configured.
򐂰 IP partnerships are supported on both 10 Gbps links and 1 Gbps links. However, the
intermix of both on a single link is not supported.
򐂰 The maximum supported round-trip time is 80 milliseconds (ms) for 1 Gbps links.
򐂰 The maximum supported round-trip time is 10 ms for 10 Gbps links.
򐂰 The minimum supported link bandwidth is 10 Mbps.
򐂰 The inter-cluster heartbeat traffic uses 1 Mbps per link.
򐂰 Only nodes from two I/O Groups can have ports that are configured for an IP partnership.
򐂰 Migrations of remote copy relationships directly from FC-based partnerships to IP
partnerships are not supported.
򐂰 IP partnerships between the two systems can be over IPv4 or IPv6 only but not both.
򐂰 Virtual LAN (VLAN) tagging of the IP addresses that are configured for remote copy is
supported starting with IBM Spectrum Virtualize Software version 7.4.
򐂰 Management IP and iSCSI IP on the same port can be in a different network starting with
V7.4.
򐂰 An added layer of security is provided by using Challenge Handshake Authentication
Protocol (CHAP) authentication.
򐂰 TCP ports 3260 and 3265 are used for IP partnership communications; therefore, these
ports must be open in firewalls between the systems.
򐂰 Only a single Remote Copy data session per physical link can be established. It is
intended that only one connection (for sending/receiving Remote Copy data) is made for
each independent physical link between the systems.

Note: A physical link is the physical IP link between the two sites A (local) and B
(remote). Multiple IP addresses on local SVC cluster A can be connected (through
Ethernet switches) to this physical link. Similarly, multiple IP addresses on remote SVC
cluster B can be connected (through Ethernet switches) to the same physical link. At any
point in time, only a single IP address on cluster A can form an RC data session with an
IP address on cluster B.

򐂰 The following maximum throughput is restricted based on the use of 1 Gbps or 10 Gbps
ports:
– One 1 Gbps port might transfer up to 110 MBps
– Two 1 Gbps ports might transfer up to 220 MBps
– One 10 Gbps port might transfer up to 190 MBps
– Two 10 Gbps ports might transfer up to 280 MBps


Note: The definition of the Bandwidth setting that is used when IP partnerships are
created has changed. Previously, the bandwidth setting defaulted to 50 MBps and was
the maximum transfer rate from the primary site to the secondary site for initial
sync/resyncs of volumes.

The Link Bandwidth setting is now configured by using megabits per second (Mbps), not
MBps. You set the Link Bandwidth setting to a value that the communication link can
sustain or to what is allocated for replication. The Background Copy Rate setting is now a
percentage of the Link Bandwidth. The Background Copy Rate setting determines the
available bandwidth for the initial sync and resyncs or for GM with Change Volumes.

9.6.4 VLAN support


Starting with V7.4, VLAN tagging is supported for both iSCSI host attachment and IP
replication. Hosts and remote-copy operations can connect to the system through Ethernet
ports. Each traffic type has different bandwidth requirements, which can interfere with each
other if they share the same IP connections. VLAN tagging creates two separate connections
on the same IP network for different types of traffic. The system supports VLAN configuration
on both IPv4 and IPv6 connections.

When the VLAN ID is configured for the IP addresses that are used for either iSCSI host
attach or IP replication, the appropriate VLAN settings on the Ethernet network and servers
must be configured correctly in order not to experience connectivity issues. After the VLANs
are configured, changes to the VLAN settings will disrupt iSCSI and IP replication traffic to
and from the partnerships.

During the VLAN configuration for each IP address, the VLAN settings for the local and
failover ports on two nodes of an I/O Group can differ. To avoid any service disruption,
switches must be configured so the failover VLANs are configured on the local switch ports
and the failover of IP addresses from a failing node to a surviving node succeeds. If failover
VLANs are not configured on the local switch ports, there will be no paths to SVC during a
node failure and the replication will fail.

Consider the following requirements and procedures when implementing VLAN tagging:
򐂰 VLAN tagging is supported for IP partnership traffic between two systems.
򐂰 VLAN provides network traffic separation at the layer 2 level for Ethernet transport.
򐂰 VLAN tagging by default is disabled for any IP address of a node port. You can use the
command-line interface (CLI) to optionally set the VLAN ID for port IPs on both systems in
the IP partnership.
򐂰 When a VLAN ID is configured for the port IP addresses that are used in remote copy port
groups, appropriate VLAN settings on the Ethernet network must also be properly
configured to prevent connectivity issues.
򐂰 Setting VLAN tags for a port is disruptive. Therefore, VLAN tagging requires that you stop
the partnership before you configure VLAN tags. Then, restart the partnership when the
configuration is complete, as shown in the sketch that follows this list.
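
The following minimal CLI sketch illustrates this sequence for one port IP; the partnership name, node, addresses, VLAN ID, and port number are hypothetical, and the -vlan and -remotecopy parameters of cfgportip are assumed here, so verify them for your code level:
1. Stop the IP partnership:
   chpartnership -stop REMOTE_SYS
2. Set the VLAN ID for the replication IP address on the port:
   cfgportip -node node1 -ip 192.168.10.11 -mask 255.255.255.0 -gw 192.168.10.1 -vlan 100 -remotecopy 1 3
3. Restart the partnership after both systems are reconfigured:
   chpartnership -start REMOTE_SYS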

9.6.5 IP partnership and SVC terminology


The IP partnership terminology and abbreviations are listed in Table 9-7.


Table 9-7 Terminology for IP partnership

򐂰 Remote copy group or remote copy port group: The following numbers group a set of IP
addresses that are connected to the same physical link. Therefore, only IP addresses that
are part of the same remote copy group can form remote copy connections with the
partner system:
– 0: Ports that are not configured for remote copy
– 1: Ports that belong to remote copy port group 1
– 2: Ports that belong to remote copy port group 2
Each IP address can be shared for iSCSI host-attach and remote copy functionality.
Therefore, the correct settings must be applied to each IP address.
򐂰 IP partnership: Two SVC systems that are partnered to perform remote copy over native
IP links.
򐂰 FC partnership: Two SVC systems that are partnered to perform remote copy over native
FC links.
򐂰 Failover: Failure of a node within an I/O Group causes all virtual disks that are owned by
this node to fail over to the surviving node. When the configuration node of the system
fails, management IPs also fail over to an alternative node.
򐂰 Failback: When the failed node rejoins the system, all IP addresses that failed over are
failed back from the surviving node to the rejoined node, and virtual disk access is
restored through this node.
򐂰 linkbandwidthmbits: The aggregate bandwidth of all physical links between two sites in
Mbps.
򐂰 IP partnership or partnership over native IP links: These terms are used to describe the IP
partnership feature.
򐂰 Discovery: The process by which two SVC clusters exchange information about their IP
address configuration. For IP-based partnerships, only IP addresses that are configured
for Remote Copy are discovered. For example, the first discovery takes place when the
user runs the mkippartnership CLI command. Subsequent discoveries may take place as
a result of user activities (configuration changes) or as a result of hardware failures (for
example, node failure or port failure).

9.6.6 States of IP partnership


The different partnership states in the IP partnership are listed in Table 9-8.

Table 9-8 IP partnership states

򐂰 Partially_Configured_Local (systems connected: No; active remote copy I/O: No)
This state indicates that the initial discovery is complete.
򐂰 Fully_Configured (systems connected: Yes; active remote copy I/O: Yes)
Discovery successfully completed between two systems, and the two systems can
establish remote copy relationships.
򐂰 Fully_Configured_Stopped (systems connected: Yes; active remote copy I/O: Yes)
The partnership is stopped on the system.
򐂰 Fully_Configured_Remote_Stopped (systems connected: Yes; active remote copy I/O: No)
The partnership is stopped on the remote system.
򐂰 Not_Present (systems connected: Yes; active remote copy I/O: No)
The two systems cannot communicate with each other. This state is also seen when data
paths between the two systems are not established.
򐂰 Fully_Configured_Exceeded (systems connected: Yes; active remote copy I/O: No)
There are too many systems in the network, and the partnership from the local system to
the remote system is disabled.
򐂰 Fully_Configured_Excluded (systems connected: No; active remote copy I/O: No)
The connection is excluded because of too many problems, or either system cannot
support the I/O workload for the MM and GM relationships.

The following steps must be completed to establish two systems in the IP partnerships:
1. The administrator configures the CHAP secret on both the systems. This step is not
mandatory and users can choose to not configure the CHAP secret.
2. The administrator configures the system IP addresses on both local and remote systems
so that they can discover each other over the network.
3. If you want to use VLANs, configure your LAN switches and Ethernet ports to use VLAN
tagging (for more information on VLAN tagging, refer to 9.6.4, “VLAN support” on
page 511).
4. The administrator configures the SVC ports on each node in both of the systems by using
the GUI or cfgportip command and completes the following steps:
a. Configure the IP addresses for remote copy data.
b. Add the IP addresses in the respective remote copy port group.
c. Define whether the host access on these ports over iSCSI is allowed.
5. The administrator establishes the partnership with the remote system from the local
system where the partnership state then transitions to the Partially_Configured_Local
state.
6. The administrator establishes the partnership from the remote system with the local
system, and if successful, the partnership state then transitions to the Fully_Configured
state, which implies that the partnerships over the IP network were successfully
established. The partnership state momentarily remains in the not_present state before
transitioning to the fully_configured state.
7. The administrator creates MM, GM, and GM with Change Volume relationships.

Partnership consideration: When the partnership is created, no master or auxiliary
status is defined or implied. The partnership is equal. The concepts of master or auxiliary
and primary or secondary apply to volume relationships only, not to system partnerships.
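
A minimal CLI sketch of steps 5 and 6 follows; the remote cluster IP addresses and bandwidth values are hypothetical:
On the local system:
   mkippartnership -type ipv4 -clusterip 10.20.30.40 -linkbandwidthmbits 100 -backgroundcopyrate 50
   lspartnership
(The new partnership is expected to report the partially_configured_local state.)
On the remote system:
   mkippartnership -type ipv4 -clusterip 10.20.30.10 -linkbandwidthmbits 100 -backgroundcopyrate 50
   lspartnership
(After discovery completes, both systems are expected to report the fully_configured state.)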


9.6.7 Remote copy groups


This section describes remote copy groups (or remote copy port groups) and different ways to
configure the links between the two remote systems. The two SVC systems can be
connected to each other over one link or, at most, two links. To address the requirement to
allow the SVC to know about the physical links between the two sites, the concept of remote
copy port groups was introduced.

A remote copy port group ID is a numerical tag that is associated with an SVC IP port to
indicate which physical IP link it is connected to. Multiple SVC nodes can be connected to the
same physical long-distance link and must therefore share the same remote copy port group
ID. In scenarios where there are two physical links between the local and remote clusters,
two remote copy port group IDs must be used to designate which IP addresses are connected
to which physical link. This configuration must be done by the system administrator by using
the GUI or the cfgportip CLI command.

Note: IP ports on both partners must have been configured with identical remote copy port
group IDs for the partnership to be established correctly.

The SVC IP addresses that are connected to the same physical link are designated with
identical remote copy port groups. The SVC supports three remote copy groups: 0, 1, and 2.
The SVC IP addresses are, by default, in remote copy port group 0. Ports in port group 0 are
not considered for creating remote copy data paths between two systems. For partnerships to
be established over IP links directly, IP ports must be configured in remote copy group 1 if a
single inter-site link exists, or in remote copy groups 1 and 2 if two inter-site links exist.
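
As a minimal sketch, an IP address can be assigned to remote copy port group 1 on one Ethernet port of a node that is attached to the inter-site link; the node, port number, and addresses are hypothetical, and the -remotecopy parameter of cfgportip is assumed here, so verify the syntax for your code level:
   cfgportip -node node1 -ip 192.168.41.11 -mask 255.255.255.0 -gw 192.168.41.1 -remotecopy 1 3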

You can assign one IPv4 address and one IPv6 address to each Ethernet port on the SVC
platforms. Each of these IP addresses can be shared between iSCSI host attach and the IP
partnership. The user must configure the required IP address (IPv4 or IPv6) on an Ethernet
port with a remote copy port group. The administrator might want to use IPv6 addresses for
remote copy operations and use IPv4 addresses on that same port for iSCSI host attach. This
configuration also implies that for two systems to establish an IP partnership, both systems
must have IPv6 addresses that are configured.

Administrators can choose to dedicate an Ethernet port for IP partnership only. In that case,
host access must be explicitly disabled for that IP address and any other IP address that is
configured on that Ethernet port.

Note: To establish an IP partnership, each SVC node must have only a single remote copy
port group that is configured, that is, 1 or 2. The remaining IP addresses must be in remote
copy port group 0.

9.6.8 Supported configurations


The following configurations for an IP partnership, which have been supported since the first
release, are described in this section:
򐂰 Two 2-node systems are in an IP partnership over a single inter-site link, as shown in
Figure 9-15 (configuration 1).


Figure 9-15 Single link with only one remote copy port group that is configured in each system

As shown in Figure 9-15, two systems exist: System A and System B. A single remote
copy port group 1 is created on Node A1 on System A and on Node B2 on System B
because only a single inter-site link exists to facilitate the IP partnership traffic. (The
administrator might choose to configure the remote copy port group on Node B1 on
System B instead of Node B2.) At any time, only the IP addresses that are configured in
remote copy port group 1 on the nodes in System A and System B participate in
establishing data paths between the two systems after the IP partnerships are created. In
this configuration, no failover ports are configured on the partner node in the same I/O
Group.
This configuration has the following characteristics:
– Only one node in each system has a configured remote copy port group, and no
failover ports are configured.
– If Node A1 in System A or Node B2 in System B failed, the IP partnership stops and
enters the not_present state until the failed nodes recover.
– After the nodes recover, the IP ports fail back, the IP partnership recovers, and the
partnership state changes to the fully_configured state.
– If the inter-site system link fails, the IP partnerships transition to the not_present state.
– This configuration is not recommended because it is not resilient to node failures.
򐂰 Two 2-node systems are in an IP partnership over a single inter-site link (with configured
failover ports), as shown in Figure 9-16 (configuration 2).


Figure 9-16 Only one remote copy group on each system and nodes with failover ports configured

As shown in Figure 9-16, two systems exist: System A and System B. A single remote
copy port group 1 is configured on two Ethernet ports, one each, on Node A1 and Node
A2 on System A and similarly, on Node B1 and Node B2 on System B. Although two ports
on each system are configured for remote copy port group 1, only one Ethernet port in
each system actively participates in the IP partnership process. This selection is
determined by a path configuration algorithm that is designed to choose data paths
between the two systems to optimize performance.
The other port on the partner node in the I/O Group behaves as a standby port that is
used in a node failure. If Node A1 fails in System A, the IP partnership continues servicing
replication I/O from Ethernet Port 2 because a failover port is configured on Node A2 on
Ethernet Port 2. However, discovery and path configuration logic to re-establish paths
post-failover might take time, which can cause partnerships to transition to the not_present
state for that period. The details of the particular IP port that actively participates in the IP
partnership are provided in the lsportip output (reported as used).
This configuration has the following characteristics:
– Each node in the I/O Group has the same remote copy port group that is configured.
However, only one port in that remote copy port group is active at any time at each
system.
– If Node A1 in System A or Node B2 in System B fails in its system, the rediscovery of
the IP partnerships is triggered and continues servicing the I/O from the failover port.
– The discovery mechanism that is triggered because of failover might introduce a delay
in which the partnerships momentarily transition to the not_present state and then
recover.
򐂰 Two 4-node systems are in an IP partnership over a single inter-site link (with failover ports
that are configured), as shown in Figure 9-17 (configuration 3).


Figure 9-17 Multinode systems single inter-site link with only one remote copy port group

As shown in Figure 9-17, there are two 4-node systems: System A and System B. A single
remote copy port group 1 is configured on nodes A1, A2, A3, and A4 on System A at Site
A, and on nodes B1, B2, B3, and B4 on System B at Site B. Although four ports are
configured for remote copy group 1, only one Ethernet port in each remote copy port
group on each system actively participates in the IP partnership process. Port selection is
determined by a path configuration algorithm. The other ports play the role of standby
ports.
If Node A1 fails in System A, the IP partnership selects one of the remaining ports that is
configured with remote copy port group 1 from any of the nodes from either of the two I/O
Groups in System A. However, it might take time (generally seconds) for discovery and
path configuration logic to re-establish the paths after the failover and this process can
cause partnerships to transition to the not_present state. This result leads remote copy
relationships to stop and the administrator might need to manually verify the issues in the
event log and start the relationships or remote copy Consistency Groups, if they do not
auto recover. The details about the particular IP port that is actively participating in the IP
partnership process are provided in the lsportip view (reported as used).
This configuration has the following characteristics:
– Each node has the remote copy port group that is configured in both I/O Groups.
However, only one port in that remote copy port group remains active and participates
in the IP partnership on each system.
– If Node A1 in System A or Node B2 in System B encountered a failure in the system,
the discovery of the IP partnerships is triggered and it continues servicing the I/O from
the failover port.
– The discovery mechanism that is triggered because of failover might introduce a delay
in which the partnerships momentarily transition to the not_present state and then
recover.
– The bandwidth of the single link is used completely.
򐂰 An eight-node system is in an IP partnership with a four-node system over a single
inter-site link, as shown in Figure 9-18 (configuration 4).

Figure 9-18 Multinode systems with single inter-site link with only one remote copy port group

As shown in Figure 9-18 on page 518, there is an eight-node system (System A in Site A)
and a four-node system (System B in Site B). A single remote copy port group 1 is
configured on nodes A1, A2, A5, and A6 on System A at Site A. Similarly, a single remote
copy port group 1 is configured on nodes B1, B2, B3, and B4 on System B.
Although there are four I/O Groups (eight nodes) in System A, any two I/O Groups at
maximum are supported to be configured for IP partnerships. If Node A1 fails in System A,
the IP partnership continues using one of the ports that is configured in the remote copy
port group from any of the nodes from either of the two I/O Groups in System A. However,
it might take time for discovery and path configuration logic to re-establish paths
post-failover and this delay might cause partnerships to transition to the not_present state.
This process can lead the remote copy relationships to stop and the administrator must
manually start them if the relationships do not auto recover. The details of which particular
IP port is actively participating in the IP partnership process is provided in the lsportip
output (reported as used).
This configuration has the following characteristics:
– Each node has the remote copy port group that is configured in both the I/O Groups
that are identified for participating in IP replication. However, only one port in that
remote copy port group remains active on each system and participates in IP
replication.
– If Node A1 in System A or Node B2 in System B fails in the system, the IP partnerships
trigger discovery and continue servicing the I/O from the failover ports.
– The discovery mechanism that is triggered because of failover might introduce a delay
in which the partnerships momentarily transition to the not_present state and then
recover.
– The bandwidth of the single link is used completely.
򐂰 Two 2-node systems exist with two inter-site links, as shown in Figure 9-19 (configuration
5).

Figure 9-19 Dual links with two remote copy groups on each system are configured

As shown in Figure 9-19, remote copy port groups 1 and 2 are configured on the nodes in
System A and System B because two inter-site links are available. In this configuration,
the failover ports are not configured on partner nodes in the I/O Group. Instead, the ports
are maintained in different remote copy port groups on both of the nodes and they remain
active and participate in the IP partnership by using both of the links.
However, if either of the nodes in the I/O Group fails (that is, if Node A1 on System A fails),
the IP partnership continues only from the available IP port that is configured in remote
copy port group 2. Therefore, the effective bandwidth of the two links is reduced to 50%.
Only the bandwidth of a single link is available until the failure is resolved.


This configuration has the following characteristics:


– Two inter-site links exist and two remote copy port groups are configured.
– Each node has only one IP port in remote copy port group 1 or 2.
– Both the IP ports in the two remote copy port groups participate simultaneously in the
IP partnerships. Therefore, both of the links are used.
– During node failure or link failure, the IP partnership traffic continues from the other
available link and the port group. Therefore, if two links of 10 Mbps each are available
and you have 20 Mbps of effective link bandwidth, bandwidth is reduced to 10 Mbps
only during a failure.
– After the node failure or link failure is resolved and failback happens, the entire
bandwidth of both of the links is available as before.
򐂰 Two 4-node systems are in an IP partnership with dual inter-site links, as shown in
Figure 9-20 (configuration 6).

Figure 9-20 Multinode systems with dual inter-site links between the two systems

As shown in Figure 9-20, there are two 4-node systems: System A and System B. This
configuration is an extension of configuration 5 to a multinode multi-I/O Group
environment. As seen in this configuration, two I/O Groups exist and each node in the I/O
Group has a single port that is configured in remote copy port group 1 or 2. Although two
ports are configured in remote copy port groups 1 and 2 on each system, only one IP port
in each remote copy port group on each system actively participates in the IP partnership.
The other ports that are configured in the same remote copy port group act as standby
ports in a failure. Which port in a configured remote copy port group participates in the IP
partnership at any moment is determined by a path configuration algorithm.
In this configuration, if Node A1 fails in System A, the IP partnership traffic continues from
Node A2 (that is, remote copy port group 2) and at the same time the failover also causes
discovery in remote copy port group 1. Therefore, the IP partnership traffic continues from
Node A3 on which remote copy port group 1 is configured. The details of the particular IP
port that is actively participating in the IP partnership process is provided in the lsportip
output (reported as used).
This configuration has the following characteristics:
– Each node has the remote copy port group that is configured in the I/O Group 1 or 2.
However, only one port per system in both remote copy port groups remains active and
participates in the IP partnership.
– Only a single port per system from each configured remote copy port group
participates simultaneously in the IP partnership. Therefore, both of the links are used.
– During node failure or port failure of a node that is actively participating in the IP
partnership, the IP partnership continues from the alternative port because another
port is in the system in the same remote copy port group but in a different I/O Group.
– The pathing algorithm can start the discovery of an available port in the affected
remote copy port group in the second I/O Group and pathing is re-established, which
restores the total bandwidth, that is, both of the links are available to support the IP
partnership.
򐂰 An eight-node system is in an IP partnership with a four-node system over dual inter-site
links, as shown in Figure 9-21 on page 522 (configuration 7).


Figure 9-21 Multinode systems (two I/O Groups on each system) with dual inter-site links between
the two systems

As shown in Figure 9-21, an eight-node System A is in Site A and a four-node System B is
in Site B. Because a maximum of two I/O Groups in the IP partnership are supported in a
system, although there are four I/O Groups (eight nodes), nodes from only two I/O Groups
are configured with remote copy port groups in System A. The remaining or all of the I/O
Groups can be configured to be remote copy partnerships over Fibre Channel. In this
configuration, two links exist and two I/O Groups are configured with remote copy port
groups 1 and 2, but path selection logic is managed by an internal algorithm. Therefore,
this configuration depends on the pathing algorithm to decide which of the nodes actively
participates in the IP partnership. Even if Node A5 and Node A6 are configured with
remote copy port groups correctly, active IP partnership traffic on both of the links might be
driven from Node A1 and Node A2 only.


If Node A1 fails in System A, IP partnership traffic continues from Node A2 (that is, remote
copy port group 2) and the failover also causes IP partnership traffic to continue from
Node A5 on which remote copy port group 1 is configured. The details of the particular IP
port actively participating in the IP partnership process are provided in the lsportip
output (reported as used).
This configuration has the following characteristics:
– Two I/O Groups exist with nodes in those I/O Groups that are configured in two remote
copy port groups because two inter-site links are available for participating in the IP
partnership. However, only one port per system in a particular remote copy port group
remains active and participates in the IP partnership.
– One port per system from each remote copy port group participates in the IP
partnership simultaneously. Therefore, both of the links are used.
– If a node or a port on the node that is actively participating in the IP partnership fails,
the remote copy data path is established from that port because another port is
available on an alternative node in the system with the same remote copy port group.
– The path selection algorithm starts discovery of the available port in the affected
remote copy port group in the alternative I/O Groups and paths are re-established,
restoring the total bandwidth across both links.
– The remaining or all of the I/O Groups can be in remote copy partnerships with other
systems.
򐂰 An example of unsupported configuration for single inter-site link is shown in Figure 9-22
(configuration 8).

Figure 9-22 Two node systems with single inter-site link and remote copy port groups are
configured

As shown in Figure 9-22, this configuration is similar to configuration 2, but it differs
because each node now has the same remote copy port group that is configured on more
than one IP port.


On any node, only one port at any time can participate in the IP partnership. Configuring
multiple ports in the same remote copy group on the same node is not supported.
򐂰 An example of an unsupported configuration for dual inter-site link is shown in Figure 9-23
(configuration 9).

Figure 9-23 Dual links with two remote copy port groups with failover port groups are configured

As shown in Figure 9-23, this configuration is similar to configuration 5, but it differs
because each node now also has two ports that are configured with remote copy port
groups. In this configuration, the path selection algorithm can select a path that might
cause partnerships to transition to the not_present state and then recover.
This result is a configuration restriction. The use of this configuration is not recommended
unless the configuration restriction is lifted in future releases.
򐂰 An example deployment for configuration 2 with a dedicated inter-site link is shown in
Figure 9-24 (configuration 10).


Figure 9-24 Deployment example

In this configuration, one port on each node in System A and System B is configured in
remote copy group 1 to establish an IP partnership and to support remote copy
relationships. A dedicated inter-site link is used for IP partnership traffic and iSCSI host
attach is disabled on those ports.
The following configuration steps are used:
a. Configure system IP addresses correctly so that they can be reached over the inter-site
link.
b. Qualify whether the partnerships must be created over IPv4 or IPv6 and then assign IP
addresses and open firewall ports 3260 and 3265.
c. Configure the IP ports for remote copy on both systems by using the following settings:
• Remote copy group: 1
• Host: No
• Assign IP address
d. Check that the maximum transmission unit (MTU) levels across the network meet the
requirements as set. (The default MTU is 1500 on the SVC.)
e. Establish the IP partnerships from both of the systems.
f. After the partnerships are in the fully_configured state, you can create the remote copy
relationships.
򐂰 An example deployment for configuration 5 with ports that are shared with host access is
shown in Figure 9-25 (configuration 11).


Figure 9-25 Deployment example

In this configuration, IP ports are shared by both iSCSI hosts and the IP partnership.
The following configuration steps are used:
a. Configure the system IP addresses correctly so that they can be reached over the
inter-site link.
b. Qualify whether the IP partnerships must be created over IPv4 or IPv6 and then assign
IP addresses and open firewall ports 3260 and 3265.
c. Configure the IP ports for remote copy on System A1 by using the following settings.
Node 1:
• Port 1, remote copy port group 1
• Host: Yes
• Assign IP address
Node 2:
• Port 4, remote copy port group 2
• Host: Yes
• Assign IP address
d. Configure the IP ports for remote copy on System B1 by using the following settings.
Node 1:
• Port 1, remote copy port group 1
• Host: Yes
• Assign IP address
Node 2:
• Port 4, remote copy port group 2
• Host: Yes
• Assign IP address
e. Check that the MTU levels across the network meet the requirements as set. (The
default MTU is 1500 on the SVC.)
f. Establish the IP partnerships from both systems.
g. After the partnerships are in the fully_configured state, you can create the remote copy
relationships.

9.6.9 Setting up the SVC system IP partnership by using the GUI


All of the required steps to create the IP partnership between the SVC systems by using
the SVC GUI are described in detail in Chapter 11, “Operations using the GUI” on
page 715.

9.7 Remote Copy


In this section, we describe the remote copy services: a synchronous remote copy called
Metro Mirror (MM), an asynchronous remote copy called Global Mirror (GM), and Global
Mirror with Change Volumes. Remote Copy in the SVC is similar to Remote Copy in the IBM
System Storage DS8000 family at a functional level, but the implementation differs.

The IBM SVC provides a single point of control when remote copy is enabled in your network
(regardless of the disk subsystems that are used) if those disk subsystems are supported by
the SVC.

The general application of remote copy services is to maintain two real-time synchronized
copies of a volume. Often, two copies are geographically dispersed between two IBM SVC
systems, although it is possible to use MM or GM within a single system (within an I/O
Group). If the master copy fails, you can enable an auxiliary copy for I/O operation.

Tips: Intracluster MM/GM uses more resources within the system when compared to an
intercluster MM/GM relationship, where resource allocation is shared between the
systems.

Use intercluster MM/GM when possible. For mirroring volumes in the same I/O Group, it is
better to use Volume Mirroring or the FlashCopy feature.

A typical application of this function is to set up a dual-site solution that uses two IBM SVC
systems. The first site is considered the primary or production site, and the second site is
considered the backup site or failover site, which is activated when a failure at the first site is
detected.

9.7.1 Multiple SVC system mirroring


Each IBM SVC system can maintain up to three partner system relationships, which allows as
many as four systems to be directly associated with each other. This IBM SVC partnership
capability enables the implementation of disaster recovery (DR) solutions.

Note: For more information about restrictions and limitations of native IP replication, see
9.6.3, “IP partnership limitations” on page 509.

Figure 9-26 shows an example of a multiple system mirroring configuration.


Figure 9-26 Multiple system mirroring configuration example

IBM Spectrum Virtualize software level restrictions for multiple system mirroring:

Note the following points:


򐂰 A partnership between a system that is running V6.1 and a system that is running a
version earlier than V4.3 is not supported.
򐂰 Systems in a partnership where one system is running 6.1.0 and the other system is
running V4.3 cannot participate in other partnerships with other systems.
򐂰 Systems that are all running V6.1 or V5.1 can participate in up to three system
partnerships.
򐂰 To use an IBM SVC as a system partner, the IBM SVC must have V6.3 or newer code.
The layer settings cannot be changed on IBM SVC systems.
򐂰 To use native IP replication between SVC systems, both systems must be at V7.2 or
higher. A maximum of one IP partnership is allowed per SVC system.

Object name length in different IBM Spectrum Virtualize code versions:

Starting with V6.1, object names of up to 63 characters are supported. Previous levels
supported up to 15 characters only.

When V6.1 systems are partnered with V4.3 and V5.1 systems, various object names are
truncated at 15 characters when they are displayed from V4.3 and V5.1 systems.


Supported multiple system mirroring topologies


Multiple system mirroring allows for various partnership topologies, as shown in the example
in Figure 9-27.

The following example is a star topology: A → B, A → C, and A → D.

Figure 9-27 V7000 star topology

Figure 9-27 shows four systems in a star topology, with System A at the center. System A can
be a central DR site for the three other locations.

By using a star topology, you can migrate applications by using a process, such as the one
described in the following example:
1. Suspend application at A.
2. Remove the A → B relationship.
3. Create the A → C relationship (or the B → C relationship).
4. Synchronize to system C, and ensure that A → C is established:
– A → B, A → C, A → D, B → C, B → D, and C → D
– A → B, A → C, and B → C

Figure 9-28 shows an example of a triangle topology:

A → B, A → C, and B → C.

Figure 9-28 SVC triangle topology

Figure 9-29 shows an example of an SVC fully connected topology.

A → B, A → C, A → D, B → D, and C → D.


Figure 9-29 SVC fully connected topology

Figure 9-29 is a fully connected mesh in which every system has a partnership to each of the
three other systems. This topology allows volumes to be replicated between any pair of
systems, for example:

A → B, A → C, and B → C.

Figure 9-30 shows a daisy-chain topology.

Figure 9-30 SVC daisy-chain topology

Although systems can have up to three partnerships, volumes can be part of only one remote
copy relationship, for example, A → B.

System partnership intermix: All of the preceding topologies are valid for the intermix of
the IBM SAN Volume Controller with the Storwize V7000 if the Storwize V7000 is set to the
replication layer and running V6.3 or later.

9.7.2 Importance of write ordering


Many applications that use block storage have a requirement to survive failures, such as loss
of power or a software crash, and to not lose data that existed before the failure. Because
many applications must perform large numbers of update operations in parallel, maintaining
write ordering is key to ensuring the correct operation of applications following a disruption.

An application that performs a high volume of database updates is designed with the concept
of dependent writes. With dependent writes, it is important to ensure that an earlier write
completed before a later write is started. Reversing or performing the order of writes
differently than the application intended can undermine the application’s algorithms and can
lead to problems, such as detected or undetected data corruption.

The IBM Spectrum Virtualize Metro Mirror and Global Mirror implementation operates in a
manner that is designed to always keep a consistent image at the secondary site. The Global
Mirror implementation uses complex algorithms that operate to identify sets of data and
number those sets of data in sequence. The data is then applied at the secondary site in the
defined sequence.


Operating in this manner ensures that if the relationship is in a consistent_synchronized state,
the Global Mirror target data is at least crash consistent and allows for quick recovery through
application crash recovery facilities.

For more information about dependent writes, see 9.4.3, “Consistency Groups” on page 486.

Remote Copy Consistency Groups


A Remote Copy Consistency Group can contain an arbitrary number of relationships up to the
maximum number of MM/GM relationships that is supported by the IBM SVC system.
MM/GM commands can be issued to a Remote Copy Consistency Group and, therefore,
simultaneously for all MM/GM relationships that are defined within that Consistency Group or
to a single MM/GM relationship that is not part of a Remote Copy Consistency Group. For
example, when a startrcconsistgrp command is issued to the Consistency Group, all of the
MM/GM relationships in the Consistency Group are started at the same time.
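
A minimal CLI sketch of this grouping follows; the system, volume, and group names are hypothetical:
1. Create a Consistency Group that points at the partner system:
   mkrcconsistgrp -cluster REMOTE_SYS -name CG_DB
2. Create the relationships as members of the group (add -global for Global Mirror):
   mkrcrelationship -master VOL_DB01 -aux VOL_DB01_DR -cluster REMOTE_SYS -consistgrp CG_DB
   mkrcrelationship -master VOL_DB02 -aux VOL_DB02_DR -cluster REMOTE_SYS -consistgrp CG_DB
3. Start all of the relationships in the group with a single command:
   startrcconsistgrp CG_DB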

Figure 9-31 on page 531 shows the concept of Metro Mirror Consistency Groups. The same
applies to Global Mirror Consistency Groups.

Figure 9-31 MM Consistency Group

Because the MM_Relationship 1 and 2 are part of the Consistency Group, they can be
handled as one entity. The stand-alone MM_Relationship 3 is handled separately.

Certain uses of MM/GM require the manipulation of more than one relationship. Remote
Copy Consistency Groups can group relationships so that they are manipulated in unison.

Consider the following points:


򐂰 MM/GM relationships can be part of a Consistency Group, or they can be stand-alone,
and, therefore, handled as single instances.
򐂰 A Consistency Group can contain zero or more relationships. An empty Consistency
Group with zero relationships in it has little purpose until it is assigned its first relationship,
except that it has a name.


򐂰 All relationships in a Consistency Group must have corresponding master and auxiliary
volumes.
򐂰 All relationships in one Consistency Group must be the same type, for example only Metro
Mirror or only Global Mirror.

Although Consistency Groups can be used to manipulate sets of relationships that do not
need to satisfy these strict rules, this manipulation can lead to undesired side effects. The
rules behind a Consistency Group mean that certain configuration commands are prohibited.
These configuration commands are not prohibited if the relationship is not part of a
Consistency Group.

For example, consider the case of two applications that are independent, yet they are placed
into a single Consistency Group. If an error occurs, synchronization is lost and a background
copy process is required to recover synchronization. While this process is progressing,
MM/GM rejects attempts to enable access to the auxiliary volumes of either application.

If one application finishes its background copy more quickly than the other application,
MM/GM still refuses to grant access to its auxiliary volumes even though access is safe in this
case. The MM/GM policy is to refuse access to the entire Consistency Group if any part of it is
inconsistent.

Stand-alone relationships and Consistency Groups share a common configuration and state
model. All of the relationships in a non-empty Consistency Group have the same state as the
Consistency Group.

9.7.3 Remote copy intercluster communication


With traditional Fibre Channel (FC), the intercluster communication between systems in a
Metro Mirror and Global Mirror partnership is performed over the SAN. In the following
section, we describe this communication path.

For more information about intercluster communication between systems in an IP
partnership, see 9.6.6, “States of IP partnership” on page 512.

Zoning
The IBM SVC node ports on each IBM SVC system must communicate with each other to
create the partnership. Switch zoning is critical to facilitating intercluster communication.

Intercluster communication channels


When an IBM SVC system partnership is defined on a pair of systems, the following
intercluster communication channels are established:
򐂰 A single control channel, which is used to exchange and coordinate configuration
information
򐂰 I/O channels between each of these nodes in the systems

These channels are maintained and updated as nodes and links appear and disappear from
the fabric, and are repaired to maintain operation where possible. If communication between
the IBM SVC systems is interrupted or lost, an event is logged (and the Metro Mirror and
Global Mirror relationships stop).

Alerts: You can configure the IBM SVC to raise Simple Network Management Protocol
(SNMP) traps to the enterprise monitoring system to alert on events that indicate an
interruption in internode communication occurred.


Intercluster links
All IBM SVC nodes maintain a database of other devices that are visible on the fabric. This
database is updated as devices appear and disappear.

Devices that advertise themselves as IBM SVC nodes are categorized according to the IBM
SVC system to which they belong. The IBM SVC nodes that belong to the same system
establish communication channels between themselves and begin to exchange messages to
implement clustering and the functional protocols of the IBM SVC.

Nodes that are in separate systems do not exchange messages after initial discovery is
complete, unless they are configured together to perform a remote copy relationship.

The intercluster link carries control traffic to coordinate activity between two systems. The link
is formed between one node in each system. The traffic between the designated nodes is
distributed among logins that exist between those nodes.

If the designated node fails (or all of its logins to the remote system fail), a new node is
chosen to carry control traffic. This node change causes the I/O to pause, but it does not put
the relationships in a ConsistentStopped state.

Note: It is recommended to use the chsystem command with the -partnerfcportmask
parameter to dedicate several FC ports only to system-to-system traffic to ensure that
remote copy is not affected by other traffic, such as host-to-node traffic or node-to-node
traffic within the same system.
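
As a sketch of this recommendation, the following command dedicates two FC ports (ports 3
and 4 in this example) to system-to-system traffic. The mask is a binary string that is read
from right to left; the value shown and the port numbering are examples only, so verify the
exact syntax for your code level and environment:

svctask chsystem -partnerfcportmask 0000000000001100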

9.7.4 Metro Mirror overview


Metro Mirror establishes a synchronous relationship between two volumes of equal size. The
volumes in a Metro Mirror relationship are referred to as the master (primary) volume and the
auxiliary (secondary) volume. Traditional FC Metro Mirror is primarily used in a metropolitan
area or geographical area up to a maximum distance of 300 km (186.4 miles) to provide
synchronous replication of data. With synchronous copies, host applications write to the
master volume, but they do not receive confirmation that the write operation completed until
the data is written to the auxiliary volume. This action ensures that both the volumes have
identical data when the copy completes. After the initial copy completes, the Metro Mirror
function always maintains a fully synchronized copy of the source data at the target site.

Metro Mirror has the following characteristics:


򐂰 Zero recovery point objective (RPO)
򐂰 Synchronous
򐂰 Production application performance that is affected by round-trip latency

Increased distance directly affects host I/O performance because the writes are synchronous.
Use the requirements for application performance when you are selecting your Metro Mirror
auxiliary location.

Consistency Groups can be used to maintain data integrity for dependent writes, which is
similar to FlashCopy Consistency Groups (FlashCopy Consistency Groups are described in
9.4, “Implementing FlashCopy” on page 484).

The IBM SVC provides intracluster and intercluster Metro Mirror.

Intracluster Metro Mirror


Intracluster Metro Mirror performs the intracluster copying of a volume, in which both volumes
belong to the same system and I/O Group within the system. Because it is within the same
I/O Group, bitmap space must be sufficient within the I/O Group for both sets of volumes and
licensing must be on the system.

Important: Performing Metro Mirror across I/O Groups within a system is not supported.

Intercluster Metro Mirror


Intercluster Metro Mirror performs intercluster copying of a volume, in which one volume
belongs to a system and the other volume belongs to a separate system.

Two IBM SVC systems must be defined in a partnership, which must be performed on both
IBM SVC systems in order to establish a fully functional Metro Mirror partnership.
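
As a minimal sketch, assuming two systems named ITSO_SVC1 and ITSO_SVC2 and a
1 Gbps replication link, the FC partnership might be created by running a command similar to
the following on each system (the system names and bandwidth values are placeholders):

On ITSO_SVC1: svctask mkfcpartnership -linkbandwidthmbits 1000 -backgroundcopyrate 50 ITSO_SVC2
On ITSO_SVC2: svctask mkfcpartnership -linkbandwidthmbits 1000 -backgroundcopyrate 50 ITSO_SVC1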

By using standard single-mode connections, the supported distance between two SVC
systems in an MM partnership is 10 km (6.2 miles), although greater distances can be
achieved by using extenders. For extended distance solutions, contact your IBM marketing
representative.

Limit: When a local fabric and a remote fabric are connected for MM purposes, the
inter-switch link (ISL) hop count between a local node and a remote node cannot exceed
seven.

9.7.5 Synchronous remote copy


Metro Mirror is a fully synchronous remote copy technique that ensures that writes are
committed at both the master and auxiliary volumes before write completion is acknowledged
to the host, but only if writes to the auxiliary volumes are possible.

Events, such as a loss of connectivity between systems, can cause mirrored writes from the
master volume and the auxiliary volume to fail. In that case, Metro Mirror suspends writes to
the auxiliary volume and allows I/O to the master volume to continue to avoid affecting the
operation of the master volumes.

Figure 9-32 shows how a write to the master volume is mirrored to the cache of the auxiliary
volume before an acknowledgment of the write is sent back to the host that issued the write.
This process ensures that the auxiliary is synchronized in real time if it is needed in a failover
situation.


Figure 9-32 Write on volume in Metro Mirror relationship

However, this process also means that the application is exposed to the latency and
bandwidth limitations (if any) of the communication link between the master and auxiliary
volumes. This process might lead to unacceptable application performance, particularly when
placed under peak load. Therefore, the use of traditional Fibre Channel Metro Mirror has
distance limitations that are based on your performance requirements. The IBM SVC does
not support more than 300 km (186.4 miles).

9.7.6 Metro Mirror features


IBM SVC Metro Mirror supports the following features:
򐂰 Synchronous remote copy of volumes that are dispersed over metropolitan distances.
򐂰 The IBM SVC implements Metro Mirror relationships between volume pairs, with each
volume in a pair managed by an IBM Storwize V7000 system or IBM SAN Volume
Controller system (requires V6.3 or later).
򐂰 Supports intracluster Metro Mirror where both volumes belong to the same system (and
I/O Group).
򐂰 The IBM SVC supports intercluster Metro Mirror where each volume belongs to a separate
IBM SVC system. You can configure a specific IBM SVC system for partnership with
another system. All intercluster Metro Mirror processing occurs between two IBM SVC
systems that are configured in a partnership.
򐂰 Intercluster and intracluster Metro Mirror can be used concurrently.
򐂰 The IBM SVC does not require that a control network or fabric is installed to manage
Metro Mirror. For intercluster Metro Mirror, the IBM SVC maintains a control link between
two systems. This control link is used to control the state and coordinate updates at either
end. The control link is implemented on top of the same FC fabric connection that the IBM
SVC uses for Metro Mirror I/O.
򐂰 The IBM SVC implements a configuration model that maintains the Metro Mirror
configuration and state through major events, such as failover, recovery, and
resynchronization, to minimize user configuration action through these events.


The IBM SVC allows the resynchronization of changed data so that write failures that occur
on the master or auxiliary volumes do not require a complete resynchronization of the
relationship.

9.7.7 Metro Mirror attributes


The Metro Mirror function in the IBM SVC offers the following attributes:
򐂰 A partnership is created between two IBM SVC systems or an IBM SVC system and an
IBM Storwize V7000 operating in the replication layer (for intercluster Metro Mirror).
򐂰 A Metro Mirror relationship is created between two volumes of the same size.
򐂰 To manage multiple Metro Mirror relationships as one entity, relationships can be made
part of a Metro Mirror Consistency Group, which ensures data consistency across multiple
Metro Mirror relationships and provides ease of management.
򐂰 When a Metro Mirror relationship is started and when the background copy completes, the
relationship becomes consistent and synchronized.
򐂰 After the relationship is synchronized, the auxiliary volume holds a copy of the production
data at the primary, which can be used for DR.
򐂰 The auxiliary volume is in read-only mode when the relationship is active.
򐂰 To access the auxiliary volume, the Metro Mirror relationship must be stopped with the
access option enabled, before write I/O is allowed to the auxiliary.
򐂰 The remote host server is mapped to the auxiliary volume, and the disk is available for I/O.

9.7.8 Global Mirror


In the following topics, we describe the Global Mirror copy service, which is an asynchronous
remote copy service. It provides and maintains a consistent mirrored copy of a source volume
to a target volume.

Global Mirror establishes a Global Mirror relationship between two volumes of equal size. The
volumes in a Global Mirror relationship are referred to as the master (source) volume and the
auxiliary (target) volume, which is the same as Metro Mirror.

Consistency Groups can be used to maintain data integrity for dependent writes, which is
similar to FlashCopy Consistency Groups.

Global Mirror writes data to the auxiliary volume asynchronously, which means that host
writes to the master volume provide the host with confirmation that the write is complete
before the I/O completes on the auxiliary volume.

Global Mirror has the following characteristics:


򐂰 Near-zero RPO
򐂰 Asynchronous
򐂰 Production application performance that is impacted by I/O sequencing preparation time


Intracluster Global Mirror


Although Global Mirror is available for intracluster, it has no functional value for production
use. Intracluster Metro Mirror provides the same capability with less overhead. However,
leaving this functionality in place simplifies testing and allows for client experimentation and
testing (for example, to validate server failover on a single test system). As with intracluster
Metro Mirror, you must consider the increase in the license requirement because the source
and target exist on the same IBM SVC system.

Intercluster Global Mirror


Intercluster Global Mirror operations require a pair of IBM SVC systems that are connected by
a number of intercluster links. The two IBM SVC systems must be defined in a partnership to
establish a fully functional Global Mirror relationship.

Limit: When a local fabric and a remote fabric are connected for Global Mirror purposes,
the ISL hop count between a local node and a remote node must not exceed seven hops.

9.7.9 Asynchronous remote copy


Global Mirror is an asynchronous remote copy technique. In asynchronous remote copy, the
write operations are completed on the primary site and the write acknowledgment is sent to
the host before it is received at the secondary site. An update of this write operation is sent to
the secondary site at a later stage, which provides the capability to perform remote copy over
distances that exceed the limitations of synchronous remote copy.

The Global Mirror function provides the same function as Metro Mirror remote copy, but over
long-distance links with higher latency without requiring the hosts to wait for the full round-trip
delay of the long distance link.

Figure 9-33 on page 537 shows that a write operation to the master volume is acknowledged
back to the host that is issuing the write before the write operation is mirrored to the cache for
the auxiliary volume.

Figure 9-33 Global Mirror write sequence


The Global Mirror algorithms maintain a consistent image on the auxiliary at all times. They
achieve this consistent image by identifying sets of I/Os that are active concurrently at the
master, assigning an order to those sets, and applying those sets of I/Os in the assigned
order at the secondary. As a result, Global Mirror maintains the features of Write Ordering
and Read Stability.

The multiple I/Os within a single set are applied concurrently. The process that marshals the
sequential sets of I/Os operates at the secondary system. Therefore, the process is not
subject to the latency of the long distance link. These two elements of the protocol ensure
that the throughput of the total system can be grown by increasing system size while
maintaining consistency across a growing data set.

Global Mirror write I/O from the production IBM SVC system to a secondary IBM SVC system
requires serialization and sequence-tagging before being sent across the network to the
remote site (to maintain a write-order consistent copy of data).

To avoid affecting the production site, the IBM SVC allows more parallelism in processing and
managing Global Mirror writes on the secondary system by using the following methods:
򐂰 Nodes on the secondary system store replication writes in new redundant non-volatile
cache
򐂰 Cache content details are shared between nodes
򐂰 Cache content details are batched together to make node-to-node latency less of an issue
򐂰 Nodes intelligently apply these batches in parallel as soon as possible
򐂰 Nodes internally manage and optimize Global Mirror secondary write I/O processing

In a failover scenario where the secondary site must become the master source of data,
certain updates might be missing at the secondary site. Therefore, any applications that use
this data must have an external mechanism for recovering the missing updates and
reapplying them, for example, a transaction log replay.

Global Mirror is supported over FC, FC over IP (FCIP), FC over Ethernet (FCoE), and native
IP connections. The maximum supported round-trip latency is 80 ms, which corresponds to
about 4000 km (2485.48 miles) between mirrored systems. However, starting with V7.4, this
limit was significantly increased for certain IBM Storwize Gen2 and IBM SVC configurations.
Figure 9-34 shows the current supported distances for Global Mirror remote copy.

Figure 9-34 Supported Global Mirror distances

9.7.10 IBM SVC Global Mirror features


IBM SVC Global Mirror supports the following features:
򐂰 Asynchronous remote copy of volumes that are dispersed over metropolitan-scale
distances.


򐂰 The IBM SVC implements the Global Mirror relationship between a volume pair, with each
volume in the pair being managed by an IBM SVC or Storwize V7000.
򐂰 The IBM SVC supports intracluster Global Mirror where both volumes belong to the same
system (and I/O Group).
򐂰 The IBM SVC intercluster Global Mirror is supported if each volume belongs to a separate
IBM SVC system. An IBM SVC system can be configured for partnership with between
one and three other systems. For more information about IP partnership restrictions, see
9.6.3, “IP partnership limitations” on page 509.
򐂰 Intercluster and intracluster Global Mirror can be used concurrently but not for the same
volume.
򐂰 The IBM SVC does not require a control network or fabric to be installed to manage Global
Mirror. For intercluster Global Mirror, the IBM SVC maintains a control link between the
two systems. This control link is used to control the state and to coordinate the updates at
either end. The control link is implemented on top of the same FC fabric connection that
the IBM SVC uses for Global Mirror I/O.
򐂰 The IBM SVC implements a configuration model that maintains the Global Mirror
configuration and state through major events, such as failover, recovery, and
resynchronization, to minimize user configuration action through these events.
򐂰 The IBM SVC implements flexible resynchronization support, enabling it to resynchronize
volume pairs that experienced write I/Os to both disks and to resynchronize only those
regions that changed.
򐂰 An optional feature for Global Mirror is a delay simulation to be applied on writes that are
sent to auxiliary volumes. It is useful in intracluster scenarios for testing purposes.

Colliding writes
Before V4.3.1, the Global Mirror algorithm required that only a single write is active on any
512-byte logical block address (LBA) of a volume. If a further write is received from a host
while the auxiliary write is still active (even though the master write might complete), the new
host write is delayed until the auxiliary write is complete. This restriction is needed if a series
of writes to the auxiliary must be retried (which is called reconstruction). Conceptually, the
data for reconstruction comes from the master volume.

If multiple writes are allowed to be applied to the master for a sector, only the most recent
write gets the correct data during reconstruction. If reconstruction is interrupted for any
reason, the intermediate state of the auxiliary is inconsistent.

Applications that deliver such write activity do not achieve the performance that GM is
intended to support. A volume statistic is maintained about the frequency of these collisions.
An attempt is made to allow multiple writes to a single location to be outstanding in the GM
algorithm. Master writes still need to be serialized, and the intermediate states of the master
data must be kept in a non-volatile journal while the writes are outstanding to maintain the
correct write ordering during reconstruction. Reconstruction must never overwrite data on the
auxiliary with an earlier version. The volume statistic that is monitoring colliding writes is now
limited to those writes that are not affected by this change.

Figure 9-35 shows a colliding write sequence example.


Figure 9-35 Colliding writes example

The following numbers correspond to the numbers that are shown in Figure 9-35 on
page 540:
򐂰 (1) The first write is performed from the host to LBA X.
򐂰 (2) The completion of the write is acknowledged to the host even though the mirrored write
to the auxiliary volume is not yet complete.
򐂰 (1’) and (2’) Steps occur asynchronously with the first write.
򐂰 (3) The second write is performed from the host also to LBA X. If this write occurs before
(2’), the write is written to the journal file.
򐂰 (4) The completion of the second write is acknowledged to the host.

Delay simulation
An optional feature for Global Mirror permits a delay simulation to be applied on writes that
are sent to auxiliary volumes. This feature allows you to test the effect of colliding writes.
Therefore, you can use this feature to test an application before the full deployment of the
feature. The feature can be enabled separately for intracluster or intercluster Global
Mirror. You specify the delay setting by using the chsystem command and view the delay by
using the lssystem command. The gm_intra_cluster_delay_simulation field expresses the
amount of time that intracluster auxiliary I/Os are delayed. The
gm_inter_cluster_delay_simulation field expresses the amount of time that intercluster
auxiliary I/Os are delayed. A value of zero disables the feature.
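
For example, a 20 ms delay on intercluster Global Mirror writes might be simulated and then
removed with commands similar to the following. The -gminterdelaysimulation parameter
name is an assumption that is based on the lssystem field names, so verify the exact syntax
for your code level:

svctask chsystem -gminterdelaysimulation 20
svcinfo lssystem            (check the gm_inter_cluster_delay_simulation field)
svctask chsystem -gminterdelaysimulation 0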

Tip: If you are experiencing repeated problems with the delay on your link, ensure that the
delay simulator was properly disabled.

9.7.11 Using Change Volumes with Global Mirror


Global Mirror is designed to achieve an RPO as low as possible so that data is as up-to-date
as possible. This design places several strict requirements on your infrastructure. In certain
situations, such as low network link quality or congested or overloaded hosts, you might be
affected by multiple 1920 (congestion) errors.

Congestion errors happen in the following primary situations:


򐂰 Congestion at the source site through the host or network
򐂰 Congestion in the network link or network path
򐂰 Congestion at the target site through the host or network

Global Mirror has functionality that is designed to address the following conditions, which
might negatively affect certain Global Mirror implementations:
򐂰 The estimation of the bandwidth requirements tends to be complex.
򐂰 Ensuring the latency and bandwidth requirements can be met is often difficult.
򐂰 Congested hosts on the source or target site can cause disruption.
򐂰 Congested network links can cause disruption with only intermittent peaks.

To address these issues, Change Volumes were added as an option for Global Mirror
relationships. Change Volumes use the FlashCopy functionality, but they cannot be
manipulated as FlashCopy volumes because they are for a special purpose only. Change
Volumes replicate point-in-time images on a cycling period (the default is 300 seconds). Only
the state of the data at the point in time that the image was taken needs to be replicated,
instead of all of the updates that occurred during the period. The use of this function can
provide significant reductions in the volume of replicated data.

Global Mirror with Change Volumes has the following characteristics:


򐂰 Larger RPO
򐂰 Point-in-time copies
򐂰 Asynchronous
򐂰 Possible system performance overhead because point-in-time copies are created locally

Figure 9-36 shows a simple Global Mirror relationship without Change Volumes.

Figure 9-36 Global Mirror without Change Volumes

With Global Mirror with Change Volumes, this environment looks as shown in Figure 9-37.


Figure 9-37 Global Mirror with Change Volumes

With Change Volumes, a FlashCopy mapping exists between the primary volume and the
primary Change Volume. The mapping is updated on the cycling period (60 seconds to one
day). The primary Change Volume is then replicated to the secondary Global Mirror volume at
the target site, which is then captured in another Change Volume on the target site. This
approach provides an always consistent image at the target site and protects your data from
being inconsistent during resynchronization.

How Change Volumes might save you replication traffic is shown in Figure 9-38 on page 542.

Figure 9-38 Global Mirror I/O replication without Change Volumes

In Figure 9-38, you can see a number of I/Os on the source and the same number on the
target, and in the same order. Assuming that this data is the same set of data being updated
repeatedly, this approach results in wasted network traffic. The I/O can be completed much
more efficiently, as shown in Figure 9-39.

Figure 9-39 Global Mirror I/O with Change Volumes


In Figure 9-39, the same data is being updated repeatedly; therefore, Change Volumes
demonstrate significant I/O transmission savings by needing to send I/O number 16 only,
which was the last I/O before the cycling period.

You can adjust the cycling period by using the chrcrelationship -cycleperiodseconds
<60 - 86400> command from the CLI. If a copy does not complete in the cycle period, the
next cycle does not start until the prior cycle completes. For this reason, the use of Change
Volumes offers the following possibilities for RPO:
򐂰 If your replication completes in the cycling period, your RPO is twice the cycling period.
򐂰 If your replication does not complete within the cycling period, your RPO is twice the
completion time. The next cycling period starts immediately after the prior cycling period is
finished.
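
For example, an existing Global Mirror with Change Volumes relationship might be changed to
a 10-minute cycling period with a command similar to the following (the relationship name
GMCV_REL1 is a placeholder):

svctask chrcrelationship -cycleperiodseconds 600 GMCV_REL1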

Carefully consider your business requirements versus the performance of Global Mirror with
Change Volumes. Global Mirror with Change Volumes increases the intercluster traffic for
more frequent cycling periods. Therefore, selecting the shortest cycle periods possible is not
always the answer. In most cases, the default cycle period meets requirements and performs well.

Important: When you create your Global Mirror volumes with Change Volumes, ensure
that you remember to select the Change Volume on the auxiliary (target) site. Failure to do
so leaves you exposed during a resynchronization operation.

9.7.12 Distribution of work among nodes


For the best performance, MM/GM volumes must have their preferred nodes evenly
distributed among the nodes of the systems. Each volume within an I/O Group has a
preferred node property that can be used to balance the I/O load between nodes in that
group. MM/GM also uses this property to route I/O between systems.

If this preferred practice is not maintained, for example, source volumes are assigned to only
one node in the I/O group, you can change the preferred node for each volume to distribute
volumes evenly between the nodes. You can also change the preferred node for volumes that
are in a remote copy relationship without affecting the host I/O to a particular volume. The
remote copy relationship type does not matter. (The remote copy relationship type can be
MM, GM, or GM with Change Volumes.) You can change the preferred node both to the
source and target volumes that are participating in the remote copy relationship.
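
As a sketch, assuming that the movevdisk command is available at your code level, the
preferred node of a volume in a remote copy relationship might be changed as follows (the
volume name and node ID are placeholders):

svctask movevdisk -node 2 MM_Master_Vol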

9.7.13 Background copy performance


The background copy performance is subject to sufficient RAID controller bandwidth.
Performance is also subject to other potential bottlenecks, such as the intercluster fabric, and
possible contention from host I/O for the IBM SVC bandwidth resources.

Background copy I/O is scheduled to avoid bursts of activity that might have an adverse effect
on system behavior. An entire grain of tracks on one volume is processed at around the same
time but not as a single I/O. Double buffering is used to try to use sequential performance
within a grain. However, the next grain within the volume might not be scheduled for a while.
Multiple grains might be copied simultaneously and might be enough to satisfy the requested
rate, unless the available resources cannot sustain the requested rate.

Global Mirror paces the rate at which background copy is performed by the appropriate
relationships. Background copy occurs on relationships that are in the InconsistentCopying
state with a status of Online.


The quota of background copy (configured on the intercluster link) is divided evenly between
all nodes that are performing background copy for one of the eligible relationships. This
allocation is made irrespective of the number of disks for which the node is responsible. Each
node, in turn, divides its allocation evenly between the multiple relationships that are
performing a background copy.

The default value of the background copy is 25 MBps per volume.
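
As a sketch, assuming that this system-wide parameter is the relationship bandwidth limit that
is set with the chsystem command, raising the rate from the default 25 MBps to 50 MBps per
relationship might look like the following commands (verify the parameter name for your code
level):

svctask chsystem -relationshipbandwidthlimit 50
svcinfo lssystem            (check the relationship_bandwidth_limit field)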

Important: The background copy value is a system-wide parameter that can be changed
dynamically, but only on a per-system basis and not on a per-relationship basis. Therefore, the copy
rate of all relationships changes when this value is increased or decreased. In systems
with many remote copy relationships, increasing this value might affect overall system or
intercluster link performance. The background copy rate can be changed between 1 - 1000
MBps.

9.7.14 Thin-provisioned background copy


Metro Mirror and Global Mirror relationships preserve the space-efficiency of the master.
Conceptually, the background copy process detects a deallocated region of the master and
sends a special “zero buffer” to the auxiliary. If the auxiliary volume is thin-provisioned and the
region is deallocated, the special buffer prevents a write and, therefore, an allocation. If the
auxiliary volume is not thin-provisioned or the region in question is an allocated region of a
thin-provisioned volume, a buffer of “real” zeros is synthesized on the auxiliary and written as
normal.

9.7.15 Methods of synchronization


This section describes two methods that can be used to establish a synchronized
relationship.

Full synchronization after creation


The full synchronization after creation method is the default method. It is the simplest method
because it requires no administrative activity apart from issuing the necessary commands.
However, in certain environments, the available bandwidth can make this method unsuitable.

Use the following command sequence for a single relationship:


򐂰 Run mkrcrelationship without specifying the -sync option.
򐂰 Run startrcrelationship without specifying the -clean option.
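
A minimal sketch of this sequence for a single Metro Mirror relationship follows. The volume,
remote system, and relationship names are placeholders:

svctask mkrcrelationship -master MM_Master_Vol -aux MM_Aux_Vol -cluster ITSO_SVC2 -name MM_REL1
svctask startrcrelationship MM_REL1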

Synchronized before creation


In this method, the administrator must ensure that the master and auxiliary volumes contain
identical data before creating the relationship by using the following technique:
򐂰 Both disks are created with the security delete feature to make all data zero.
򐂰 A complete tape image (or other method of moving data) is copied from one disk to the
other disk.

With this technique, do not allow I/O on the master or auxiliary before the relationship is
established.

Then, the administrator must run the following commands:


򐂰 Run mkrcrelationship with the -sync flag.


򐂰 Run startrcrelationship without the -clean flag.
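
Under the same naming assumptions as the previous example, the command sequence might
look like the following:

svctask mkrcrelationship -master MM_Master_Vol -aux MM_Aux_Vol -cluster ITSO_SVC2 -name MM_REL2 -sync
svctask startrcrelationship MM_REL2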

Important: Failure to perform these steps correctly can cause MM/GM to report the
relationship as consistent when it is not, therefore, creating a data loss or data integrity
exposure for hosts that access data on the auxiliary volume.

9.7.16 Practical use of Metro Mirror


The master volume is the production volume, and updates to this copy are mirrored in real
time to the auxiliary volume. The contents of the auxiliary volume that existed when the
relationship was created are destroyed.

Switching copy direction: The copy direction for a Metro Mirror relationship can be
switched so that the auxiliary volume becomes the master, and the master volume
becomes the auxiliary, which is similar to the FlashCopy restore option. However, although
the FlashCopy target volume can operate in read/write mode, the target volume of the
started remote copy is always in read-only mode.

While the Metro Mirror relationship is active, the auxiliary volume is not accessible for host
application write I/O at any time. The IBM SVC allows read-only access to the auxiliary
volume when it contains a consistent image. This read-only access allows boot-time operating
system discovery to complete without an error, so that any hosts at the secondary site can be ready
to start the applications with minimum delay, if required.

For example, many operating systems must read logical block address (LBA) zero to
configure a logical unit. Although read access is allowed at the auxiliary in practice, the data
on the auxiliary volumes cannot be read by a host because most operating systems write a
“dirty bit” to the file system when it is mounted. Because this write operation is not allowed on
the auxiliary volume, the volume cannot be mounted.

This access is provided only where consistency can be ensured. However, coherency cannot
be maintained between reads that are performed at the auxiliary and later write I/Os that are
performed at the master.

To enable access to the auxiliary volume for host operations, you must stop the Metro Mirror
relationship by specifying the -access parameter. While access to the auxiliary volume for
host operations is enabled, the host must be instructed to mount the volume before the
application can be started, or instructed to perform a recovery process.
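
For example, assuming a relationship named MM_REL1, access to the auxiliary volume might
be enabled as follows:

svctask stoprcrelationship -access MM_REL1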

For example, the Metro Mirror requirement to enable the auxiliary copy for access
differentiates it from third-party mirroring software on the host, which aims to emulate a
single, reliable disk regardless of what system is accessing it. Metro Mirror retains the
property that there are two volumes in existence but it suppresses one volume while the copy
is being maintained.

The use of an auxiliary copy demands a conscious policy decision by the administrator that a
failover is required and that the tasks to be performed on the host that is involved in
establishing the operation on the auxiliary copy are substantial. The goal is to make this copy
rapid (much faster when compared to recovering from a backup copy) but not seamless.

The failover process can be automated through failover management software. The IBM SVC
provides SNMP traps and programming (or scripting) for the CLI to enable this automation.


9.7.17 Practical use of Global Mirror


The practical use of Global Mirror is similar to the Metro Mirror described in 9.7.16, “Practical
use of Metro Mirror” on page 545. The main difference between the two remote copy modes
is that Global Mirror and Global Mirror with Change Volumes are mostly used over much greater
distances than Metro Mirror. Weak link quality or insufficient bandwidth between the primary
and secondary sites can also be a reason to prefer asynchronous Global Mirror over
synchronous Metro Mirror. Otherwise, the use cases for Metro Mirror and Global Mirror are
the same.

9.7.18 Valid combinations of FlashCopy, Metro Mirror, and Global Mirror


Table 9-9 on page 546 lists the combinations of FlashCopy, Metro Mirror, and Global Mirror
functions that are valid for a single volume.

Table 9-9 Valid combination for a single volume


FlashCopy             MM or GM source     MM or GM target

FlashCopy source      Supported           Supported

FlashCopy target      Supported           Not supported

9.7.19 Remote Copy configuration limits


Table 9-10 lists the Metro Mirror and Global Mirror configuration limits.

Table 9-10 MM and GM configuration limits


Parameter                                       Value

Number of Metro Mirror or Global Mirror         256
Consistency Groups per system

Number of Metro Mirror or Global Mirror         8192
relationships per system

Number of Metro Mirror or Global Mirror         8192
relationships per Consistency Group

Total volume size per I/O Group                 A per I/O Group limit of 1,024 TiB exists on the quantity of master
                                                and auxiliary volume address spaces that can participate in
                                                Metro Mirror and Global Mirror relationships. This maximum
                                                configuration uses all 512 MiB of bitmap space for the I/O Group,
                                                and it allows 10 MiB of space for all remaining copy services
                                                features.

9.7.20 Remote Copy states and events


In this section, we describe the various states of a MM/GM relationship and the conditions
that cause them to change.

In Figure 9-40 on page 547, the MM/GM relationship diagram shows an overview of the
status that can apply to a MM/GM relationship in a connected state.


Figure 9-40 Metro Mirror or Global Mirror mapping state diagram

When the MM/GM relationship is created, you can specify whether the auxiliary volume is
already in sync with the master volume, and the background copy process is then skipped.
This capability is useful when MM/GM relationships are established for volumes that were
created with the format option.

The following step identifiers are shown in Figure 9-40:


򐂰 Step 1:
a. The MM/GM relationship is created with the -sync option, and the MM/GM relationship
enters the ConsistentStopped state.
b. The MM/GM relationship is created without specifying that the master and auxiliary
volumes are in sync, and the MM/GM relationship enters the InconsistentStopped
state.
򐂰 Step 2:
a. When an MM/GM relationship is started in the ConsistentStopped state, the MM/GM
relationship enters the ConsistentSynchronized state. Therefore, no updates (write
I/Os) were performed on the master volume while in the ConsistentStopped state.
Otherwise, the -force option must be specified, and the MM/GM relationship then
enters the InconsistentCopying state while the background copy is started.
b. When an MM/GM relationship is started in the InconsistentStopped state, the MM/GM
relationship enters the InconsistentCopying state while the background copy is started.


򐂰 Step 3
When the background copy completes, the MM/GM relationship transitions from the
InconsistentCopying state to the ConsistentSynchronized state.
򐂰 Step 4:
a. When a MM/GM relationship is stopped in the ConsistentSynchronized state, the
MM/GM relationship enters the Idling state when you specify the -access option,
which enables write I/O on the auxiliary volume.
b. When a MM/GM relationship is stopped in the ConsistentSynchronized state without
an -access parameter, the auxiliary volumes remain read-only and the state of the
relationship changes to ConsistentStopped.
c. To enable write I/O on the auxiliary volume when the MM/GM relationship is in the
ConsistentStopped state, issue the stoprcrelationship command with the -access
option, and the MM/GM relationship enters the Idling state.
򐂰 Step 5:
a. When a MM/GM relationship is started from the Idling state, you must specify the
-primary argument to set the copy direction. If no write I/O was performed (to the
master or auxiliary volume) while in the Idling state, the MM/GM relationship enters the
ConsistentSynchronized state.
b. If write I/O was performed to the master or auxiliary volume, the -force option must be
specified and the MM/GM relationship then enters the InconsistentCopying state while
the background copy is started. The background copy copies only the data that
changed on the primary volume while the relationship was stopped.

Stop or error
When a MM/GM relationship is stopped (intentionally or because of an error), the state
changes.

For example, the MM/GM relationships in the ConsistentSynchronized state enter the
ConsistentStopped state, and the MM/GM relationships in the InconsistentCopying state
enter the InconsistentStopped state.

If the connection is broken between the IBM SVC systems that are in a partnership, all
(intercluster) MM/GM relationships enter a Disconnected state. For more information, see
“Connected versus disconnected” on page 548.

Common states: Stand-alone relationships and Consistency Groups share a common
configuration and state model. All MM/GM relationships in a Consistency Group have the
same state as the Consistency Group.

State overview
In the following sections, we provide an overview of the various MM/GM states.

Connected versus disconnected


Under certain error scenarios (for example, a power failure at one site that causes one
complete system to disappear), communications between two systems in an MM/GM
relationship can be lost. Alternatively, the fabric connection between the two systems might
fail, which leaves the two systems running but they cannot communicate with each other.


When the two systems can communicate, the systems and the relationships that span them
are described as connected. When they cannot communicate, the systems and the
relationships spanning them are described as disconnected.

In this state, both systems are left with fragmented relationships and are limited regarding the
configuration commands that can be performed. The disconnected relationships are
portrayed as having a changed state. The new states describe what is known about the
relationship and the configuration commands that are permitted.

When the systems can communicate again, the relationships are reconnected. MM/GM
automatically reconciles the two state fragments, considering any configuration or other event
that occurred while the relationship was disconnected. As a result, the relationship can return
to the state that it was in when it became disconnected or enter a new state.

Relationships that are configured between volumes in the same IBM SVC system
(intracluster) are never described as being in a disconnected state.

Consistent versus inconsistent


Relationships that contain volumes that are operating as secondaries can be described as
being consistent or inconsistent. Consistency Groups that contain relationships can also be
described as being consistent or inconsistent. The consistent or inconsistent property
describes the relationship of the data on the auxiliary to the data on the master volume. It can
be considered a property of the auxiliary volume.

An auxiliary volume is described as consistent if it contains data that might be read by a host
system from the master if power failed at an imaginary point while I/O was in progress, and
power was later restored. This imaginary point is defined as the recovery point. The
requirements for consistency are expressed regarding activity at the master up to the
recovery point.

The auxiliary volume contains the data from all of the writes to the master for which the host
received successful completion and that data was not overwritten by a subsequent write
(before the recovery point).

For writes for which the host did not receive a successful completion (that is, it received a bad
completion or no completion at all), and the host then performed a read from the master of
that data that returned successful completion and no later write was sent (before the recovery
point), the auxiliary contains the same data as the data that was returned by the read from the
master.

From the point of view of an application, consistency means that an auxiliary volume contains
the same data as the master volume at the recovery point (the time at which the imaginary
power failure occurred).

If an application is designed to cope with an unexpected power failure, this assurance of
consistency means that the application can use the auxiliary and begin operation as though it
was restarted after the hypothetical power failure. Again, maintaining the application write
ordering is the key property of consistency.

For more information about dependent writes, see 9.4.3, “Consistency Groups” on page 486.

If a relationship (or a set of relationships) is inconsistent and an attempt is made to start an
application by using the data in the secondaries, the following outcomes are possible:
򐂰 The application might decide that the data is corrupted and crash or exit with an event
code.
򐂰 The application might fail to detect that the data is corrupted and return erroneous data.


򐂰 The application might work without a problem.

Because of the risk of data corruption, and in particular undetected data corruption, MM/GM
strongly enforces the concept of consistency and prohibits access to inconsistent data.

Consistency as a concept can be applied to a single relationship or a set of relationships in a
Consistency Group. Write ordering is a concept that an application can maintain across a
number of disks that are accessed through multiple systems; therefore, consistency must
operate across all those disks.

When you are deciding how to use Consistency Groups, the administrator must consider the
scope of an application’s data and consider all of the interdependent systems that
communicate and exchange information.

If two programs or systems communicate and store details as a result of the information
exchanged, either of the following actions might occur:
򐂰 All of the data that is accessed by the group of systems must be placed into a single
Consistency Group.
򐂰 The systems must be recovered independently (each within its own Consistency Group).
Then, each system must perform recovery with the other applications to become
consistent with them.

Consistent versus synchronized


A copy that is consistent and up-to-date is described as synchronized. In a synchronized
relationship, the master and auxiliary volumes differ only in regions where writes are
outstanding from the host.

Consistency does not mean that the data is up-to-date. A copy can be consistent and yet
contain data that was frozen at a point in the past. Write I/O might continue to a master but
not be copied to the auxiliary. This state arises when it becomes impossible to keep data
up-to-date and maintain consistency. An example is a loss of communication between
systems when you are writing to the auxiliary.

When communication is lost for an extended period, MM/GM tracks the changes that
occurred on the master, but not the order or the details of such changes (write data). When
communication is restored, it is impossible to synchronize the auxiliary without sending write
data to the auxiliary out of order and, therefore, losing consistency.

The following policies can be used to cope with this situation:


򐂰 Make a point-in-time copy of the consistent auxiliary before you allow the auxiliary to
become inconsistent. If there is a disaster before consistency is achieved again, the
point-in-time copy target provides a consistent (although out-of-date) image.
򐂰 Accept the loss of consistency and the loss of a useful auxiliary while synchronizing the
auxiliary.

Detailed states
In the following sections, we describe the states that are portrayed to the user for either
Consistency Groups or relationships. We also describe information that is available in each
state. The major states are designed to provide guidance about the available configuration
commands.


InconsistentStopped
InconsistentStopped is a connected state. In this state, the master is accessible for read and
write I/O, but the auxiliary is not accessible for read or write I/O. A copy process must be
started to make the auxiliary consistent.

This state is entered when the relationship or Consistency Group was InconsistentCopying
and suffered a persistent error or received a stop command that caused the copy process to
stop.

A start command causes the relationship or Consistency Group to move to the
InconsistentCopying state. A stop command is accepted, but it has no effect.

If the relationship or Consistency Group becomes disconnected, the auxiliary side transitions
to InconsistentDisconnected. The master side transitions to IdlingDisconnected.

InconsistentCopying
InconsistentCopying is a connected state. In this state, the master is accessible for read and
write I/O, but the auxiliary is not accessible for read or write I/O.

This state is entered after a start command is issued to an InconsistentStopped relationship
or a Consistency Group. It is also entered when a forced start is issued to an Idling or
ConsistentStopped relationship or Consistency Group.

In this state, a background copy process runs that copies data from the master to the auxiliary
volume.

In the absence of errors, an InconsistentCopying relationship is active, and the copy progress
increases until the copy process completes. In certain error situations, the copy progress
might freeze or even regress.

A persistent error or stop command places the relationship or Consistency Group into an
InconsistentStopped state. A start command is accepted but has no effect.

If the background copy process completes on a stand-alone relationship or on all
relationships for a Consistency Group, the relationship or Consistency Group transitions to
the ConsistentSynchronized state.

If the relationship or Consistency Group becomes disconnected, the auxiliary side transitions
to InconsistentDisconnected. The master side transitions to IdlingDisconnected.

ConsistentStopped
ConsistentStopped is a connected state. In this state, the auxiliary contains a consistent
image, but it might be out-of-date in relation to the master.

This state can arise when a relationship was in a ConsistentSynchronized state and
experiences an error that forces a Consistency Freeze. It can also arise when a relationship is
created with a CreateConsistentFlag set to TRUE.

Normally, write activity that follows an I/O error causes updates to the master and the
auxiliary is no longer synchronized. In this case, consistency must be given up for a period to
reestablish synchronization. You must use a start command with the -force option to
acknowledge this condition, and the relationship or Consistency Group transitions to
InconsistentCopying. Enter this command only after all outstanding events are repaired.

In the unusual case where the master and the auxiliary are still synchronized (perhaps
following a user stop, and no further write I/O was received), a start command takes the
relationship to ConsistentSynchronized. No -force option is required. Also, in this case, you
can enter a switch command that moves the relationship or Consistency Group to
ConsistentSynchronized and reverses the roles of the master and the auxiliary.

If the relationship or Consistency Group becomes disconnected, the auxiliary transitions to
ConsistentDisconnected. The master transitions to IdlingDisconnected.

An informational status log is generated whenever a relationship or Consistency Group enters
the ConsistentStopped state with a status of Online. You can configure this event to generate
an SNMP trap that can be used to trigger automation or manual intervention to issue a start
command following a loss of synchronization.

ConsistentSynchronized
ConsistentSynchronized is a connected state. In this state, the master volume is accessible
for read and write I/O, and the auxiliary volume is accessible for read-only I/O.

Writes that are sent to the master volume are also sent to the auxiliary volume. Either
successful completion must be received for both writes, the write must be failed to the host, or
a state must transition out of the ConsistentSynchronized state before a write is completed to
the host.

A stop command takes the relationship to the ConsistentStopped state. A stop command
with the -access parameter takes the relationship to the Idling state.

A switch command leaves the relationship in the ConsistentSynchronized state, but it
reverses the master and auxiliary roles (it switches the direction of replicating data).

A start command is accepted, but has no effect.

If the relationship or Consistency Group becomes disconnected, the same transitions are
made as for ConsistentStopped.

Idling
Idling is a connected state. Both master and auxiliary volumes operate in the master role.
Therefore, both master and auxiliary volumes are accessible for write I/O.

In this state, the relationship or Consistency Group accepts a start command. MM/GM
maintains a record of regions on each disk that received write I/O while they were idling. This
record is used to determine the areas that must be copied after a start command.

The start command must specify the new copy direction. A start command can cause a
loss of consistency if either volume in any relationship received write I/O, which is indicated by
the Synchronized status. If the start command leads to loss of consistency, you must specify
the -force parameter.

After a start command, the relationship or Consistency Group transitions to
ConsistentSynchronized if consistency is not lost or to InconsistentCopying if consistency is
lost.

Also, the relationship or Consistency Group accepts a -clean option on the start command
while in this state. If the relationship or Consistency Group becomes disconnected, both sides
change their state to IdlingDisconnected.

IdlingDisconnected
IdlingDisconnected is a disconnected state. The target volumes in this half of the relationship
or Consistency Group are all in the master role and accept read or write I/O.

The priority in this state is to recover the link to restore the relationship or consistency.


No configuration activity is possible (except for deletes or stops) until the relationship
becomes connected again. At that point, the relationship transitions to a connected state. The
exact connected state that is entered depends on the state of the other half of the relationship
or Consistency Group, which depends on the following factors:
򐂰 The state when it became disconnected
򐂰 The write activity since it was disconnected
򐂰 The configuration activity since it was disconnected

If both halves are IdlingDisconnected, the relationship becomes Idling when it is reconnected.

While IdlingDisconnected, if a write I/O is received that causes the loss of synchronization
(synchronized attribute transitions from true to false) and the relationship was not already
stopped (either through a user stop or a persistent error), an event is raised to notify you of
the condition. This same event also is raised when this condition occurs for the
ConsistentSynchronized state.

InconsistentDisconnected
InconsistentDisconnected is a disconnected state. The target volumes in this half of the
relationship or Consistency Group are all in the auxiliary role and do not accept read or write
I/O.

Except for deletes, no configuration activity is permitted until the relationship becomes
connected again.

When the relationship or Consistency Group becomes connected again, the relationship
becomes InconsistentCopying automatically unless either of the following conditions are true:
򐂰 The relationship was InconsistentStopped when it became disconnected.
򐂰 The user issued a stop command while disconnected.

In either case, the relationship or Consistency Group becomes InconsistentStopped.

ConsistentDisconnected
ConsistentDisconnected is a disconnected state. The target volumes in this half of the
relationship or Consistency Group are all in the auxiliary role and accept read I/O but not write
I/O.

This state is entered from ConsistentSynchronized or ConsistentStopped when the auxiliary side of a relationship becomes disconnected.

In this state, the relationship or Consistency Group displays an attribute of FreezeTime, which
is the point when Consistency was frozen. When it is entered from ConsistentStopped, it
retains the time that it had in that state. When it is entered from ConsistentSynchronized, the
FreezeTime shows the last time at which the relationship or Consistency Group was known to
be consistent. This time corresponds to the time of the last successful heartbeat to the other
system.

A stop command with the -access flag set to true transitions the relationship or Consistency
Group to the IdlingDisconnected state. This state allows write I/O to be performed to the
auxiliary volume and is used as part of a DR scenario.

When the relationship or Consistency Group becomes connected again, the relationship or
Consistency Group becomes ConsistentSynchronized only if this action does not lead to a
loss of consistency. The following conditions must be true:
򐂰 The relationship was ConsistentSynchronized when it became disconnected.
򐂰 No writes received successful completion at the master while disconnected.


Otherwise, the relationship becomes ConsistentStopped. The FreezeTime setting is retained.

Empty
This state applies only to Consistency Groups. It is the state of a Consistency Group that has
no relationships and no other state information to show.

It is entered when a Consistency Group is first created. It is exited when the first relationship
is added to the Consistency Group, at which point the state of the relationship becomes the
state of the Consistency Group.

9.8 Remote Copy commands


In this section, we present commands that need to be issued to create and operate remote
copy services.

9.8.1 Remote Copy process


The MM/GM process includes the following steps:
1. A system partnership is created between two IBM SVC systems (for intercluster MM/GM).
2. An MM/GM relationship is created between two volumes of the same size.
3. To manage multiple MM/GM relationships as one entity, the relationships can be made
part of an MM/GM Consistency Group to ensure data consistency across multiple MM/GM
relationships or for ease of management.
4. The MM/GM relationship is started and when the background copy completes, the
relationship is consistent and synchronized.
5. When synchronized, the auxiliary volume holds a copy of the production data at the
master that can be used for disaster recovery.
6. To access the auxiliary volume, the MM/GM relationship must be stopped with the access
option enabled before write I/O is submitted to the auxiliary.

The remote host server is mapped to the auxiliary volume and the disk is available for I/O.
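
The following sequence is a minimal sketch of this process as CLI commands. The system names (ITSO_SVC_A and ITSO_SVC_B), volume names, relationship name, and link bandwidth value are hypothetical placeholders, and each command is described in more detail in the sections that follow:

On ITSO_SVC_A: mkfcpartnership -linkbandwidthmbits 1000 ITSO_SVC_B
On ITSO_SVC_B: mkfcpartnership -linkbandwidthmbits 1000 ITSO_SVC_A
On ITSO_SVC_A: mkrcrelationship -master MASTER_VOL01 -aux AUX_VOL01 -cluster ITSO_SVC_B -name MM_Rel_1
On ITSO_SVC_A: startrcrelationship MM_Rel_1
On ITSO_SVC_A: lsrcrelationship MM_Rel_1
On ITSO_SVC_A: stoprcrelationship -access MM_Rel_1

Use the lsrcrelationship output to confirm that the state reaches consistent_synchronized before you rely on the auxiliary volume for disaster recovery, and issue the stoprcrelationship -access command only when write access to the auxiliary volume is actually required.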

For more information about MM/GM commands, see IBM System Storage SAN Volume
Controller and IBM Storwize V7000 Command-Line Interface User’s Guide, GC27-2287.

The command set for MM/GM contains the following broad groups:
򐂰 Commands to create, delete, and manipulate relationships and Consistency Groups
򐂰 Commands to cause state changes

If a configuration command affects more than one system, MM/GM performs the work to
coordinate configuration activity between the systems. Certain configuration commands can
be performed only when the systems are connected and fail with no effect when they are
disconnected.

Other configuration commands are permitted even though the systems are disconnected. The
state is reconciled automatically by MM/GM when the systems become connected again.


For any command (with one exception), a single system receives the command from the administrator. This design is significant for defining the context for a CreateRelationship (mkrcrelationship) or CreateConsistencyGroup (mkrcconsistgrp) command, in which case the system that receives the command is called the local system.

The exception is the command that sets systems into an MM/GM partnership. The mkfcpartnership and mkippartnership commands must be issued on both the local and remote systems.

The commands in this section are described as an abstract command set and are
implemented by either of the following methods:
򐂰 The CLI can be used for scripting and automation.
򐂰 The GUI can be used for one-off tasks.

9.8.2 Listing available IBM SVC system partners


Use the lspartnershipcandidate command to list the systems that are available for setting
up a two-system partnership. This command is a prerequisite for creating MM/GM
relationships.

Note: This command is not supported on IP partnerships. Use the mkippartnership command for IP connections.

9.8.3 Changing the system parameters


When you want to change system parameters specific to any remote copy or to Global Mirror
only, use the chsystem command.

The chsystem command


The chsystem command features the following parameters for MM/GM:
򐂰 -relationshipbandwidthlimit cluster_relationship_bandwidth_limit
This parameter controls the maximum rate at which any one remote copy relationship can synchronize. The default value for the relationship bandwidth limit is 25 MBps, but this value can be specified in the range of 1 - 100000 MBps. The overall partnership limit is controlled by the chpartnership -linkbandwidthmbits command, which must be set on each involved system.

Important: Do not set this value higher than the default without first establishing that
the higher bandwidth can be sustained without affecting the host’s performance. The
limit must never be higher than the maximum that is supported by the infrastructure
connecting the remote sites, regardless of the compression rates that you might
achieve.

򐂰 -gmlinktolerance link_tolerance
This parameter specifies the maximum period that the system tolerates delay before
stopping GM relationships. Specify values 60 - 86400 seconds in increments of 10
seconds. The default value is 300. Do not change this value except under the direction of
IBM Support.
򐂰 -gmmaxhostdelay max_host_delay


Specifies the maximum time delay, in milliseconds, at which the Global Mirror link
tolerance timer starts counting down. This threshold value determines the additional
impact that Global Mirror operations can add to the response times of the Global Mirror
source volumes. You can use this parameter to increase the threshold from the default
value of 5 milliseconds.

򐂰 -gminterdelaysimulation link_tolerance
This parameter specifies the number of milliseconds that I/O activity (intercluster copying
to an auxiliary volume) is delayed. This parameter permits you to test performance
implications before GM is deployed and a long-distance link is obtained. Specify a value of
0 - 100 milliseconds in 1-millisecond increments. The default value is 0. Use this argument
to test each intercluster GM relationship separately.
򐂰 -gmintradelaysimulation link_tolerance
This parameter specifies the number of milliseconds that I/O activity (intracluster copying
to an auxiliary volume) is delayed. By using this parameter, you can test performance
implications before GM is deployed and a long-distance link is obtained. Specify a value of
0 - 100 milliseconds in 1-millisecond increments. The default value is 0. Use this argument
to test each intracluster GM relationship separately.
򐂰 -maxreplicationdelay max_replication_delay
This parameter sets the maximum replication delay in seconds, specified as a number from 1 to 360. It defines the maximum number of seconds that is tolerated for a single I/O to complete. If an I/O cannot complete within max_replication_delay, a 1920 event is reported. This is a system-wide setting. When it is set to 0, the feature is disabled. The setting applies to both Metro Mirror and Global Mirror relationships.

Use the chsystem command to adjust these values, as shown in the following example:
chsystem -gmlinktolerance 300

You can view all of these parameter values by using the lssystem <system_name> command.

gmlinktolerance
We focus on the gmlinktolerance parameter in particular. If poor response extends past the
specified tolerance, a 1920 event is logged and one or more GM relationships automatically
stop to protect the application hosts at the primary site. During normal operations, application
hosts experience a minimal effect from the response times because the GM feature uses
asynchronous replication.

However, if GM operations experience degraded response times from the secondary system
for an extended period, I/O operations begin to queue at the primary system. This queue
results in an extended response time to application hosts. In this situation, the
gmlinktolerance feature stops GM relationships and the application host’s response time
returns to normal. After a 1920 event occurs, the GM auxiliary volumes are no longer in the
consistent_synchronized state until you fix the cause of the event and restart your GM
relationships. For this reason, ensure that you monitor the system to track when these 1920
events occur.

You can disable the gmlinktolerance feature by setting the gmlinktolerance value to 0
(zero). However, the gmlinktolerance feature cannot protect applications from extended
response times if it is disabled. It might be appropriate to disable the gmlinktolerance feature
under the following circumstances:
򐂰 During SAN maintenance windows in which degraded performance is expected from SAN
components and application hosts can withstand extended response times from GM
volumes.


򐂰 During periods when application hosts can tolerate extended response times and it is
expected that the gmlinktolerance feature might stop the GM relationships. For example,
if you test by using an I/O generator that is configured to stress the back-end storage, the
gmlinktolerance feature might detect the high latency and stop the GM relationships.
Disabling the gmlinktolerance feature prevents this result at the risk of exposing the test
host to extended response times.

A 1920 event indicates that one or more of the SAN components cannot provide the
performance that is required by the application hosts. This situation can be temporary (for
example, a result of a maintenance activity) or permanent (for example, a result of a hardware
failure or an unexpected host I/O workload).

If 1920 events are occurring, it can be necessary to use a performance monitoring and analysis tool, such as IBM Virtual Storage Center, to help identify and resolve the problem.

9.8.4 System partnership


To create an IBM SVC system partnership, use the mkfcpartnership command for traditional
Fibre Channel (FC or FCoE) connections or mkippartnership command for IP-based
connections.

The mkfcpartnership command


Use the mkfcpartnership command to establish a one-way MM/GM partnership between the
local system and a remote system. Alternatively, use mkippartnership to create an IP-based
partnership.

To establish a fully functional MM/GM partnership, you must issue this command on both
systems. This step is a prerequisite for creating MM/GM relationships between volumes on
the IBM SVC systems.

When the partnership is created, you can specify the bandwidth to be used by the
background copy process between the local and remote IBM SVC system. If it is not
specified, the bandwidth defaults to 50 MBps. The bandwidth must be set to a value that is
less than or equal to the bandwidth that can be sustained by the intercluster link.
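
The following example is a sketch of establishing a fully functional FC partnership, assuming hypothetical system names ITSO_SVC_A and ITSO_SVC_B and a 1 Gbps intercluster link; verify the exact parameters that apply to your code level:

On ITSO_SVC_A: mkfcpartnership -linkbandwidthmbits 1000 -backgroundcopyrate 50 ITSO_SVC_B
On ITSO_SVC_B: mkfcpartnership -linkbandwidthmbits 1000 -backgroundcopyrate 50 ITSO_SVC_A

You can then run the lspartnership command on either system to confirm that the partnership is fully configured.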

Background copy bandwidth effect on foreground I/O latency


The background copy bandwidth determines the rate at which the background copy is
attempted for MM/GM. The background copy bandwidth can affect foreground I/O latency in
one of the following ways:
򐂰 The following result can occur if the background copy bandwidth is set too high compared
to the MM/GM intercluster link capacity:
– The background copy I/Os can back up on the MM/GM intercluster link.
– There is a delay in the synchronous auxiliary writes of foreground I/Os.
– The foreground I/O latency increases as perceived by applications.
򐂰 If the background copy bandwidth is set too high for the storage at the primary site,
background copy read I/Os overload the primary storage and delay foreground I/Os.
򐂰 If the background copy bandwidth is set too high for the storage at the secondary site,
background copy writes at the secondary site overload the auxiliary storage and again
delay the synchronous secondary writes of foreground I/Os.

To set the background copy bandwidth optimally, ensure you consider all three resources:
primary storage, intercluster link bandwidth, and auxiliary storage. Provision the most
restrictive of these three resources between the background copy bandwidth and the peak

foreground I/O workload. Perform this provisioning by calculation or by determining experimentally how much background copy can be allowed before the foreground I/O latency becomes unacceptable. Then, reduce the background copy to accommodate peaks in workload.

chpartnership command
To change the bandwidth that is available for background copy in an IBM SVC system
partnership, use the chpartnership -backgroundcopyrate percentage_of_link_bandwidth
command to specify the percentage of whole link capacity to be used by background copy
process.
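
For example, the following sketch limits the background copy process to 30% of the link capacity; the partnership name ITSO_SVC_B is a placeholder:

chpartnership -backgroundcopyrate 30 ITSO_SVC_B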

9.8.5 Creating a Metro Mirror/Global Mirror consistency group


Use the mkrcconsistgrp command to create an empty MM/GM consistency group.

The MM/GM consistency group name must be unique across all consistency groups that are
known to the systems owning this consistency group. If the consistency group involves two
systems, the systems must be in communication throughout the creation process.

The new consistency group does not contain any relationships and is in the empty state. You
can add MM/GM relationships to the group (upon creation or afterward) by using the
chrelationship command.
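
The following sketch creates an empty consistency group that is owned by the local system and the remote system ITSO_SVC_B, and then verifies it; the group and system names are hypothetical:

mkrcconsistgrp -cluster ITSO_SVC_B -name CG_W2K3_MM
lsrcconsistgrp CG_W2K3_MM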

9.8.6 Creating a Metro Mirror/Global Mirror relationship


Use the mkrcrelationship command to create a new MM/GM relationship. This relationship
persists until it is deleted.

Optional parameter: If you do not use the -global optional parameter, a Metro Mirror
relationship is created instead of a Global Mirror relationship.

The auxiliary volume must be equal in size to the master volume or the command fails. If both
volumes are in the same system, they must be in the same I/O Group. The master and
auxiliary volume cannot be in an existing relationship and they cannot be the targets of a
FlashCopy mapping. This command returns the new relationship (relationship_id) when
successful.

When the MM/GM relationship is created, you can add it to a Consistency Group that exists
or it can be a stand-alone MM/GM relationship if no Consistency Group is specified.
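
The following sketch creates one Metro Mirror relationship (added to an existing consistency group) and one stand-alone Global Mirror relationship; the volume, group, and system names are hypothetical:

mkrcrelationship -master MM_Vol_A -aux MM_Vol_B -cluster ITSO_SVC_B -name MM_Rel_1 -consistgrp CG_W2K3_MM
mkrcrelationship -master GM_Vol_A -aux GM_Vol_B -cluster ITSO_SVC_B -name GM_Rel_1 -global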

The lsrcrelationshipcandidate command


Use the lsrcrelationshipcandidate command to list the volumes that are eligible to form an
MM/GM relationship.

When the command is issued, you can specify the master volume name and auxiliary system to list the candidates that comply with the prerequisites to create an MM/GM relationship. If the command is issued with no parameters, all of the volumes that are not disallowed by another configuration state, such as being a FlashCopy target, are listed.


9.8.7 Changing Metro Mirror/Global Mirror relationship


Use the chrcrelationship command to modify the following properties of an MM/GM
relationship:
򐂰 Change the name of an MM/GM relationship.
򐂰 Add a relationship to a group.
򐂰 Remove a relationship from a group by using the -force flag.

Adding an MM/GM relationship: When an MM/GM relationship is added to a Consistency Group that is not empty, the relationship must have the same state and copy direction as the group to be added to it.
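
The following sketch shows these three operations (rename, add to a group, and remove from a group) with hypothetical relationship and group names; the -force form follows the description above:

chrcrelationship -name MM_Rel_2 MM_Rel_1
chrcrelationship -consistgrp CG_W2K3_MM MM_Rel_2
chrcrelationship -force MM_Rel_2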

9.8.8 Changing Metro Mirror/Global Mirror consistency group


Use the chrcconsistgrp command to change the name of an MM/GM Consistency Group.

9.8.9 Starting Metro Mirror/Global Mirror relationship


Use the startrcrelationship command to start the copy process of an MM/GM relationship.

When the command is issued, you can set the copy direction if it is undefined, and, optionally,
you can mark the auxiliary volume of the relationship as clean. The command fails if it is used
as an attempt to start a relationship that is already a part of a consistency group.

You can issue this command only to a relationship that is connected. For a relationship that is
idling, this command assigns a copy direction (master and auxiliary roles) and begins the
copy process. Otherwise, this command restarts a previous copy process that was stopped
by a stop command or by an I/O error.

If the resumption of the copy process leads to a period when the relationship is inconsistent,
you must specify the -force parameter when the relationship is restarted. This situation can
arise if, for example, the relationship was stopped and then further writes were performed on
the original master of the relationship. The use of the -force parameter here is a reminder
that the data on the auxiliary becomes inconsistent while resynchronization (background
copying) takes place and, therefore, is unusable for DR purposes before the background copy
completes.

In the Idling state, you must specify the master volume to indicate the copy direction. In other
connected states, you can provide the -primary argument, but it must match the existing
setting.
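
For example, the following sketch starts a stand-alone relationship from the Idling state with the master as the copy source, using -force because consistency is temporarily lost during resynchronization; the relationship name is hypothetical:

startrcrelationship -primary master -force MM_Rel_1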

9.8.10 Stopping Metro Mirror/Global Mirror relationship


Use the stoprcrelationship command to stop the copy process for a relationship. You can
also use this command to enable write access to a consistent auxiliary volume by specifying
the -access parameter.

This command applies to a stand-alone relationship. It is rejected if it is addressed to a relationship that is part of a Consistency Group. You can issue this command to stop a relationship that is copying from master to auxiliary.


If the relationship is in an inconsistent state, any copy operation stops and does not resume
until you issue a startrcrelationship command. Write activity is no longer copied from the
master to the auxiliary volume. For a relationship in the ConsistentSynchronized state, this
command causes a Consistency Freeze.

When a relationship is in a consistent state (that is, in the ConsistentStopped, ConsistentSynchronized, or ConsistentDisconnected state), you can use the -access parameter with the stoprcrelationship command to enable write access to the auxiliary volume.
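
For example, the following sketch stops a stand-alone relationship and enables write access to the auxiliary volume; the relationship name is hypothetical:

stoprcrelationship -access MM_Rel_1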

9.8.11 Starting Metro Mirror/Global Mirror consistency group


Use the startrcconsistgrp command to start an MM/GM consistency group. You can issue
this command only to a consistency group that is connected.

For a consistency group that is idling, this command assigns a copy direction (master and
auxiliary roles) and begins the copy process. Otherwise, this command restarts a previous
copy process that was stopped by a stop command or by an I/O error.

9.8.12 Stopping Metro Mirror/Global Mirror consistency group


Use the stoprcconsistgrp command to stop the copy process for an MM/GM consistency
group. You can also use this command to enable write access to the auxiliary volumes in the
group if the group is in a consistent state.

If the consistency group is in an inconsistent state, any copy operation stops and does not
resume until you issue the startrcconsistgrp command. Write activity is no longer copied
from the master to the auxiliary volumes that belong to the relationships in the group. For a
consistency group in the ConsistentSynchronized state, this command causes a Consistency
Freeze.

When a consistency group is in a consistent state (for example, in the ConsistentStopped, ConsistentSynchronized, or ConsistentDisconnected state), you can use the -access parameter with the stoprcconsistgrp command to enable write access to the auxiliary volumes within that group.
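
The following sketch shows the group-level start and stop with a hypothetical group name; the -access flag is used only when write access to the auxiliary volumes is required:

startrcconsistgrp -primary master CG_W2K3_MM
stoprcconsistgrp -access CG_W2K3_MM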

9.8.13 Deleting Metro Mirror/Global Mirror relationship


Use the rmrcrelationship command to delete the relationship that is specified. Deleting a
relationship deletes only the logical relationship between the two volumes. It does not affect
the volumes themselves.

If the relationship is disconnected at the time that the command is issued, the relationship is
deleted only on the system on which the command is being run. When the systems
reconnect, the relationship is automatically deleted on the other system.

Alternatively, if the systems are disconnected and you still want to remove the relationship on
both systems, you can issue the rmrcrelationship command independently on both of the
systems.

A relationship cannot be deleted if it is part of a consistency group. You must first remove the
relationship from the consistency group.


If you delete an inconsistent relationship, the auxiliary volume becomes accessible even
though it is still inconsistent. This situation is the one case in which MM/GM does not inhibit
access to inconsistent data.

9.8.14 Deleting Metro Mirror/Global Mirror consistency group


Use the rmrcconsistgrp command to delete an MM/GM consistency group. This command
deletes the specified consistency group. You can issue this command for any existing
consistency group.

If the consistency group is disconnected at the time that the command is issued, the
consistency group is deleted only on the system on which the command is being run. When
the systems reconnect, the consistency group is automatically deleted on the other system.

Alternatively, if the systems are disconnected and you still want to remove the consistency
group on both systems, you can issue the rmrcconsistgrp command separately on both of
the systems.

If the consistency group is not empty, the relationships within it are removed from the
consistency group before the group is deleted. These relationships then become stand-alone
relationships. The state of these relationships is not changed by the action of removing them
from the consistency group.

9.8.15 Reversing Metro Mirror/Global Mirror relationship


Use the switchrcrelationship command to reverse the roles of the master volume and the
auxiliary volume when a stand-alone relationship is in a consistent state. When the command
is issued, the wanted master must be specified.

9.8.16 Reversing Metro Mirror/Global Mirror consistency group


Use the switchrcconsistgrp command to reverse the roles of the master volume and the
auxiliary volume when a consistency group is in a consistent state. This change is applied to
all of the relationships in the consistency group. When the command is issued, the wanted
master must be specified.

Important: Remember, by reversing the roles, your current source volumes become
targets and target volumes become source volumes. Therefore, you will lose write access
to your current primary volumes.
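
For example, the following sketch makes the current auxiliary volumes the new masters for a stand-alone relationship and for a consistency group; the relationship and group names are hypothetical:

switchrcrelationship -primary aux MM_Rel_1
switchrcconsistgrp -primary aux CG_W2K3_MM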

9.9 Troubleshooting remote copy


Remote copy (Metro Mirror and Global Mirror) has two primary error codes that are displayed:
1920 or 1720. A 1920 is a congestion error. This error means that the source, the link between
the source and target, or the target cannot keep up with the requested copy rate. A 1720 error
is a heartbeat or system partnership communication error. This error often is more serious
because failing communication between your system partners involves extended diagnostic
time.


9.9.1 1920 error


A 1920 error (event ID 050010) can have several triggers, including the following probable
causes:
򐂰 Primary 2145 system or SAN fabric problem (10%)
򐂰 Primary 2145 system or SAN fabric configuration (10%)
򐂰 Secondary 2145 system or SAN fabric problem (15%)
򐂰 Secondary 2145 system or SAN fabric configuration (25%)
򐂰 Intercluster link problem (15%)
򐂰 Intercluster link configuration (25%)

In practice, the most often overlooked cause is latency. Global Mirror has a round-trip-time
tolerance limit of 80 or 250 milliseconds, depending on the firmware version and the hardware
model. See Figure 9-34 on page 538. A message that is sent from your source IBM SVC
system to your target IBM SVC system and the accompanying acknowledgment must have a
total time of 80 or 250 milliseconds round trip. In other words, the latency can be up to 40 or 125 milliseconds each way.

The primary component of your round-trip time is the physical distance between sites. For
every 1000 kilometers (621.4 miles), you observe a 5-millisecond delay each way. This delay
does not include the time that is added by equipment in the path. Every device adds a varying
amount of time depending on the device, but a good rule is 25 microseconds for pure
hardware devices. For software-based functions (such as compression that is implemented in
applications), the added delay tends to be much higher (usually in the millisecond plus
range.) Next, we describe an example of a physical delay.

Company A has a production site that is 1900 kilometers (1180.6 miles) away from its
recovery site. The network service provider uses a total of five devices to connect the two
sites. In addition to those devices, Company A employs a SAN FC router at each site to
provide Fibre Channel over IP (FCIP) to encapsulate the FC traffic between sites.

Now, there are seven devices, and 1900 kilometers (1180.6 miles) of distance delay. All the
devices are adding 200 microseconds of delay each way. The distance adds 9.5 milliseconds
each way, for a total of 19 milliseconds. Combined with the device latency, the delay is
19.4 milliseconds of physical latency minimum, which is under the 80-millisecond limit of
Global Mirror until you realize that this number is the best case number.

The link quality and bandwidth play a large role. Your network provider likely ensures a
latency maximum on your network link; therefore, be sure to stay as far beneath the Global
Mirror round-trip-time (RTT) limit as possible. You can easily double or triple the expected
physical latency with a lower quality or lower bandwidth network link. Then, you are within the
range of exceeding the limit if high I/O occurs that exceeds the existing bandwidth capacity.

When you get a 1920 event, always check the latency first. The FCIP routing layer can
introduce latency if it is not correctly configured. If your network provider reports a much lower
latency, you might have a problem at your FCIP routing layer. Most FCIP routing devices have
built-in tools to allow you to check the RTT. When you are checking latency, remember that
TCP/IP routing devices (including FCIP routers) report RTT or round-trip time by using
standard 64-byte ping packets.

In Figure 9-41 on page 563, you can see why the effective transit time must be measured only
by using packets that are large enough to hold an FC frame, or 2148 bytes (2112 bytes of
payload and 36 bytes of header). Allow some overhead to be safe because various switch
vendors have optional features that might increase this size. After you verify your latency by
using the proper packet size, proceed with normal hardware troubleshooting.


Before we proceed, we look at the second largest component of your RTT, which is
serialization delay. Serialization delay is the amount of time that is required to move a packet
of data of a specific size across a network link of a certain bandwidth. The required time to
move a specific amount of data decreases as the data transmission rate increases.
Figure 9-41 on page 563 shows the orders of magnitude of difference between the link
bandwidths. It is easy to see how 1920 errors can arise when your bandwidth is insufficient.
Never use a TCP/IP ping to measure RTT for FCIP traffic.

Figure 9-41 Effect of packet size (in bytes) versus the link size

In Figure 9-41, the amount of time in microseconds that is required to transmit a packet
across network links of varying bandwidth capacity is compared. The following packet sizes
are used:
򐂰 64 bytes: The size of the common ping packet
򐂰 1500 bytes: The size of the standard TCP/IP packet
򐂰 2148 bytes: The size of an FC frame

Finally, your path maximum transmission unit (MTU) affects the delay that is incurred to get a
packet from one location to another location. An MTU might cause fragmentation or be too
large and cause too many retransmits when a packet is lost.

9.9.2 1720 error


The 1720 error (event ID 050020) is the other problem remote copy might encounter. The
amount of bandwidth that is needed for system-to-system communications varies based on
the number of nodes. It is important that it is not zero. When a partner on either side stops
communication, you see a 1720 error appear in your error log. According to the product
documentation, there are no likely field-replaceable unit breakages or other causes.

The source of this error is most often a fabric problem or a problem in the network path
between your partners. When you receive this error, check your fabric configuration for zoning
of more than one host bus adapter (HBA) port for each node per I/O Group if your fabric has

more than 64 HBA ports zoned. One port for each node per I/O Group per fabric that is
associated with the host is the recommended zoning configuration for fabrics. For those
fabrics with 64 or more host ports, this recommendation becomes a rule. Therefore, you will
see four paths to each volume that is discovered on the host because each host needs to
have at least two FC ports from separate HBA cards, each in a separate fabric. On each
fabric, each host FC port is zoned to two of the IBM SVC ports, and each IBM SVC port
comes from one IBM SVC node. This gives four paths per host volume. More than four paths
per volume are supported but not recommended.

Improper zoning can lead to SAN congestion, which can inhibit remote link communication
intermittently. Checking the zero buffer credit timer via IBM Virtual Storage Center and
comparing against your sample interval reveals potential SAN congestion. If a zero buffer
credit timer is above 2% of the total time of the sample interval, it might cause problems.

Next, always ask your network provider to check the status of the link. If the link is acceptable,
watch for repeats of this error. It is possible in a normal and functional network setup to have
occasional 1720 errors, but multiple occurrences could indicate a larger problem.

If you receive multiple 1720 errors, recheck your network connection and then check the system partnership information to verify its status and settings. Then, proceed to perform diagnostics for every piece of equipment in the path between the two systems. It
often helps to have a diagram that shows the path of your replication from both logical and
physical configuration viewpoints.

If your investigations fail to resolve your remote copy problems, contact your IBM Support
representative for more complete analysis.


10

Chapter 10. Operations using the CLI


In this chapter, we describe operational management. We use the command-line interface
(CLI) to demonstrate normal operation and advanced operation. You can use the CLI or GUI
to manage IBM System Storage SAN Volume Controller (SVC) operations. We use the CLI in
this chapter. You can script these operations. We think that it is easier to create the
documentation for the scripts by using the CLI.

This chapter assumes a fully functional SVC environment.


10.1 Normal operations using the CLI


In the following topics, we describe the commands that best represent normal operational
commands.

10.1.1 Command syntax and online help

Command prefix changes: The svctask and svcinfo command prefixes are no longer
needed when you are issuing a command. If you have existing scripts that use those
prefixes, they continue to function. You do not need to change your scripts.

The following major command sets are available:


򐂰 By using the svcinfo command, you can query the various components within the SVC
environment.
򐂰 By using the svctask command, you can change the various components within the SVC.

When the command syntax is shown, you see certain parameters in square brackets, for
example [parameter]. These brackets indicate that the parameter is optional in most (if not
all) instances. Any information that is not in square brackets is required information. You can
view the syntax of a command by entering one of the following commands:
򐂰 svcinfo -? shows a complete list of informational commands.
򐂰 svctask -? shows a complete list of task commands.
򐂰 svcinfo commandname -? shows the syntax of informational commands.
򐂰 svctask commandname -? shows the syntax of task commands.
򐂰 svcinfo commandname -filtervalue? shows the filters that you can use to reduce the
output of the informational commands.

Help: You can also use -h instead of -?, for example, the svcinfo -h or svctask
commandname -h command.

If you review the syntax of a command by entering svcinfo commandname -?, you often see -filter listed as a parameter. Be aware that the correct parameter is -filtervalue.

Tip: You can use the up and down arrow keys on your keyboard to recall commands that
were recently issued. Then, you can use the left and right, Backspace, and Delete keys to
edit commands before you resubmit them.

Using shortcuts
You can use the shortcuts command to display a list of display or execution commands. This
command produces an alphabetical list of actions that are supported. The command parameter
must be svcinfo for display commands or svctask for execution commands. The model
parameter allows for different shortcuts on different platforms, 2145 or 2076, as shown in the
following example:

<command> shortcuts <model>

Example 10-1 on page 567 is a full list of all shortcut commands.


Example 10-1 Shortcut commands


IBM_2145:ITSO_SVC_DH8:superuser>svctask shortcuts 2145
activatefeature
addhostiogrp
addhostport
addmdisk
addnode
addvdiskaccess
addvdiskcopy
analyzevdisk
analyzevdiskbysystem
applydrivesoftware
applysoftware
cancellivedump
cfgportip
charray
charraymember
chauthservice
chbanner
chcontroller
chcurrentuser
chdrive
chemail
chemailserver
chemailuser
chenclosure
chenclosurecanister
chenclosureslot
chencryption
cherrstate
cheventlog
chfcconsistgrp
chfcmap
chhost
chiogrp
chldap
chldapserver
chlicense
chmdisk
chmdiskgrp
chnode
chnodebattery
chnodebootdrive
chnodehw
chpartnership
chquorum
chrcconsistgrp
chrcrelationship
chsecurity
chsite
chsnmpserver
chsyslogserver
chsystem
chsystemcert
chsystemip
chuser
chusergrp
chvdisk
cleardumps
clearerrlog


cpdumps
deactivatefeature
detectmdisk
dumpallmdiskbadblocks
dumpauditlog
dumperrlog
dumpmdiskbadblocks
enablecli
expandvdisksize
finderr
includemdisk
migrateexts
migratetoimage
migratevdisk
mkarray
mkdistributedarray
mkemailserver
mkemailuser
mkfcconsistgrp
mkfcmap
mkfcpartnership
mkhost
mkimagevolume
mkippartnership
mkldapserver
mkmdiskgrp
mkmetadatavdisk
mkpartnership
mkquorumapp
mkrcconsistgrp
mkrcrelationship
mksnmpserver
mksyslogserver
mkuser
mkusergrp
mkvdisk
mkvdiskhostmap
mkvolume
movevdisk
ping
preplivedump
prestartfcconsistgrp
prestartfcmap
recoverarray
recoverarraybysystem
recovervdisk
recovervdiskbyiogrp
recovervdiskbysystem
repairsevdiskcopy
repairvdiskcopy
resetleds
rmarray
rmemailserver
rmemailuser
rmfcconsistgrp
rmfcmap
rmhost
rmhostiogrp
rmhostport
rmldapserver


rmmdisk
rmmdiskgrp
rmmetadatavdisk
rmnode
rmpartnership
rmportip
rmrcconsistgrp
rmrcrelationship
rmsnmpserver
rmsyslogserver
rmuser
rmusergrp
rmvdisk
rmvdiskaccess
rmvdiskcopy
rmvdiskhostmap
rmvolume
rmvolumecopy
sendinventoryemail
setdisktrace
setlocale
setpwdreset
setsystemtime
settimezone
settrace
shrinkvdisksize
splitvdiskcopy
startemail
startfcconsistgrp
startfcmap
startrcconsistgrp
startrcrelationship
startstats
starttrace
stopemail
stopfcconsistgrp
stopfcmap
stoprcconsistgrp
stoprcrelationship
stopsystem
stoptrace
switchrcconsistgrp
switchrcrelationship
testemail
triggerdrivedump
triggerenclosuredump
triggerlivedump
writesernum


The use of reverse-i-search


If you work on your SVC with the same PuTTY session for many hours and enter many commands, scrolling back to find a previous or similar command can be time-intensive. In this case, the reverse-i-search feature can help you quickly and easily find any command that you issued in your command history by pressing Ctrl+R. By using Ctrl+R, you can interactively search through the command history as you enter commands. Pressing Ctrl+R at an empty command prompt gives you a search prompt, as shown in
Example 10-2.

Example 10-2 Using reverse-i-search


IBM_2145:ITSO_SVC_DH8:superuser>lsiogrp
id name node_count vdisk_count host_count site_id site_name
0 io_grp0 2 9 1
1 io_grp1 0 0 1
2 io_grp2 0 0 1
3 io_grp3 0 0 1
4 recovery_io_grp 0 0 0
(reverse-i-search)`s': lsiogrp

As shown in Example 10-2, we ran a lsiogrp command. By pressing Ctrl+R and entering s,
the command that we needed was recalled from history.

10.1.2 Organizing the window content


There are instances in which the output of a command can be long and difficult to read in the
window. If you need information about a subset of the total number of available items, you can
use filtering to reduce the output to a more manageable size.

Filtering
To reduce the output that is displayed by a command, you can specify a number of filters,
depending on the command that you are running. To see which filters are available, enter the
command followed by the -filtervalue? flag, as shown in Example 10-3.

Example 10-3 lsvdisk -filtervalue? command


IBM_2145:ITSO_SVC_DH8:superuser>lsvdisk -filtervalue?

Filters for this view are:


name
id
IO_group_id
IO_group_name
status
mdisk_grp_name
mdisk_grp_id
capacity
type
FC_id
FC_name
RC_id
RC_name
vdisk_name
vdisk_id

vdisk_UID
fc_map_count


copy_count
fast_write_state
se_copy_count
filesystem
preferred_node_id
mirror_write_priority
RC_change
compressed_copy_count
access_IO_group_count
block_size
owner_type
owner_id
owner_name
parent_mdisk_grp_id
parent_mdisk_grp_name
formatting
volume_id
volume_name
volume_function

When you know the filters, you can be more selective in generating output. Consider the
following points:
򐂰 Multiple filters can be combined to create specific searches.
򐂰 You can use an asterisk (*) as a wildcard when names are used.
򐂰 When capacity is used, the units must also be specified by using -u b | kb | mb | gb | tb |
pb.

For example, if we run the lsvdisk command with no filters but with the -delim parameter, we
see the output that is shown in Example 10-4 on page 571.

Example 10-4 lsvdisk command: No filters


IBM_2145:ITSO_SVC_DH8:superuser>lsvdisk -delim ,
id,name,IO_group_id,IO_group_name,status,mdisk_grp_id,mdisk_grp_name,capacity,type,FC_id,FC
_name,RC_id,RC_name,vdisk_UID,fc_map_count,copy_count,fast_write_state,se_copy_count,RC_cha
nge
0,ESXI_SRV1_VOL01,1,io_grp1,online,many,many,100.00GB,many,,,,,6005076801AF813F100000000000
0014,0,2,empty,0,no
1,volume_7,0,io_grp0,online,0,STGPool_DS3500-1,10.00GB,striped,,,,,6005076801AF813F10000000
0000001F,0,1,empty,1,no
2,W2K3_SRV1_VOL02,1,io_grp1,online,0,STGPool_DS3500-1,10.00GB,striped,,,,,6005076801AF813F1
000000000000003,0,1,empty,0,no
3,W2K3_SRV1_VOL03,1,io_grp1,online,0,STGPool_DS3500-1,10.00GB,striped,,,,,6005076801AF813F1
000000000000004,0,1,empty,0,no
4,W2K3_SRV1_VOL04,1,io_grp1,online,0,STGPool_DS3500-1,10.00GB,striped,,,,,6005076801AF813F1
000000000000005,0,1,empty,0,no
5,W2K3_SRV1_VOL05,1,io_grp1,online,0,STGPool_DS3500-1,10.00GB,striped,,,,,6005076801AF813F1
000000000000006,0,1,empty,0,no
6,W2K3_SRV1_VOL06,1,io_grp1,online,0,STGPool_DS3500-1,10.00GB,striped,,,,,6005076801AF813F1
000000000000007,0,1,empty,0,no
7,W2K3_SRV2_VOL01,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F1
000000000000008,0,1,empty,0,no
8,W2K3_SRV2_VOL02,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F1
000000000000009,0,1,empty,0,no


Tip: The -delim parameter collapses the column-aligned output and separates data fields with the specified delimiter character (a comma in this example) instead of wrapping text over multiple lines. This parameter is often used if you must generate reports during script execution.

If we now add a filter (mdisk_grp_name) to our lsvdisk command, we can reduce the output,
as shown in Example 10-5.

Example 10-5 lsvdisk command: With a filter


IBM_2145:ITSO_SVC_DH8:superuser>lsvdisk -filtervalue mdisk_grp_name=STGPool_DS3500-2
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity
type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count
copy_count fast_write_state se_copy_count RC_change
7,W2K3_SRV2_VOL01,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F1
000000000000008,0,1,empty,0,no
8,W2K3_SRV2_VOL02,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F1
000000000000009,0,1,empty,0,no

10.1.3 UNIX commands available in interactive SSH sessions


You can use several UNIX-based commands while working with interactive SSH sessions.

The system supports up to 32 interactive SSH sessions on the management IP address simultaneously.

Note: An interactive SSH session times out after a fixed period of one hour, at which point the SSH session is automatically closed. This session timeout limit is not configurable.

You can use the following UNIX commands to manage interactive SSH sessions:

Table 10-1 UNIX commands for interactive SSH session


UNIX commands Description

grep Filters output by keywords or expressions.

more Moves through output one page at a time.

sed Filters output by complex expressions.

sort Sorts output according to criteria.

cut Removes individual columns from output.

head Displays only first lines.

less Moves through the output bidirectionally a page at a time. (secure mode)

tail Shows only the last lines.

uniq Hides any duplicate information.

tr Translates characters.

wc Counts lines and words and characters in data.
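
For example, the following sketch combines these UNIX commands with CLI output to count the volumes in one storage pool and to page through a long listing; the pool name reuses the hypothetical configuration from the earlier examples:

IBM_2145:ITSO_SVC_DH8:superuser>lsvdisk -delim , | grep STGPool_DS3500-2 | wc -l
2
IBM_2145:ITSO_SVC_DH8:superuser>lsvdisk | more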


10.2 New commands and functions


The commands that are shown in Example 10-6 were introduced in V7.6.

Example 10-6 New commands


activatefeature
analyzevdisk
analyzevdiskbysystem
chbanner
chsystemcert
deactivatefeature
mkdistributedarray
mkimagevolume
mkmetadatavdisk
mkquorumapp
mkvolume
rmmetadatavdisk
rmvolume
rmvolumecopy
activatefeature / deactivatefeature:
򐂰 Use the activatefeature command to activate a feature (using a license key or keyfile) or
feature trial period. You can use either of the following commands to activate an encryption
license on the system:
– To activate the license by using the key directly, enter the following command in the
command-line interface:
activatefeature -licensekey key
where key is the license key to activate a feature. The key consists of 16 hexadecimal characters organized in four groups of four characters, with each group separated by a hyphen (such as 0123-4567-89AB-CDEF).
– To activate the license with a file path that stores the key, complete these steps:
i. Use (p)scp to copy the license key file (2145_XXXXXXX.xml) to the /tmp directory.
ii. Using the command-line interface, enter the activatefeature -licensekeyfile
filepath, where filepath is full path-to-file that contains all required license
information (such as /tmp/keyfile.xml).
򐂰 Use the deactivatefeature command to deactivate a feature or suspend a feature trial
period.
deactivatefeature feature_id
The value feature_id is a unique ID as displayed when using the lsfeature command,
and is an incremental number (from 0 to 320).
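
The following sketch activates a license and then lists and deactivates a feature; the license key shown is only the format example from above, and the feature ID 0 is a placeholder that you obtain from the lsfeature output:

activatefeature -licensekey 0123-4567-89AB-CDEF
lsfeature
deactivatefeature 0
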
analyzevdisk / analyzevdiskbysystem:

To determine if current volumes on your system could be compressed for additional capacity
savings, the system supports CLI commands that analyze the volumes for potential
compression savings. The analyzevdisk command can be run on a single volume and all the
volumes that are on the system can be analyzed using the analyzevdiskbysystem command.
Any volumes created after the compression analysis completes can be evaluated individually
for compression savings. Ensure that volumes to be analyzed contain as much active data as
possible rather than volumes that are mostly empty of data. Analyzing active data increases
accuracy and reduces the risk of analyzing old data that is already deleted but can still have
traces on the device. These commands provide the functionality of the Comprestimator Utility
which is a tool that can be downloaded to hosts to evaluate compression savings. In some

environments, third-party applications or access to hosts are restricted. The commands provide similar function without these restrictions. Storage administrators can use the commands to determine compression savings for volumes on the system to quickly evaluate compression.

After the analysis completes, you can display the results using the lsvdiskanalysis
command. You can display results for all the volumes or single volumes by specifying a
volume name or identifier for individual analysis.
򐂰 To analyze a single volume for compression savings, complete these steps:
– In the command-line interface, enter the following command:
analyzevdisk -vdisk_name | -vdisk_id
where -vdisk_name or -vdisk_id is either the name or identifier for the volume that you want to analyze for compression savings.
– Analysis results can be displayed after the process completes by issuing the following
command:
lsvdiskanalysis -vdisk_name | -vdisk_id
where -vdisk_name or -vdisk_id is either the name or identifier for the volume that you
want to analyze for compression savings.
򐂰 To analyze all the volumes that are currently on the system, complete these steps:
– In the command-line interface, enter the following command:
analyzevdiskbysystem
This command analyzes all the current volumes that are created on the system.
Volumes that are created during or after the analysis are not included and can be analyzed individually with the analyzevdisk command. The time needed to analyze all the volumes on the system depends on the number of volumes being analyzed; expect results at a rate of approximately one minute per volume. For example, if a system has 50 volumes, the compression savings analysis takes approximately 50 minutes.
– To check the progress of the analysis, enter the following command:
lsvdiskanalysisprogress
This command displays the total number of volumes on the system, the total number of
volumes that are remaining to be analyzed, and the estimated time of completion.
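
The following sketch runs a system-wide analysis, checks its progress, and then displays the result for a single volume; the volume name is hypothetical and can be replaced by a volume ID:

analyzevdiskbysystem
lsvdiskanalysisprogress
lsvdiskanalysis W2K3_SRV2_VOL01
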
chbanner:

You can create or change a message that displays when users log on to the system. When
users log on to the system with the management GUI, command-line interface, or service
assistant, the message displays before they log on to the system.

To create or change the login message, you can use the chbanner command or the
management GUI. If you are using the command, you must create the message in a
supported text editor and use Secure Copy (SCP) to copy the file to the configuration node on
the system.

To change the login message from a SAN administrator workstation, complete the following steps:
1. Use a suitable text editor to create a file that contains the text of the message.
Note: The message cannot exceed 4 KB.


2. Use a Secure Copy client to copy the file to the configuration node of the system to be
configured (e.g. /tmp/loginmessage). Specify the management IP address of the system
to be configured.
3. Log on to the system to be configured.
4. In the command-line interface, type the following command to set the login message:
chbanner -file filepath
where filepath specifies the file that contains the text of the new message (e.g.
/tmp/loginmessage).
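
The following sketch shows these steps from an administrator workstation, assuming an OpenSSH scp client and the file path that is used above; the management IP address and file name are placeholders:

scp loginmessage.txt superuser@<management_ip>:/tmp/loginmessage
ssh superuser@<management_ip>
chbanner -file /tmp/loginmessage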

chsystemcert:

Use the chsystemcert command to manage the Secure Sockets Layer (SSL) certificate that is
installed on a clustered system. You can also generate a new self-signed SSL certificate. This
command can also be used to create a certificate request to be copied from the system and
signed by a certificate authority (CA). The signed certificate that is returned by the CA can be
installed. You can also use this command to export the current SSL certificate (for example to
allow the certificate to be imported into a key server).

During system setup, an initial certificate is created to use for secure connections between
web browsers. Based on the security requirements for your system, you can create either a
new self-signed certificate or install a signed certificate that is created by a third-party
certificate authority. Self-signed certificates are generated automatically by the system and
encrypt communications between the browser and the system. Self-signed certificates can
generate web browser security warnings and might not comply with organizational security
guidelines.

mkdistributedarray:

Use the mkdistributedarray command to create a distributed array and add it to a storage
pool.

On 2145-DH8 nodes, distributed array configurations create large-scale internal MDisks. These arrays, which can contain 4 - 128 drives, also contain rebuild areas that are used to maintain redundancy after a drive fails. As a result, the distributed configuration dramatically reduces rebuild times and decreases the exposure volumes have to the extra load of recovering redundancy.

Distributed array configurations can contain 4 - 128 drives. Distributed arrays
remove the need for separate drives that are idle until a failure occurs. Instead of allocating
one or more drives as spares, the spare capacity is distributed over specific rebuild areas
across all the member drives. Data can be copied faster to the rebuild area and redundancy is
restored much more rapidly. Additionally, as the rebuild progresses, the performance of the
pool is more uniform because all of the available drives are used for every volume extent.
After the failed drive is replaced, data is copied back to the drive from the distributed spare
capacity. Unlike "hot spare" drives, read/write requests are processed on other parts of the
drive that are not being used as rebuild areas. The number of rebuild areas is based on the
width of the array. The size of the rebuild area determines how many times the distributed
array can recover failed drives without risking becoming degraded. For example, a distributed array that uses RAID 6 can handle two concurrent drive failures. After the failed drives have
been rebuilt, the array can tolerate another two drive failures. If all of the rebuild areas are
used to recover data, the array becomes degraded on the next drive failure.

Supported RAID levels:


򐂰 Distributed RAID 5

Distributed RAID 5 arrays stripe data over the member drives with one parity strip on every
stripe. These distributed arrays can support 4 - 128 drives. RAID 5 distributed arrays can
tolerate the failure of one member drive.
򐂰 Distributed RAID 6
RAID 6 arrays stripe data over the member drives with two parity strips on every stripe.
These distributed arrays can support 6 - 128 drives. A RAID 6 distributed array can
tolerate any two concurrent member drive failures.
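
The following sketch creates a distributed RAID 6 array from 12 drives of drive class 0 with one rebuild area and adds it to an existing storage pool; the drive class, drive count, stripe width, and pool name are assumptions for illustration, so adjust them to your configuration:

mkdistributedarray -level raid6 -driveclass 0 -drivecount 12 -stripewidth 10 -rebuildareas 1 Pool0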

mkimagevolume:

Use the mkimagevolume command to create a new image mode volume. This command is for
high availability configurations including HyperSwap or stretched systems.
mkmetadatavdisk / rmmetadatavdisk:
򐂰 To enable Virtual Volumes by using the command-line interface (CLI), a utility volume is
required to store critical metadata for Virtual Volumes. To create the utility volume on the
system, you must have either the administrator or the security administrator user role. If
possible, have a mirrored copy of the utility volume stored in a second storage pool in a
separate failure domain. Use a storage pool that is made from MDisks that are presented
from a different storage controller or a different I/O group. For a single storage pool, enter
the following command:
svctask mkmetadatavdisk -mdiskgrp mdiskgrpid
For multiple storage pools, enter the following command:
svctask mkmetadatavdisk -mdiskgrp mdiskgrpid_1:mdiskgrpid_2
To complete the Virtual Volumes implementation, see the documentation in the IBM
Spectrum Control Base Edition User Guide about Creating a VVOL-enabled service on
Spectrum Virtualize/Storwize storage systems.
򐂰 Use the rmmetadatavdisk command to remove the metadata volume from a storage pool.
When -ignorevvolsexist is specified, only the metadata volume is deleted.
rmmetadatavdisk -ignorevvolsexist

mkquorumapp:

In some stretched configurations or HyperSwap configurations, IP quorum applications can be used as a third site to house quorum devices. Use the mkquorumapp command to generate a Java application to use for quorum.

Note: For more information on stretched or HyperSwap system configurations, refer to Appendix C, “Stretched Cluster” on page 939.

To use an IP-based quorum application as the quorum device for the third site, no Fibre
Channel connectivity is used. Java applications are run on hosts at the third site. However,
there are strict requirements on the IP network and slight disadvantages with using IP
quorum applications. Unlike quorum disks, all IP quorum applications must be reconfigured
and redeployed to hosts when certain aspects of the system configuration change. These
aspects include adding or removing a node from the system or when node service IP
addresses are changed.

For stable quorum resolutions, an IP network must provide the following requirements:
򐂰 Connectivity from the hosts to the service IP addresses of all nodes. The network must
also deal with possible security implications of exposing the service IP addresses, as this
connectivity can also be used to access the service GUI if IP quorum is configured
incorrectly.
򐂰 Port 1260 is used by IP quorum applications to communicate from the hosts to all nodes.
򐂰 The maximum round-trip delay must not exceed 80 milliseconds (ms), which means 40 ms
each direction.
򐂰 A minimum bandwidth of 2 megabytes per second must be guaranteed for node-to-quorum
traffic.

Even with IP quorum applications at the third site, quorum disks at site one and site two are
required, as they are used to store metadata. To provide quorum resolution, use the
mkquorumapp command to generate a Java application that is copied from the system and run
on a host at a third site. The maximum number of applications that can be deployed is five.
Currently, supported Java Runtime Environments are IBM Java 7.1 and IBM Java 8.
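As a sketch, the deployment flow is to generate the application on the system, copy it to a host
at the third site, and start it there. The application file name and its location in the dumps
directory are our assumptions; verify them for your code level:
svctask mkquorumapp
(copy the generated ip_quorum.jar file from the configuration node, for example by using scp, to the third-site host)
java -jar ip_quorum.jar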

mkvolume / rmvolume:
򐂰 Use the mkvolume command to create an empty volume from existing storage pools. This
command is for high availability configurations including HyperSwap or stretched systems.
A HyperSwap volume has one copy on each site. Before you create HyperSwap volumes,
you must configure the HyperSwap topology.
After you configure the HyperSwap topology, complete one of the following steps.
– If you are using the management GUI, use the Create Volumes wizard to create
HyperSwap volumes.
– If you are using the command-line interface, the mkvolume command creates a
volume. A HyperSwap volume is created by specifying two storage pools in
independent sites. For example:
mkvolume -size 100 -pool site1pool:site2pool
򐂰 Use the rmvolume command to remove a volume and all copies and mirrors. For a
HyperSwap volume, this includes deleting the active-active relationship and the change
volumes.

rmvolumecopy:

Use the rmvolumecopy command to remove a volume copy and all copies and mirrors.
HyperSwap volumes that are part of a consistency group must be removed from that
consistency group before you can remove the last volume copy from that site.
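As a sketch, the copy of a HyperSwap volume at one site might be removed as follows;
selecting the copy by its storage pool with the -pool parameter is our assumption, and the pool
and volume names are illustrative:
svctask rmvolumecopy -pool site2pool HyperSwap_Volume1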

V7.6 includes command changes and the addition of attributes and variables for several
existing commands. For more information, see the command reference or help, which is
available at this website:
https://ibm.biz/BdHnKF

10.3 Working with managed disks and disk controller systems


This section describes various configuration and administrative tasks for managed disks
(MDisks) within the SVC environment and the tasks that you can perform at a disk controller
level.


10.3.1 Viewing disk controller details


Use the lscontroller command to display summary information about all available back-end
storage systems.

To display more detailed information about a specific controller, run the command again and
append the controller name or ID, for example, controller ID 7, as shown in
Example 10-7.

Example 10-7 lscontroller command

IBM_2145:ITSO_SVC_DH8:superuser>lscontroller 7

id 7
controller_name controller7
WWNN 20000004CF2412AC
mdisk_link_count 1
max_mdisk_link_count 1
degraded no
vendor_id SEAGATE
product_id_low ST373405
product_id_high FC
product_revision 0003
ctrl_s/n 3EK0J5Y8
allow_quorum no
site_id 2
site_name DR
WWPN 22000004CF2412AC
path_count 1
max_path_count 1
WWPN 21000004CF2412AC
path_count 0
max_path_count 0
fabric_type sas_direct

10.3.2 Renaming a controller


Use the chcontroller command to change the name of a storage controller. To verify the
change, run the lscontroller command. Example 10-8 shows both of these commands.

Example 10-8 chcontroller command


IBM_2145:ITSO_SVC_DH8:superuser>chcontroller -name DS_3400 4
IBM_2145:ITSO_SVC_DH8:superuser>lscontroller
id controller_name ctrl_s/n vendor_id product_id_low
product_id_high site_id site_name
0 V7000_Gen2 2076 IBM 2145
3 quorum
1 controller1 2076 IBM 2145
1 site1
2 controller2 2076 IBM 2145
1 site1
3 controller3 2076 IBM 2145
2 site2
4 DS_3400 IBM 1726-4xx FAStT
2 site2

This command renames the controller with ID 4 to DS_3400.


Choosing a new name: The chcontroller command specifies the new name first. You
can use letters A - Z, a - z, numbers 0 - 9, the dash (-), and the underscore (_). The new
name can be 1 - 63 characters. However, the new name cannot start with a number, dash,
or the word “controller” because this prefix is reserved for SVC assignment only.

10.3.3 Discovery status


Use the lsdiscoverystatus command to determine whether a discovery operation is in
progress, as shown in Example 10-9. The output of this command is a status of active or
inactive.

Example 10-9 lsdiscoverystatus command


IBM_2145:ITSO_SVC_DH8:superuser>lsdiscoverystatus -delim :
id:scope:IO_group_id:IO_group_name:status
0:fc_fabric:::active
1:sas_iogrp:0:io_grp0:inactive
3:sas_iogrp:2:io_grp2:active

This command displays the state of all discoveries in the clustered system. During discovery,
the system updates the drive and MDisk records. You must wait until the discovery finishes
and is inactive before you attempt to use the system. This command displays one of the
following results:
򐂰 Active: A discovery operation is in progress at the time that the command is issued.
򐂰 Inactive: No discovery operations are in progress at the time that the command is issued.

10.3.4 Discovering MDisks


The clustered system detects the MDisks automatically when they appear in the network.
However, certain Fibre Channel (FC) controllers do not send the required Small Computer
System Interface (SCSI) primitives that are necessary to automatically discover the new
MDisks.

If new storage was attached and the clustered system did not detect the new storage, you
might need to run this command before the system can detect the new MDisks.

Use the detectmdisk command to scan for newly added MDisks, as shown in
Example 10-10.

Example 10-10 detectmdisk


IBM_2145:ITSO_SVC_DH8:superuser>detectmdisk

To check whether any newly added MDisks were successfully detected, run the lsmdisk
command and look for new unmanaged MDisks.

If the disks do not appear, check that the disk is appropriately assigned to the SVC in the disk
subsystem and that the zones are set up correctly.

Discovery process: If you assigned many logical unit numbers (LUNs) to your SVC, the
discovery process can take time. Check several times by using the lsmdisk command to
see whether all the expected MDisks are present.
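For example, the following sketch limits the lsmdisk output to the MDisks that are not yet
assigned to a storage pool:
IBM_2145:ITSO_SVC_DH8:superuser>lsmdisk -filtervalue mode=unmanaged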


When all the disks that are allocated to the SVC are seen from the SVC system, the following
procedure is a useful way to verify the MDisks that are unmanaged and ready to be added to
the storage pool.

Complete the following steps to display MDisks:


1. Enter the lsmdiskcandidate command, as shown in Example 10-11. This command
displays all detected MDisks that are not part of a storage pool.

Example 10-11 lsmdiskcandidate command


IBM_2145:ITSO_SVC_DH8:superuser>lsmdiskcandidate
id
0
1
2
.
.

Alternatively, you can list all MDisks (managed or unmanaged) by running the lsmdisk
command, as shown in Example 10-12.

Example 10-12 lsmdisk command


IBM_2145:ITSO_SVC_DH8:superuser>lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
tier encrypt site_id site_name
0 mdisk0 online managed 0 Site2_Pool 500.0GB 0000000000000000
Site2_V7K_EXTSG03_A 6005076802880102c00000000000001400000000000
000000000000000000000 nearline no 2 site2
1 mdisk1 online managed 0 Site2_Pool 500.0GB 0000000000000001
Site2_V7K_EXTSG03_A 6005076802880102c00000000000001500000000000
000000000000000000000 nearline no 2 site2
2 mdisk2 online managed 0 Site2_Pool 500.0GB 0000000000000002
Site2_V7K_EXTSG03_A 6005076802880102c00000000000001600000000000
000000000000000000000 nearline no 2 site2
3 mdisk3 online managed 3 Site1_Pool 500.0GB 0000000000000000
Site1_V7K_EXTSG04_A 600507680284801ac80000000000000000000000000
000000000000000000000 enterprise no 1 site1
4 mdisk4 online managed 3 Site1_Pool 500.0GB 0000000000000001
Site1_V7K_EXTSG04_A 600507680284801ac80000000000000100000000000
000000000000000000000 enterprise no 1 site1
5 mdisk5 online managed 3 Site1_Pool 500.0GB 0000000000000002
Site1_V7K_EXTSG04_A 600507680284801ac80000000000000200000000000
000000000000000000000 enterprise no 1 site1
6 mdisk6 online managed 2 Quorum 500.0GB 0000000000000000
Quorum_V7K_EXTSG05_A 60050768028a8002680000000000000000000000000
000000000000000000000 nearline no 3 quorum

From this output, you can see more information, such as the status, about each MDisk.
For our current task, we are interested only in the unmanaged disks because they are
candidates for a storage pool.

Tip: The -delim parameter specifies a single character that separates the output fields, which collapses the output instead of wrapping text over multiple lines.


2. If not all of the MDisks that you expected are visible, rescan the available FC network by
entering the detectmdisk command, as shown in Example 10-13.

Example 10-13 detectmdisk command


IBM_2145:ITSO_SVC_DH8:superuser>detectmdisk

3. If you run the lsmdiskcandidate command again and your MDisk or MDisks are still not
visible, check that the LUNs from your subsystem were correctly assigned to the SVC and
that the appropriate zoning is in place (for example, the SVC can see the disk subsystem).

10.3.5 Viewing MDisk information


When you are viewing information about the MDisks (managed or unmanaged), you can use
the lsmdisk command to display overall summary information about all available managed
disks. To display more detailed information about a specific MDisk, run the command again
and append the MDisk name or ID (for example, mdisk0).

The overview command is lsmdisk -delim, as shown in Example 10-14.

Example 10-14 lsmdisk command

IBM_2145:ITSO_SVC_DH8:superuser>lsmdisk -delim " "


id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
tier encrypt site_id site_name
0 mdisk0 online managed 0 Site2_Pool 500.0GB 0000000000000000 Site2_V7K_EXTSG03_A
6005076802880102c00000000000001400000000000000000000000000000000 nearline no 2 site2
1 mdisk1 online managed 0 Site2_Pool 500.0GB 0000000000000001 Site2_V7K_EXTSG03_A
6005076802880102c00000000000001500000000000000000000000000000000 nearline no 2 site2
2 mdisk2 online managed 0 Site2_Pool 500.0GB 0000000000000002 Site2_V7K_EXTSG03_A
6005076802880102c00000000000001600000000000000000000000000000000 nearline no 2 site2
3 mdisk3 online managed 3 Site1_Pool 500.0GB 0000000000000000 Site1_V7K_EXTSG04_A
600507680284801ac80000000000000000000000000000000000000000000000 enterprise no 1 site1
4 mdisk4 online managed 3 Site1_Pool 500.0GB 0000000000000001 Site1_V7K_EXTSG04_A
600507680284801ac80000000000000100000000000000000000000000000000 enterprise no 1 site1
5 mdisk5 online managed 3 Site1_Pool 500.0GB 0000000000000002 Site1_V7K_EXTSG04_A
600507680284801ac80000000000000200000000000000000000000000000000 enterprise no 1 site1
6 mdisk6 online managed 2 Quorum 500.0GB 0000000000000000 Quorum_V7K_EXTSG05_A
60050768028a8002680000000000000000000000000000000000000000000000 nearline no 3 quorum

To see the detailed view for an individual MDisk, run the lsmdisk command with the name or
ID of the MDisk for which you want the information, as shown in Example 10-15.

Example 10-15 Usage of the lsmdisk name or ID command


IBM_2145:ITSO_SVC_DH8:superuser>lsmdisk 6
id 6
name mdisk6
status online
mode managed
mdisk_grp_id 0
mdisk_grp_name Site2_Pool
capacity 500.0GB
quorum_index
block_size 512
controller_name Site2_V7K_EXTSG03_A
ctrl_type 4
ctrl_WWNN 5005076802002B6C
controller_id 2
path_count 2
max_path_count 2
ctrl_LUN_# 0000000000000006
UID 6005076802880102c00000000000001a00000000000000000000000000000000
preferred_WWPN 5005076802102B6D
active_WWPN many
fast_write_state empty
raid_status
raid_level
redundancy
strip_size
spare_goal
spare_protection_min
balanced
tier nearline
slow_write_priority
fabric_type fc
site_id 2
site_name site2
easy_tier_load high
encrypt no
distributed no
drive_class_id
drive_count 0
stripe_width 0
rebuild_areas_total
rebuild_areas_available
rebuild_areas_goal

10.3.6 Renaming an MDisk


Use the chmdisk command to change the name of an MDisk. When you use this command,
be aware that the new name is listed first, and the ID or name of the MDisk to be renamed is
listed next. Use this format: chmdisk -name <new name> <current ID/name>. Use the lsmdisk
command to verify the change. Example 10-16 shows the chmdisk command.

Example 10-16 chmdisk command


IBM_2145:ITSO_SVC_DH8:superuser>chmdisk -name mdisk_0 mdisk0

This command renamed the MDisk that is named mdisk0 to mdisk_0.

The chmdisk command: The chmdisk command specifies the new name first. You can
use letters A - Z, a - z, numbers 0 - 9, the dash (-), and the underscore (_). The new name
can be 1 - 63 characters. However, the new name cannot start with a number, dash, or the
word “MDisk” because this prefix is reserved for SVC assignment only.

10.3.7 Including an MDisk


If a significant number of errors occur on an MDisk, the SVC automatically excludes it. These
errors can result from a hardware problem, a SAN problem, or poorly planned maintenance. If
the error is a hardware fault, you can receive a Simple Network Management Protocol
(SNMP) alert about the state of the disk subsystem (before the disk was excluded), and you
can undertake preventive maintenance. Otherwise, the hosts that use volumes (VDisks) with
extents on the excluded MDisk receive I/O errors.


By running the lsmdisk command, you can see that mdisk4 is excluded, as shown in
Example 10-17.

Example 10-17 lsmdisk command: Excluded MDisk


IBM_2145:ITSO_SVC_DH8:superuser>lsmdisk -delim " "
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
tier encrypt site_id site_name
0 mdisk0 online managed 0 Site2_Pool 500.0GB 0000000000000000 Site2_V7K_EXTSG03_A
6005076802880102c00000000000001400000000000000000000000000000000 nearline no 2 site2
1 mdisk1 online managed 0 Site2_Pool 500.0GB 0000000000000001 Site2_V7K_EXTSG03_A
6005076802880102c00000000000001500000000000000000000000000000000 nearline no 2 site2
2 mdisk2 online managed 0 Site2_Pool 500.0GB 0000000000000002 Site2_V7K_EXTSG03_A
6005076802880102c00000000000001600000000000000000000000000000000 nearline no 2 site2
3 mdisk3 online managed 3 Site1_Pool 500.0GB 0000000000000000 Site1_V7K_EXTSG04_A
600507680284801ac80000000000000000000000000000000000000000000000 enterprise no 1 site1
4 mdisk4 excluded managed 3 Site1_Pool 500.0GB 0000000000000001 Site1_V7K_EXTSG04_A
600507680284801ac80000000000000100000000000000000000000000000000 enterprise no 1 site1
5 mdisk5 online managed 3 Site1_Pool 500.0GB 0000000000000002 Site1_V7K_EXTSG04_A
600507680284801ac80000000000000200000000000000000000000000000000 enterprise no 1 site1
6 mdisk6 online managed 2 Quorum 500.0GB 0000000000000000 Quorum_V7K_EXTSG05_A
60050768028a8002680000000000000000000000000000000000000000000000 nearline no 3 quorum

After the necessary corrective action is taken to repair the MDisk (replace the failed disk,
repair the SAN zones, and so on), we must include the MDisk again. We issue the
includemdisk command (Example 10-18) because the SVC system does not include the
MDisk automatically.

Example 10-18 includemdisk


IBM_2145:ITSO_SVC_DH8:superuser>includemdisk mdisk4

Running the lsmdisk command again shows that mdisk4 is online again, as shown in
Example 10-19.

Example 10-19 lsmdisk command: Verifying that an MDisk is included


IBM_2145:ITSO_SVC_DH8:superuser>lsmdisk -delim " "
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
tier encrypt site_id site_name
0 mdisk0 online managed 0 Site2_Pool 500.0GB 0000000000000000 Site2_V7K_EXTSG03_A
6005076802880102c00000000000001400000000000000000000000000000000 nearline no 2 site2
1 mdisk1 online managed 0 Site2_Pool 500.0GB 0000000000000001 Site2_V7K_EXTSG03_A
6005076802880102c00000000000001500000000000000000000000000000000 nearline no 2 site2
2 mdisk2 online managed 0 Site2_Pool 500.0GB 0000000000000002 Site2_V7K_EXTSG03_A
6005076802880102c00000000000001600000000000000000000000000000000 nearline no 2 site2
3 mdisk3 online managed 3 Site1_Pool 500.0GB 0000000000000000 Site1_V7K_EXTSG04_A
600507680284801ac80000000000000000000000000000000000000000000000 enterprise no 1 site1
4 mdisk4 online managed 3 Site1_Pool 500.0GB 0000000000000001 Site1_V7K_EXTSG04_A
600507680284801ac80000000000000100000000000000000000000000000000 enterprise no 1 site1
5 mdisk5 online managed 3 Site1_Pool 500.0GB 0000000000000002 Site1_V7K_EXTSG04_A
600507680284801ac80000000000000200000000000000000000000000000000 enterprise no 1 site1
6 mdisk6 online managed 2 Quorum 500.0GB 0000000000000000 Quorum_V7K_EXTSG05_A
60050768028a8002680000000000000000000000000000000000000000000000 nearline no 3 quorum


10.3.8 Adding MDisks to a storage pool


If you created an empty storage pool or you assign more MDisks to your configured storage
pool, you can use the addmdisk command to populate the storage pool, as shown in
Example 10-20.

Example 10-20 addmdisk command

IBM_2145:ITSO_SVC_DH8:superuser>addmdisk -mdisk mdisk6 STGPool_Multi_Tier

You can add only unmanaged MDisks to a storage pool. This command adds the MDisk
named mdisk6 to the storage pool that is named STGPool_Multi_Tier.

Important: Do not add this MDisk to a storage pool if you want to create an image mode
volume from the MDisk that you are adding. When you add an MDisk to a storage pool, it
becomes managed and extent mapping is not necessarily one-to-one anymore.

10.3.9 Showing MDisks in a storage pool


Use the lsmdisk -filtervalue command (as shown in Example 10-21) to see the MDisks
that are part of a specific storage pool. This command shows all of the MDisks that are part of
a storage pool named Site2_Pool.

Example 10-21 lsmdisk -filtervalue: MDisks in the managed disk group (MDG)
IBM_2145:ITSO_SVC_DH8:superuser>lsmdisk -filtervalue mdisk_grp_name=Site2_Pool
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
tier encrypt site_id site_name
0 mdisk0 online managed 0 Site2_Pool 500.0GB 0000000000000000 Site2_V7K_EXTSG03_A
6005076802880102c00000000000001400000000000000000000000000000000 nearline no 2 site2
1 mdisk1 online managed 0 Site2_Pool 500.0GB 0000000000000001 Site2_V7K_EXTSG03_A
6005076802880102c00000000000001500000000000000000000000000000000 nearline no 2 site2
2 mdisk2 online managed 0 Site2_Pool 500.0GB 0000000000000002 Site2_V7K_EXTSG03_A
6005076802880102c00000000000001600000000000000000000000000000000 nearline no 2 site2

By using a wildcard with this command, you can see all of the MDisks that are present in the
storage pools that are named Site2_Pool* (the asterisk (*) indicates a wildcard).
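For example, the following sketch lists the MDisks in every storage pool whose name begins
with Site2_Pool:
IBM_2145:ITSO_SVC_DH8:superuser>lsmdisk -filtervalue "mdisk_grp_name=Site2_Pool*"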

10.3.10 Working with a storage pool


Before we can create any volumes on the SVC clustered system, we must virtualize the
allocated storage that is assigned to the SVC. After we assign LUNs to the SVC, they appear
as managed disks (MDisks), but we cannot use them until they are members of a storage pool.
Therefore, one of our first operations is to create a storage pool where we can place our
MDisks.

This section describes the operations that use MDisks and the storage pool. It also explains
the tasks that we can perform at the storage pool level.

10.3.11 Creating a storage pool


After a successful login to the CLI of the SVC, we create the storage pool.

Create a storage pool by using the mkmdiskgrp command, as shown in Example 10-22.


Example 10-22 mkmdiskgrp command

IBM_2145:ITSO_SVC_DH8:superuser>mkmdiskgrp -name CompressedV7000 -ext 256
MDisk Group, id [4], successfully created

This command creates a storage pool that is called CompressedV7000. The extent size that is
used within this group is 256 MiB. We did not add any MDisks to the storage pool, so it is an
empty storage pool.

You can add unmanaged MDisks and create the storage pool in the same command. Use the
mkmdiskgrp command with the -mdisk parameter and enter the IDs or names of the MDisks to
add the MDisks immediately after the storage pool is created.

Before the creation of the storage pool, enter the lsmdisk command, as shown in
Example 10-23. This command lists all of the available MDisks that are seen by the SVC
system.

Example 10-23 Listing the available MDisks


IBM_2145:ITSO_SVC_DH8:superuser>lsmdisk -delim " "
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
tier encrypt site_id site_name
0 mdisk0 online managed 0 Site2_Pool 500.0GB 0000000000000000 Site2_V7K_EXTSG03_A
6005076802880102c00000000000001400000000000000000000000000000000 nearline no 2 site2
1 mdisk1 online managed 0 Site2_Pool 500.0GB 0000000000000001 Site2_V7K_EXTSG03_A
6005076802880102c00000000000001500000000000000000000000000000000 nearline no 2 site2
2 mdisk2 online managed 0 Site2_Pool 500.0GB 0000000000000002 Site2_V7K_EXTSG03_A
6005076802880102c00000000000001600000000000000000000000000000000 nearline no 2 site2
3 mdisk3 online managed 3 Site1_Pool 500.0GB 0000000000000000 Site1_V7K_EXTSG04_A
600507680284801ac80000000000000000000000000000000000000000000000 enterprise no 1 site1
4 mdisk4 online managed 3 Site1_Pool 500.0GB 0000000000000001 Site1_V7K_EXTSG04_A
600507680284801ac80000000000000100000000000000000000000000000000 enterprise no 1 site1
5 mdisk5 online managed 3 Site1_Pool 500.0GB 0000000000000002 Site1_V7K_EXTSG04_A
600507680284801ac80000000000000200000000000000000000000000000000 enterprise no 1 site1
6 mdisk6 online managed 2 Quorum 500.0GB 0000000000000000 Quorum_V7K_EXTSG05_A
60050768028a8002680000000000000000000000000000000000000000000000 nearline no 3 quorum
7 mdisk7 online unmanaged 100.0GB 000000000000000A Site2_V7K_EXTSG03_A
6005076802880102c00000000000001e00000000000000000000000000000000 enterprise no 2 site2
8 mdisk8 online unmanaged 100.0GB 000000000000000A Site1_V7K_EXTSG04_A
600507680284801ac80000000000000a00000000000000000000000000000000 enterprise no 1 site1

By using the same command (mkmdiskgrp) and knowing the IDs of the MDisks that we want to
use, we can create a storage pool and add multiple MDisks to it at the same time. We now
create another storage pool that contains the unmanaged MDisks, as shown in Example 10-24
on page 585.

Example 10-24 Creating a storage pool and adding available MDisks


IBM_2145:ITSO_SVC_DH8:superuser>mkmdiskgrp -name ITSO_Pool1 -ext 256 -mdisk 7:8
MDisk Group, id [5], successfully created

This command creates a storage pool that is called ITSO_Pool1. The extent size that is used
within this group is 256 MiB, and two MDisks (7 and 8) are added to the storage pool.


Storage pool name: The -name and -mdisk parameters are optional. If you do not enter a
-name, the default is MDiskgrpx, where x is the ID sequence number that is assigned by the
SVC internally. If you do not enter the -mdisk parameter, an empty storage pool is created.

If you want to provide a name, you can use letters A - Z, a - z, numbers 0 - 9, and the
underscore (_). The name can be 1 - 63 characters, but it cannot start with a number or the
word “MDiskgrp” because this prefix is reserved for SVC assignment only.

By running the lsmdisk command, you now see the MDisks as managed and as part of the
ITSO_Pool1, as shown in Example 10-25.

Example 10-25 lsmdisk command


IBM_2145:ITSO_SVC_DH8:superuser>lsmdisk -filtervalue mdisk_grp_name=ITSO_Pool1
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID tier
encrypt site_id site_name
7 mdisk7 online managed 5 ITSO_Pool1 100.0GB 000000000000000A
Site2_V7K_EXTSG03_A 6005076802880102c00000000000001e00000000000000000000000000000000
enterprise no 2 site2
8 mdisk8 online managed 5 ITSO_Pool1 100.0GB 000000000000000A
Site1_V7K_EXTSG04_A 600507680284801ac80000000000000a00000000000000000000000000000000
enterprise no 1 site1

In SVC 7.6, you can also create a child pool, which is a storage pool that is inside a parent
pool (Example 10-26 on page 586).

Example 10-26 Creating a child pool

IBM_2145:ITSO_SVC_DH8:superuser>mkmdiskgrp -name Pool0_child -unit gb -size 20 -parentmdiskgrp Site2_Pool

Now, the required tasks to create a storage pool are complete.

10.3.12 Viewing storage pool information


Use the lsmdiskgrp command to display information about the storage pools that are
defined in the SVC, as shown in Example 10-27.

Example 10-27 lsmdiskgrp command


IBM_2145:ITSO_SVC_DH8:superuser>lsmdiskgrp -delim :
id:name:status:mdisk_count:vdisk_count:capacity:extent_size:free_capacity:virtual_capacity:
used_capacity:real_capacity:overallocation:warning:easy_tier:easy_tier_status:compression_a
ctive:compression_virtual_capacity:compression_compressed_capacity:compression_uncompressed
_capacity:parent_mdisk_grp_id:parent_mdisk_grp_name:child_mdisk_grp_count:child_mdisk_grp_c
apacity:type:encrypt:owner_type:site_id:site_name
0:Site2_Pool:online:10:8:4.88TB:1024:2.08TB:3.10TB:1.10TB:1.10TB:63:80:auto:balanced:no:0.0
0MB:0.00MB:0.00MB:0:Site2_Pool:1:1.71TB:parent:no:none:2:site2
1:Pool0_child:online:0:0:1.71TB:1024:1.71TB:0.00MB:0.00MB:0.00MB:0:0:auto:balanced:no:0.00M
B:0.00MB:0.00MB:0:Site2_Pool:0:0.00MB:child_thick::none:2:site2
2:Quorum:online:4:0:1.95TB:1024:1.95TB:0.00MB:0.00MB:0.00MB:0:80:auto:balanced:no:0.00MB:0.
00MB:0.00MB:2:Quorum:0:0.00MB:parent:no:none:3:quorum
3:Site1_Pool:online:10:2:4.88TB:1024:4.87TB:15.00GB:15.00GB:15.00GB:0:80:auto:balanced:no:0
.00MB:0.00MB:0.00MB:3:Site1_Pool:0:0.00MB:parent:no:none:1:site1
4:CompressedV7000:online:0:0:0:256:0:0.00MB:0.00MB:0.00MB:0:0:auto:balanced:no:0.00MB:0.00M
B:0.00MB:4:CompressedV7000:0:0.00MB:parent::none::

586 Implementing the IBM System Storage SAN Volume Controller with IBM Spectrum Virtualize V7.6
Draft Document for Review February 4, 2016 8:01 am 7933 09 CLI Operations Max.fm

5:ITSO_Pool1:online:2:0:2.00GB:256:2.00GB:0.00MB:0.00MB:0.00MB:0:0:auto:balanced:no:0.00MB:
0.00MB:0.00MB:5:ITSO_Pool1:0:0.00MB:parent:no:none::

10.3.13 Renaming a storage pool


Use the chmdiskgrp command to change the name of a storage pool. To verify the change,
run the lsmdiskgrp command. Example 10-28 shows both of these commands.

Example 10-28 chmdiskgrp command


IBM_2145:ITSO_SVC_DH8:superuser>chmdiskgrp -name ITSO_Pool1_renamed 5
IBM_2145:ITSO_SVC_DH8:superuser>lsmdiskgrp -delim :
id:name:status:mdisk_count:vdisk_count:capacity:extent_size:free_capacity:virtual_capacity:
used_capacity:real_capacity:overallocation:warning:easy_tier:easy_tier_status:compression_a
ctive:compression_virtual_capacity:compression_compressed_capacity:compression_uncompressed
_capacity:parent_mdisk_grp_id:parent_mdisk_grp_name:child_mdisk_grp_count:child_mdisk_grp_c
apacity:type:encrypt:owner_type:site_id:site_name
0:Site2_Pool:online:10:8:4.88TB:1024:2.47TB:3.10TB:1.10TB:1.10TB:63:80:auto:balanced:no:0.0
0MB:0.00MB:0.00MB:0:Site2_Pool:1:1.71TB:parent:no:none:2:site2
1:Pool0_child:online:0:0:1.71TB:1024:1.71TB:0.00MB:0.00MB:0.00MB:0:0:auto:balanced:no:0.00M
B:0.00MB:0.00MB:0:Site2_Pool:0:0.00MB:child_thick::none:2:site2
2:Quorum:online:4:0:1.95TB:1024:1.95TB:0.00MB:0.00MB:0.00MB:0:80:auto:balanced:no:0.00MB:0.
00MB:0.00MB:2:Quorum:0:0.00MB:parent:no:none:3:quorum
3:Site1_Pool:online:10:2:4.88TB:1024:4.28TB:15.00GB:15.00GB:15.00GB:0:80:auto:balanced:no:0
.00MB:0.00MB:0.00MB:3:Site1_Pool:0:0.00MB:parent:no:none:1:site1
4:CompressedV7000:online:0:0:0:256:0:0.00MB:0.00MB:0.00MB:0:0:auto:balanced:no:0.00MB:0.00M
B:0.00MB:4:CompressedV7000:0:0.00MB:parent::none::
5:ITSO_Pool1_renamed:online:2:0:2.00GB:256:2.00GB:0.00MB:0.00MB:0.00MB:0:0:auto:balanced:no
:0.00MB:0.00MB:0.00MB:5:ITSO_Pool1_renamed:0:0.00MB:parent:no:none::

This command renames the storage pool ITSO_Pool1 to ITSO_Pool1_renamed, as shown in
the lsmdiskgrp output above.

Changing the storage pool: The chmdiskgrp command specifies the new name first. You
can use letters A - Z, a - z, numbers 0 - 9, the dash (-), and the underscore (_). The new
name can be 1 - 63 characters. However, the new name cannot start with a number, dash,
or the word “mdiskgrp” because this prefix is reserved for SVC assignment only.

10.3.14 Deleting a storage pool


Use the rmmdiskgrp command to remove a storage pool from the SVC system configuration,
as shown in Example 10-29.

Example 10-29 rmmdiskgrp


IBM_2145:ITSO_SVC_DH8:superuser>rmmdiskgrp STGPool_DS3500-2_new

This command removes storage pool STGPool_DS3500-2_new from the SVC system
configuration.


Removing a storage pool from the SVC system configuration: If there are MDisks
within the storage pool, you must use the -force flag to remove the storage pool from the
SVC system configuration, as shown in the following example:
rmmdiskgrp STGPool_DS3500-2_new -force

Confirm that you want to use this flag because it destroys all mapping information and data
that is held on the volumes. The mapping information and data cannot be recovered.

10.3.15 Removing MDisks from a storage pool


Use the rmmdisk command to remove an MDisk from a storage pool, as shown in
Example 10-30.

Example 10-30 rmmdisk command


IBM_2145:ITSO_SVC_DH8:superuser>rmmdisk -mdisk 8 -force 2

This command removes the MDisk with ID 8 from the storage pool with ID 2. The -force flag
is set because volumes are using this storage pool.

Sufficient space: The removal takes place only if there is sufficient space to migrate the
volume data to other extents on MDisks that remain in the same storage pool. If sufficient
space is not available, the command fails and an error message (CMMVC5860E The action
failed because there were not enough extents in the managed disk group.) is displayed.
After you remove the MDisk from the storage pool, changing the mode from managed to
unmanaged takes time, depending on the size of the MDisk that you are removing.
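Before you remove an MDisk from a pool that contains volumes, you can check the
free_capacity value in the lsmdiskgrp output to judge whether the remaining MDisks can
absorb the migrated extents; a sketch with an illustrative pool name:
IBM_2145:ITSO_SVC_DH8:superuser>lsmdiskgrp Site1_Pool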

10.4 Working with hosts


In this section, we explain the tasks that you can perform at a host level. When we create a
host in our SVC system, we must define the connection method. Starting with SVC 5.1, we
define our host as iSCSI-attached or FC-attached.

10.4.1 Creating an FC-attached host


In the following sections, we describe how to create an FC-attached host under various
circumstances.

Host is powered on, connected, and zoned to the SAN Volume Controller
When you create your host on the SVC, it is a preferred practice to check whether the host
bus adapter (HBA) worldwide port names (WWPNs) of the server are visible to the SVC. By
checking, you ensure that zoning is done and that the correct WWPN is used. Run the
lshbaportcandidate command, as shown in Example 10-31.


Example 10-31 lshbaportcandidate command


IBM_2145:ITSO_SVC_DH8:superuser>lshbaportcandidate
id
210000E08B89C1CD
210000E08B054CAA

After you know the WWPNs that are displayed, match your host (use host or SAN switch
utilities to verify) and use the mkhost command to create your host.

Name: If you do not provide the -name parameter, the SVC automatically generates the
name hostx (where x is the ID sequence number that is assigned by the SVC internally).

You can use the letters A - Z and a - z, the numbers 0 - 9, the dash (-), and the underscore
(_). The name can be 1 - 63 characters. However, the name cannot start with a number,
dash, or the word “host” because this prefix is reserved for SVC assignment only.

The command to create a host is shown in Example 10-32.

Example 10-32 mkhost command


IBM_2145:ITSO_SVC_DH8:superuser>mkhost -name Almaden -hbawwpn
210000E08B89C1CD:210000E08B054CAA
Host, id [2], successfully created

This command creates a host that is called Almaden that uses WWPN
21:00:00:E0:8B:89:C1:CD and 21:00:00:E0:8B:05:4C:AA.

Ports: You can define 1 - 8 ports per host, or you can use the addhostport command to
add more ports later, as shown in 10.4.5, “Adding ports to a defined host” on
page 592.

Host is not powered on or not connected to the SAN


If you want to create a host on the SVC without seeing your target WWPN by using the
lshbaportcandidate command, add the -force flag to your mkhost command, as shown in
Example 10-33. This option is more open to human error than if you choose the WWPN from
a list, but it is often used when many host definitions are created at the same time, such as
through a script.

In this case, you can enter the WWPN of your HBA or HBAs and use the -force flag to create
the host, regardless of whether they are connected, as shown in Example 10-33.

Example 10-33 mkhost -force command


IBM_2145:ITSO_SVC_DH8:superuser>mkhost -name Almaden -hbawwpn
210000E08B89C1CD:210000E08B054CAA -force
Host, id [2], successfully created

This command forces the creation of a host that is called Almaden that uses WWPN
210000E08B89C1CD:210000E08B054CAA.

WWPNs: WWPNs are not case sensitive in the CLI.


10.4.2 Creating an iSCSI-attached host


Now, we can create host definitions to a host that is not connected to the SAN but has LAN
access to our SVC nodes. Before we create the host definition, we configure our SVC
systems to use the new iSCSI connection method. For more information about configuring
your nodes to use iSCSI, see 10.8.3, “iSCSI configuration” on page 626.

The iSCSI functionality allows the host to access volumes through the SVC without being
attached to the SAN. Back-end storage and node-to-node communication still need the FC
network to communicate, but the host does not necessarily need to be connected to the SAN.

When we create a host that uses iSCSI as a communication method, iSCSI initiator software
must be installed on the host to initiate the communication between the SVC and the host.
This installation creates an iSCSI qualified name (IQN) identifier that is needed before we
create our host.

Before we start, we check our server’s IQN address (we are running Windows Server 2008 in
the example shown below). We select Start → Programs → Administrative tools, and we
select iSCSI initiator. The IQN in our example is
iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com, as shown in Figure 10-1.

Figure 10-1 IQN from the iSCSI initiator tool

We create the host by issuing the mkhost command, as shown in Example 10-34. When the
command completes successfully, we display our created host.

Example 10-34 mkhost command

IBM_2145:ITSO_SVC_DH8:superuser>mkhost -name Baldur -iogrp 0 -iscsiname
iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com
Host, id [4], successfully created


IBM_2145:ITSO_SVC_DH8:superuser>lshost 4
id 4
name Baldur
port_count 1
type generic
mask 1111111111111111111111111111111111111111111111111111111111111111
iogrp_count 1
status offline
site_id
site_name
iscsi_name iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com
node_logged_in_count 0
state offline

Important: When the host is initially configured, the default authentication method is set to
no authentication and no Challenge Handshake Authentication Protocol (CHAP) secret is
set. To set a CHAP secret for authenticating the iSCSI host with the SVC system, use the
chhost command with the chapsecret parameter. If you must display a CHAP secret for a
defined server, use the lsiscsiauth command. The lsiscsiauth command lists the
CHAP secret that is configured for authenticating an entity to the SVC system.
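As a sketch, the following commands set a CHAP secret for the host that we created and then
list the configured iSCSI authentication; the secret value is illustrative:
IBM_2145:ITSO_SVC_DH8:superuser>chhost -chapsecret passw0rd Baldur
IBM_2145:ITSO_SVC_DH8:superuser>lsiscsiauth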

We now created our host definition. We map a volume to our new iSCSI server, as shown in
Example 10-35. We created the volume, as described in 10.6.1, “Creating a volume” on
page 595. In our scenario, our volume’s ID is 21 and the host name is Baldur. We map it to
our iSCSI host.

Example 10-35 Mapping a volume to the iSCSI host


IBM_2145:ITSO_SVC_DH8:superuser>mkvdiskhostmap -host Baldur 21
Virtual Disk to Host map, id [0], successfully created

Tip: FC hosts and iSCSI hosts are handled in the same way operationally after they are
created.

10.4.3 Modifying a host


Use the chhost command to change the name of a host. To verify the change, run the lshost
command. Example 10-36 shows both of these commands.

Example 10-36 chhost command


IBM_2145:ITSO_SVC_DH8:superuser>chhost -name Angola Guinea

IBM_2145:ITSO_SVC_DH8:superuser>lshost
id name port_count iogrp_count status site_id site_name
0 Palau 2 4 online
1 Nile 2 1 online
2 Kanaga 2 1 online
3 Siam 2 2 online
4 Angola 1 4 online

This command renamed the host from Guinea to Angola.


Host name: The chhost command specifies the new name first. You can use letters A - Z
and a - z, numbers 0 - 9, the dash (-), and the underscore (_). The new name can be 1 - 63
characters. However, it cannot start with a number, dash, or the word “host” because this
prefix is reserved for SVC assignment only.

Hosts that require the -type parameter: If you use Hewlett-Packard UNIX (HP-UX), you
use the -type option. For more information about the hosts that require the -type
parameter, see IBM System Storage Open Software Family SAN Volume Controller: Host
Attachment Guide, SC26-7563.

10.4.4 Deleting a host


Use the rmhost command to delete a host from the SVC configuration. If your host is still
mapped to volumes and you use the -force flag, the host and all the mappings with it are
deleted. The volumes are not deleted; only the mappings to them are deleted.

The command that is shown in Example 10-37 deletes the host that is called Angola from the
SVC configuration.

Example 10-37 rmhost Angola


IBM_2145:ITSO_SVC_DH8:superuser>rmhost Angola

Deleting a host: If any volumes are assigned to the host, you must use the -force flag, for
example, rmhost -force Angola.

10.4.5 Adding ports to a defined host


If you add an HBA or a network interface controller (NIC) to a server that is defined within the
SVC, you can use the addhostport command to add the new port definitions to your host
configuration.

If your host is connected through SAN with FC and if the WWPN is zoned to the SVC system,
issue the lshbaportcandidate command to compare with the information that you have from
the server administrator, as shown in Example 10-38.

Example 10-38 lshbaportcandidate command


IBM_2145:ITSO_SVC_DH8:superuser>lshbaportcandidate
id
210000E08B054CAA

Use host or SAN switch utilities to verify whether the WWPN matches your information. If the
WWPN matches your information, use the addhostport command to add the port to the host,
as shown in Example 10-39.

Example 10-39 addhostport command


IBM_2145:ITSO_SVC_DH8:superuser>addhostport -hbawwpn 210000E08B054CAA Palau

This command adds the WWPN of 210000E08B054CAA to the Palau host.


Adding multiple ports: You can add multiple ports at one time by using a colon (:) as the
separator between WWPNs, as shown in the following example:
addhostport -hbawwpn 210000E08B054CAA:210000E08B89C1CD Palau

If the new HBA is not connected or zoned, the lshbaportcandidate command does not
display your WWPN. In this case, you can manually enter the WWPN of your HBA or HBAs
and use the -force flag to create the host, as shown in Example 10-40.

Example 10-40 addhostport command


IBM_2145:ITSO_SVC_DH8:superuser>addhostport -hbawwpn 210000E08B054CAA -force Palau

This command forces the addition of the WWPN that is named 210000E08B054CAA to the host
called Palau.

WWPNs: WWPNs are not case sensitive within the CLI.

If you run the lshost command again, you can see your host with an updated port count of 3,
as shown in Example 10-41.

Example 10-41 lshost command: Port count


IBM_2145:ITSO_SVC_DH8:superuser>lshost
id name port_count iogrp_count status site_id site_name
0 Palau 3 4 online
1 Nile 2 1 online
2 Kanaga 2 1 online
3 Siam 2 2 online

If your host uses iSCSI as a connection method, you must have the new iSCSI IQN ID before
you add the port. Unlike FC-attached hosts, you cannot check for available candidates with
iSCSI.

After you acquire the other iSCSI IQN, use the addhostport command, as shown in
Example 10-42.

Example 10-42 Adding an iSCSI port to an already configured host


IBM_2145:ITSO_SVC_DH8:superuser>addhostport -iscsiname iqn.1991-05.com.microsoft:baldur 4

10.4.6 Deleting ports


If you make a mistake when you are adding a port or if you remove an HBA from a server that
is defined within the SVC, you can use the rmhostport command to remove WWPN
definitions from an existing host.

Before you remove the WWPN, ensure that it is the correct WWPN by issuing the lshost
command, as shown in Example 10-43.

Example 10-43 lshost command


IBM_2145:ITSO_SVC_DH8:superuser>lshost Palau
id 0
name Palau
port_count 3
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B054CAA
node_logged_in_count 2
state active
WWPN 210000E08B89C1CD
node_logged_in_count 2
state offline

When you know the WWPN or iSCSI IQN, use the rmhostport command to delete a host
port, as shown in Example 10-44.

Example 10-44 rmhostport command


Use this command to remove the WWPN:
IBM_2145:ITSO_SVC_DH8:superuser>rmhostport -hbawwpn 210000E08B89C1CD Palau

Use this command to remove the iSCSI IQN:
IBM_2145:ITSO_SVC_DH8:superuser>rmhostport -iscsiname iqn.1991-05.com.microsoft:baldur Baldur

This command removes the WWPN of 210000E08B89C1CD from the Palau host and the iSCSI
IQN iqn.1991-05.com.microsoft:baldur from the Baldur host.

Removing multiple ports: You can remove multiple ports at one time by using a colon (:) as
the separator between the port names, as shown in the following example:
rmhostport -hbawwpn 210000E08B054CAA:210000E08B892BCD Angola

10.5 Working with the Ethernet port for iSCSI


In this section, we describe the commands that are used for setting, changing, and displaying
the SVC Ethernet port for iSCSI configuration.

Example 10-45 shows the lsportip command that lists the iSCSI IP addresses that are
assigned for each port on each node in the system.

Example 10-45 The lsportip command


IBM_2145:ITSO_SVC_DH8:superuser>lsportip -delim " "
id node_id node_name IP_address mask gateway IP_address_6 prefix_6 gateway_6 MAC duplex
state speed failover link_state host remote_copy host_6 remote_copy_6 remote_copy_status
remote_copy_status_6 vlan vlan_6 adapter_location adapter_port_id lossless_iscsi lossless_iscsi6
1 5 node1 40:f2:e9:2f:70:92 Full unconfigured 1Gb/s no active 0 0 0 1
1 5 node1 40:f2:e9:2f:70:92 Full unconfigured 1Gb/s yes active 0 0 0 1
2 5 node1 10.18.228.144 255.255.255.0 10.18.228.1 40:f2:e9:2f:70:93 configured no
inactive yes 0 0 0 2 off
2 5 node1 40:f2:e9:2f:70:93 configured yes inactive 0 0 0 2
3 5 node1 40:f2:e9:2f:70:94 unconfigured no inactive 0 0 0 3
3 5 node1 40:f2:e9:2f:70:94 unconfigured yes inactive 0 0 0 3
1 4 node2 40:f2:e9:2e:c0:02 Full unconfigured 1Gb/s no active 0 0 0 1
1 4 node2 40:f2:e9:2e:c0:02 Full unconfigured 1Gb/s yes active 0 0 0 1
2 4 node2 10.18.228.143 255.255.255.0 10.18.228.1 40:f2:e9:2e:c0:03 configured no
inactive yes 0 0 0 2 off
2 4 node2 40:f2:e9:2e:c0:03 configured yes inactive 0 0 0 2
3 4 node2 40:f2:e9:2e:c0:04 unconfigured no inactive 0 0 0 3
3 4 node2 40:f2:e9:2e:c0:04 unconfigured yes inactive 0 0 0 3

Example 10-46 shows how the cfgportip command assigns an IP address to each node
Ethernet port for iSCSI I/O.

Example 10-46 The cfgportip command


svctask cfgportip -node 5 -ip 10.18.228.144 -mask 255.255.255.0 -gw 10.18.228.1 2
svctask cfgportip -node 4 -ip 10.18.228.143 -mask 255.255.255.0 -gw 10.18.228.1 2

10.6 Working with volumes


In this section, we describe the various configuration and administrative tasks that can be
performed on volumes within the SVC environment.

10.6.1 Creating a volume


The mkvdisk command creates sequential, striped, or image mode volume objects. When
they are mapped to a host object, these objects are seen as disk drives with which the host
can perform I/O operations.

When a volume is created, you must enter several parameters at the CLI. Mandatory and
optional parameters are available.

For more information, see Command-Line Interface User’s Guide, SC27-2287.

Creating an image mode disk: If you do not specify the -size parameter when you create
an image mode disk, the entire MDisk capacity is used.

When you are ready to create a volume, you must know the following information before you
start to create the volume:
򐂰 In which storage pool the volume has its extents
򐂰 From which I/O Group the volume is accessed
򐂰 Which SVC node is the preferred node for the volume
򐂰 Size of the volume
򐂰 Name of the volume
򐂰 Type of the volume
򐂰 Whether this volume is managed by Easy Tier to optimize its performance


When you are ready to create your striped volume, use the mkvdisk command. In
Example 10-47, this command creates a 10 GB striped volume within the storage pool
Site1_Pool and assigns it to the io_grp0 I/O Group. Its preferred node is node 4.

Example 10-47 mkvdisk command


IBM_2145:ITSO_SVC_DH8:superuser>mkvdisk -mdiskgrp Site1_Pool -iogrp io_grp0 -node 4 -size
10 -unit gb -name Tiger
Virtual Disk, id [6], successfully created

To verify the results, use the lsvdisk command, as shown in Example 10-48.

Example 10-48 lsvdisk command


IBM_2145:ITSO_SVC_DH8:superuser>lsvdisk 6
id 6
name Tiger
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 3
mdisk_grp_name Site1_Pool
capacity 10.00GB
type striped
formatted no
formatting yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801FE80840800000000000020
throttling 0
preferred_node_id 4
fast_write_state not_empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0
filesystem
mirror_write_priority latency
RC_change no
compressed_copy_count 0
access_IO_group_count 1
last_access_time
parent_mdisk_grp_id 3
parent_mdisk_grp_name Site1_Pool
owner_type none
owner_id
owner_name
encrypt no
volume_id 6
volume_name Tiger
function

copy_id 0
status online
sync yes
auto_delete no
primary yes
mdisk_grp_id 3
mdisk_grp_name Site1_Pool
type striped
mdisk_id
mdisk_name
fast_write_state not_empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status balanced
tier ssd
tier_capacity 0.00MB
tier enterprise
tier_capacity 10.00GB
tier nearline
tier_capacity 0.00MB
compressed_copy no
uncompressed_used_capacity 10.00GB
parent_mdisk_grp_id 3
parent_mdisk_grp_name Site1_Pool
encrypt no

The required tasks to create a volume are complete.

10.6.2 Volume information


Use the lsvdisk command to display summary information about all volumes that are
defined within the SVC environment. To display more detailed information about a specific
volume, run the command again and append the volume name parameter or the volume ID.

Example 10-49 shows both of these commands.

Example 10-49 lsvdisk command


IBM_2145:ITSO_SVC_DH8:superuser>lsvdisk -delim " "
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id
FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state se_copy_count
RC_change compressed_copy_count parent_mdisk_grp_id parent_mdisk_grp_name formatting
encrypt volume_id volume_name function
0 SQL_Data0 0 io_grp0 online 3 Site1_Pool 300.00GB striped
6005076801FE80840800000000000000 0 1 empty 0 no 0 3 Site1_Pool no no 0 SQL_Data0
1 vdisk0 0 io_grp0 online 0 Site2_Pool 2.00TB striped 6005076801FE8084080000000000001B
0 1 empty 1 no 0 0 Site2_Pool no no 1 vdisk0
2 Stretched_test 0 io_grp0 online many many 10.00GB many
6005076801FE8084080000000000001C 0 2 empty 0 no 0 many many no no 2 Stretched_test
3 test_basic 0 io_grp0 online many many 5.00GB many 6005076801FE8084080000000000001D 0
2 empty 0 no 0 many many no no 3 test_basic
4 SQL_Data4 0 io_grp0 online 3 Site1_Pool 300.00GB striped
6005076801FE80840800000000000004 0 1 empty 0 no 0 3 Site1_Pool no no 4 SQL_Data4
5 VM_Datastore0 0 io_grp0 online 0 Site2_Pool 250.00GB striped
6005076801FE80840800000000000005 0 1 empty 0 no 0 0 Site2_Pool no no 5 VM_Datastore0
9 VM_Datastore4 0 io_grp0 online 0 Site2_Pool 250.00GB striped
6005076801FE80840800000000000009 0 1 empty 0 no 0 0 Site2_Pool no no 9 VM_Datastore4
15 child5 0 io_grp0 online 0 Site2_Pool 10.00GB striped
6005076801FE80840800000000000010 0 1 empty 0 no 0 0 Site2_Pool no no 15 child5

10.6.3 Creating a thin-provisioned volume


Example 10-50 shows how to create a thin-provisioned volume. In addition to the normal
parameters, you must use the following parameters:
򐂰 -rsize: This parameter makes the volume a thin-provisioned volume; otherwise, the
volume is fully allocated.
򐂰 -autoexpand: This parameter specifies that thin-provisioned volume copies automatically
expand their real capacities by allocating new extents from their storage pool.
򐂰 -grainsize: This parameter sets the grain size (in KB) for a thin-provisioned volume.

Example 10-50 Usage of the command mkvdisk


IBM_2145:ITSO_SVC_DH8:superuser>mkvdisk -mdiskgrp Site1_Pool -iogrp 0 -vtype striped -size
10 -unit gb -rsize 50% -autoexpand -grainsize 256
Virtual Disk, id [21], successfully created

This command creates a space-efficient 10 GB volume. The volume belongs to the storage
pool that is named Site1_Pool and is owned by I/O Group io_grp0. The real capacity
automatically expands until the volume size of 10 GB is reached. The grain size is set to 256
KB, which is the default.

Disk size: When the -rsize parameter is used, you have the following options: disk_size,
disk_size_percentage, and auto.

Specify the disk_size_percentage value by using an integer, or an integer that is
immediately followed by the percent (%) symbol.

Specify the units for a disk_size integer by using the -unit parameter; the default is MB.
The -rsize value can be greater than, equal to, or less than the size of the volume.

The auto option creates a volume copy that uses the entire size of the MDisk. If you specify
the -rsize auto option, you must also specify the -vtype image option.

An entry of 1 GB uses 1,024 MB.

10.6.4 Creating a volume in image mode


This virtualization type allows an image mode volume to be created when an MDisk has data
on it, perhaps from a pre-virtualized subsystem. When an image mode volume is created, it
directly corresponds to the previously unmanaged MDisk from which it was created.
Therefore, except for a thin-provisioned image mode volume, the volume’s logical block
address (LBA) x equals MDisk LBA x.

You can use this command to bring a non-virtualized disk under the control of the clustered
system. After it is under the control of the clustered system, you can migrate the volume from
the single managed disk.


When the first MDisk extent is migrated, the volume is no longer an image mode volume. You
can add an image mode volume to an already populated storage pool with other types of
volumes, such as striped or sequential volumes.
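As a sketch, a migration out of image mode can be started with the migratevdisk command;
the placeholders stand for the image mode volume and the target storage pool:
IBM_2145:ITSO_SVC_DH8:superuser>migratevdisk -vdisk <image_mode_volume> -mdiskgrp <target_storage_pool>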

Size: An image mode volume must be at least 512 bytes (the capacity cannot be 0). That
is, the minimum size that can be specified for an image mode volume must be the same as
the storage pool extent size to which it is added, with a minimum of 16 MiB.

You must use the -mdisk parameter to specify an MDisk that has a mode of unmanaged. The
-fmtdisk parameter cannot be used to create an image mode volume.

Capacity: If you create a mirrored volume from two image mode MDisks without specifying
a -capacity value, the capacity of the resulting volume is the smaller of the two MDisks
and the remaining space on the larger MDisk is inaccessible.

If you do not specify the -size parameter when you create an image mode disk, the entire
MDisk capacity is used.

Use the mkvdisk command to create an image mode volume, as shown in Example 10-51.

Example 10-51 mkvdisk (image mode) command


IBM_2145:ITSO_SVC_DH8:superuser>mkvdisk -mdiskgrp ITSO_Pool1 -iogrp 0 -mdisk mdisk25 -vtype
image -name Image_Volume_A
Virtual Disk, id [6], successfully created

This command creates an image mode volume that is called Image_Volume_A that uses the
mdisk25 MDisk. The volume belongs to the storage pool ITSO_Pool1 and the volume is
owned by the io_grp0 I/O Group.

If we run the lsvdisk command again, the volume that is named Image_Volume_A shows a
type of image, as shown in Example 10-52.

Example 10-52 lsvdisk command


IBM_2145:ITSO_SVC_DH8:superuser>lsvdisk -filtervalue type=image
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity
type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count
fast_write_state se_copy_count RC_change compressed_copy_count parent_mdisk_grp_id
parent_mdisk_grp_name formatting encrypt volume_id volume_name function
6 Image_Volume_A 0 io_grp0 online 5 ITSO_Pool1 1.00GB
image 6005076801FE80840800000000000021 0 1
empty 0 no 0 5
ITSO_Pool1 no no 6 Image_Volume_A

10.6.5 Adding a mirrored volume copy


You can create a mirrored copy of a volume, which keeps a volume accessible even when the
MDisk on which it depends becomes unavailable. You can create a copy of a volume in a
separate storage pool or create an image mode copy of the volume. Copies increase
the availability of data; however, they are not separate objects. You can create or change
mirrored copies from the volume only.

In addition, you can use volume mirroring as an alternative method of migrating volumes
between storage pools.


For example, if you have a non-mirrored volume in one storage pool and want to migrate that
volume to another storage pool, you can add a copy of the volume and specify the second
storage pool. After the copies are synchronized, you can delete the copy on the first storage
pool. The volume is copied to the second storage pool while remaining online during the copy.

To create a mirrored copy of a volume, use the addvdiskcopy command. This command adds
a copy of the chosen volume to the selected storage pool, which changes a non-mirrored
volume into a mirrored volume.

In the following scenario, we show how to mirror a volume from one storage pool to
another storage pool.

As you can see in Example 10-53, the volume has a copy with copy_id 0.

Example 10-53 lsvdisk command


IBM_2145:ITSO_SVC_DH8:superuser>lsvdisk Volume_no_mirror
id 23
name Volume_no_mirror
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
capacity 1.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AF813F1000000000000019
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0
filesystem
mirror_write_priority latency

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB
free_capacity 0.00MB
overallocation 100


autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity
1.00GB

In Example 10-54, we add the volume copy mirror by using the addvdiskcopy command.

Example 10-54 addvdiskcopy command


IBM_2145:ITSO_SVC_DH8:superuser>addvdiskcopy -mdiskgrp STGPool_DS5000-1 -vtype striped
-unit gb Volume_no_mirror
Vdisk [23] copy [1] successfully created

During the synchronization process, you can see the status by using the
lsvdisksyncprogress command. As shown in Example 10-55, the first time that the status is
checked, the synchronization progress is at 48%, and the estimated completion time is
151026203918. The estimated completion time is displayed in YYMMDDHHMMSS format; in
our example, it is 26 October 2015 at 20:39:18. The second time that the command is run, the
progress is at 100%, and the synchronization is complete.

Example 10-55 Synchronization


IBM_2145:ITSO_SVC_DH8:superuser>lsvdisksyncprogress
vdisk_id vdisk_name copy_id progress estimated_completion_time
23 Volume_no_mirror 1 48 151026203918
IBM_2145:ITSO_SVC_DH8:superuser>lsvdisksyncprogress
vdisk_id vdisk_name copy_id progress estimated_completion_time
23 Volume_no_mirror 1 100

As you can see in Example 10-56, the new mirrored volume copy (copy_id 1) was added and
can be seen by using the lsvdisk command.

Example 10-56 lsvdisk command


IBM_2145:ITSO_SVC_DH8:superuser>lsvdisk 23
id 23
name Volume_no_mirror
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id many
mdisk_grp_name many
capacity 1.00GB
type many
formatted no
mdisk_id many
mdisk_name many
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AF813F1000000000000019


throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 2
se_copy_count 0
filesystem
mirror_write_priority latency

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 1.00GB

copy_id 1
status online
sync yes
primary no
mdisk_grp_id 2
mdisk_grp_name STGPool_DS5000-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd


tier_capacity
1.00GB

When you add a volume copy mirror, you can define it with parameters that differ from those
of the original volume copy. For example, you can add a thin-provisioned copy to a fully
allocated volume, or vice versa, which is one way to migrate a non-thin-provisioned
volume to a thin-provisioned volume.

Volume copy mirror parameters: To change the parameters of a volume copy mirror, you
must delete the volume copy and redefine it with the new values.

Now, we can change the name of the volume that was mirrored from Volume_no_mirror to
Volume_mirrored, as shown in Example 10-57.

Example 10-57 Volume name changes


IBM_2145:ITSO_SVC_DH8:superuser>chvdisk -name Volume_mirrored Volume_no_mirror

10.6.6 Adding a compressed volume copy


Use the addvdiskcopy command to add a compressed copy to an existing volume.

We show the usage of addvdiskcopy with the -autodelete flag set. The -autodelete flag
specifies that the primary copy is deleted after the secondary copy is synchronized.

Example 10-58 shows a shortened lsvdisk output of an uncompressed volume.

Example 10-58 uncompressed volume


IBM_Storwize:ITSO_Gen2_SiteB:superuser>lsvdisk 0
id 0
name Uncompressed_Volume
IO_group_id 0
IO_group_name io_grp0
status online
..
compressed_copy_count 0
..

copy_id 0
status online
sync yes
auto_delete no
primary yes
..
compressed_copy no
uncompressed_used_capacity 2.00GB
..

In Example 10-59 we will add a compressed copy with the -autodelete flag set.

Example 10-59 Compressed copy


IBM_Storwize:ITSO_Gen2_SiteB:superuser>addvdiskcopy -autodelete -rsize 2 -mdiskgrp 0
-compressed 0
Vdisk [0] copy [1] successfully created


Example 10-60 shows the lsvdisk output with the additional compressed volume copy (copy 1)
and volume copy 0 set to auto_delete yes.

Example 10-60 lsvdisk output


IBM_Storwize:ITSO_Gen2_SiteB:superuser>lsvdisk 0
id 0
name Uncompressed_Volume
..
copy_count 2
..
compressed_copy_count 1
..

copy_id 0
status online
sync yes
auto_delete yes
primary yes
..
compressed_copy no
uncompressed_used_capacity 2.00GB
..

copy_id 1
status online
sync no
auto_delete no
primary no
..
compressed_copy yes
uncompressed_used_capacity 0.00MB
..

Once copy 1 is synchronized, copy 0 will be deleted. You can monitor the progress of volume
copy synchronization by using lsvdisksyncprogress.

Note: Consider the compression best practices before adding the first compressed copy to
a system.

10.6.7 Splitting a mirrored volume


The splitvdiskcopy command creates a volume in the specified I/O Group from a copy of the
specified volume. If the copy that you are splitting is not synchronized, you must use the
-force parameter. The command fails if you are attempting to remove the only synchronized
copy. To avoid this failure, wait for the copy to synchronize or split the unsynchronized copy
from the volume by using the -force parameter. You can run the command when either
volume copy is offline.

Example 10-61 shows the splitvdiskcopy command, which is used to split a mirrored
volume. It creates a volume that is named Volume_new from the volume that is named
Volume_mirrored.

Example 10-61 Split volume


IBM_2145:ITSO_SVC_DH8:superuser>splitvdiskcopy -copy 1 -iogrp 0 -name Volume_new
Volume_mirrored


Virtual Disk, id [24], successfully created

As you can see in Example 10-62 on page 605, the new volume that is named Volume_new
was created as an independent volume.

Example 10-62 lsvdisk command


IBM_2145:ITSO_SVC_DH8:superuser>lsvdisk Volume_new
id 24
name Volume_new
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 2
mdisk_grp_name STGPool_DS5000-1
capacity 1.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AF813F100000000000001A
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0
filesystem
mirror_write_priority latency

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 2
mdisk_grp_name STGPool_DS5000-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd


tier_capacity
1.00GB

After the command that is shown in Example 10-61 on page 604 is issued, Volume_mirrored no
longer has a mirrored copy, and the split copy automatically becomes a new, independent volume.

10.6.8 Modifying a volume


Running the chvdisk command modifies a single property of a volume. Only one property
can be modified at a time. Therefore, changing the name and modifying the I/O Group require
two invocations of the command.

You can specify a new name or label. The new name can be used to reference the volume.
The I/O Group with which this volume is associated can be changed. Changing the I/O Group
with which this volume is associated requires a flush of the cache within the nodes in the
current I/O Group to ensure that all data is written to disk. I/O must be suspended at the host
level before you perform this operation.

Tips: If the volume has a mapping to any hosts, it is impossible to move the volume to an
I/O Group that does not include any of those hosts.

This operation fails if insufficient space exists to allocate bitmaps for a mirrored volume in
the target I/O Group.

If the -force parameter is used and the system is unable to destage all write data from the
cache, the contents of the volume are corrupted by the loss of the cached data.

If the -force parameter is used to move a volume that has out-of-sync copies, a full
resynchronization is required.

10.6.9 I/O governing


You can set a limit on the number of I/O operations that are accepted for a volume. The limit is
set in terms of I/Os per second or MB per second. By default, no I/O governing rate is set
when a volume is created.

Base the choice between an I/O and an MB governing throttle on the disk access profile
of the application. Database applications generally issue large amounts of I/O, but they
transfer only a relatively small amount of data. In this case, setting an I/O governing throttle
that is based on MB per second does not achieve much. It is better to use an I/Os per second
throttle.

At the other extreme, a streaming video application generally issues a small amount of I/O,
but it transfers large amounts of data. In contrast to the database example, setting an I/O
governing throttle that is based on I/Os per second does not achieve much, so it is better to
use an MB per second throttle.
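
For an I/Os per second throttle, specify the -rate value without the -unitmb parameter; for an
MB per second throttle, add -unitmb, as shown later in Example 10-63. The following command
is a minimal sketch that assumes the volume_7 volume from Example 10-63 and an arbitrary
limit of 4000 I/Os per second:

IBM_2145:ITSO_SVC_DH8:superuser>chvdisk -rate 4000 volume_7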

I/O governing rate: An I/O governing rate of 0 (displayed as throttling in the CLI output of
the lsvdisk command) does not mean that zero I/Os per second (or MB per second) can
be achieved. It means that no throttle is set.

An example of the chvdisk command is shown in Example 10-63 on page 607.


Example 10-63 chvdisk command


IBM_2145:ITSO_SVC_DH8:superuser>chvdisk -rate 20 -unitmb volume_7
IBM_2145:ITSO_SVC_DH8:superuser>chvdisk -warning 85% volume_7

New name first: The chvdisk command specifies the new name first. The name can
consist of letters A - Z and a - z, numbers 0 - 9, the dash (-), and the underscore (_). It can
be 1 - 63 characters. However, it cannot start with a number, dash, or the word “vdisk”
because this prefix is reserved for SVC assignment only.

The first command changes the volume throttling of volume_7 to 20 MBps. The second
command changes the thin-provisioned volume warning to 85%. To verify the changes, issue
the lsvdisk command, as shown in Example 10-64.

Example 10-64 lsvdisk command: Verifying throttling


IBM_2145:ITSO_SVC_DH8:superuser>lsvdisk volume_7
id 1
name volume_7
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
capacity 10.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AF813F100000000000001F
virtual_disk_throttling (MB) 20
preferred_node_id 2
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 1
filesystem
mirror_write_priority latency

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 2.02GB
free_capacity 2.02GB
overallocation 496


autoexpand on
warning 85
grainsize 32
se_copy yes
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity
2.02GB

10.6.10 Deleting a volume


When the rmvdisk command is run on an existing fully managed mode volume, any data that
remained on it is lost. The extents that made up this volume are returned to the pool of free
extents that are available in the storage pool.

If any remote copy, FlashCopy, or host mappings still exist for this volume, the delete fails
unless the -force flag is specified. This flag ensures the deletion of the volume and any
volume to host mappings and copy mappings.

If the volume is the subject of a “migrate to image mode” process, the delete fails unless the
-force flag is specified. This flag halts the migration and then deletes the volume.

If the command succeeds (without the -force flag) for an image mode volume, the underlying
back-end controller logical unit is consistent with the data that a host might previously read
from the image mode volume. That is, all fast write data was flushed to the underlying LUN. If
the -force flag is used, consistency is not guaranteed.

If any non-destaged data exists in the fast write cache for this volume, the deletion of the
volume fails unless the -force flag is specified. Now, any non-destaged data in the fast write
cache is deleted.

Use the rmvdisk command to delete a volume from your SVC configuration, as shown in
Example 10-65.

Example 10-65 rmvdisk command


IBM_2145:ITSO_SVC_DH8:superuser>rmvdisk volume_A

This command deletes the volume_A volume from the SVC configuration. If the volume is
assigned to a host, you must use the -force flag to delete the volume, as shown in
Example 10-66.

Example 10-66 rmvdisk -force command


IBM_2145:ITSO_SVC_DH8:superuser>rmvdisk -force volume_A

10.6.11 Using volume protection


To prevent active volumes or host mappings from being deleted inadvertently, the system
supports a global setting that prevents these objects from being deleted if the system detects
that they have had recent I/O activity.


Use the chsystem command to set the interval for which a volume must be idle before it can
be deleted from the system. Any command that is affected by this setting fails if the volume
has not been idle for the specified interval, even when the -force parameter is used.

The following commands are affected by this setting:


• rmvdisk
• rmvolume
• rmvdiskcopy
• rmvdiskhostmap
• rmmdiskgrp
• rmhostiogrp
• rmhost
• rmhostport

To enable volume protection by setting an inactive interval that prevents the deletion of
volumes, issue svctask chsystem -vdiskprotectionenabled yes -vdiskprotectiontime 60.
The -vdiskprotectionenabled yes parameter enables volume protection, and the
-vdiskprotectiontime parameter indicates how long (in minutes) a volume must be inactive
before it can be deleted. In this example, volumes can be deleted only if they have been
inactive for over 60 minutes.

To disable volume protection, complete the following step:

Issue svctask chsystem -vdiskprotectionenabled no.
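
To confirm the current volume protection settings, you can check the vdisk_protection_time
and vdisk_protection_enabled fields in the lssystem output. The following abbreviated output
is a sketch that reflects the values that were set in the previous example:

IBM_2145:ITSO_SVC_DH8:superuser>lssystem
...
vdisk_protection_time 60
vdisk_protection_enabled yes
...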

10.6.12 Expanding a volume


Expanding a volume presents a larger capacity disk to your operating system. Although this
expansion can be easily performed by using the SVC, you must ensure that your operating
systems support expansion before this function is used.

Assuming that your operating systems support expansion, you can use the expandvdisksize
command to increase the capacity of a volume, as shown in Example 10-67.

Example 10-67 expandvdisksize command


IBM_2145:ITSO_SVC_DH8:superuser>expandvdisksize -size 5 -unit gb volume_C

This command expands the volume_C volume (which was 35 GB) by another 5 GB to give it a
total size of 40 GB.

To expand the real capacity of a thin-provisioned volume, you can use the -rsize option, as
shown in Example 10-68. This command increases the real capacity of the volume_B volume
to 55 GB. The virtual capacity of the volume is unchanged.

Example 10-68 lsvdisk


IBM_2145:ITSO_SVC_DH8:superuser>lsvdisk volume_B
id 26
capacity 100.00GB
type striped
.
.
copy_id 0
status online


used_capacity 0.41MB
real_capacity 50.02GB
free_capacity 50.02GB
overallocation 199
autoexpand on
warning 80
grainsize 32
se_copy yes
IBM_2145:ITSO_SVC_DH8:superuser>expandvdisksize -rsize 5 -unit gb volume_B
IBM_2145:ITSO_SVC_DH8:superuser>lsvdisk volume_B
id 26
name volume_B
capacity 100.00GB
type striped
.
.
copy_id 0
status online
used_capacity 0.41MB
real_capacity 55.02GB
free_capacity 55.02GB
overallocation 181
autoexpand on
warning 80
grainsize 32
se_copy yes

Important: If a volume is expanded, its type becomes striped even if it was previously
sequential or in image mode. If there are not enough extents to expand your volume to the
specified size, you receive the following error message:
CMMVC5860E Ic_failed_vg_insufficient_virtual_extents

10.6.13 Assigning a volume to a host


Use the mkvdiskhostmap command to map a volume to a host. When run, this command
creates a mapping between the volume and the specified host, which presents this volume to
the host as though the disk was directly attached to the host. It is only after this command is
run that the host can perform I/O to the volume. Optionally, a SCSI LUN ID can be assigned to
the mapping.

When the HBA on the host scans for devices that are attached to it, the HBA discovers all of
the volumes that are mapped to its FC ports. When the devices are found, each one is
allocated an identifier (SCSI LUN ID).

For example, the first disk that is found is generally SCSI LUN 1. You can control the order in
which the HBA discovers volumes by assigning the SCSI LUN ID, as required. If you do not
specify a SCSI LUN ID, the system automatically assigns the next available SCSI LUN ID,
based on any mappings that exist with that host.

By using the volume and host definition that we created in the previous sections, we assign
volumes to hosts that are ready for their use. We use the mkvdiskhostmap command, as
shown in Example 10-69.

Example 10-69 mkvdiskhostmap command


IBM_2145:ITSO_SVC_DH8:superuser>mkvdiskhostmap -host Almaden volume_B
Virtual Disk to Host map, id [0], successfully created


IBM_2145:ITSO_SVC_DH8:superuser>mkvdiskhostmap -host Almaden volume_C


Virtual Disk to Host map, id [1], successfully created

This command displays volume_B and volume_C that are assigned to host Almaden, as shown
in Example 10-70.

Example 10-70 lshostvdiskmap -delim command


IBM_2145:ITSO_SVC_DH8:superuser>lshostvdiskmap -delim :
id:name:SCSI_id:vdisk_id:vdisk_name:vdisk_UID
2:Almaden:0:26:volume_B:6005076801AF813F1000000000000020
2:Almaden:1:27:volume_C:6005076801AF813F1000000000000021

Assigning a specific LUN ID to a volume: The optional -scsi scsi_num parameter can
help assign a specific LUN ID to a volume that is to be associated with a host. The default
(if nothing is specified) is to increment based on what is already assigned to the host.

Certain HBA device drivers stop when they find a gap in the SCSI LUN IDs, as shown in the
following examples:
• Volume 1 is mapped to Host 1 with SCSI LUN ID 1.
• Volume 2 is mapped to Host 1 with SCSI LUN ID 2.
• Volume 3 is mapped to Host 1 with SCSI LUN ID 4.

When the device driver scans the HBA, it might stop after discovering volumes 1 and 2
because no SCSI LUN is mapped with ID 3.

Important: Ensure that the SCSI LUN ID allocation is contiguous.
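
For example, the following command is a sketch of assigning an explicit SCSI LUN ID by using
the -scsi parameter. The Almaden host is from the earlier examples; the volume name and
LUN ID are arbitrary:

IBM_2145:ITSO_SVC_DH8:superuser>mkvdiskhostmap -host Almaden -scsi 3 volume_D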

It is not possible to map the same volume to the same host more than once at separate SCSI
LUN IDs (Example 10-71).

Example 10-71 mkvdiskhostmap command


IBM_2145:ITSO_SVC_DH8:superuser>mkvdiskhostmap -host Siam volume_A
Virtual Disk to Host map, id [0], successfully created

This command maps the volume that is called volume_A to the host that is called Siam.

All tasks that are required to assign a volume to an attached host are complete.

10.6.14 Showing volumes to host mapping


Use the lshostvdiskmap command to show the volumes that are assigned to a specific host,
as shown in Example 10-72.

Example 10-72 lshostvdiskmap command


IBM_2145:ITSO_SVC_DH8:superuser>lshostvdiskmap -delim , Siam
id,name,SCSI_id,vdisk_id,vdisk_name,wwpn,vdisk_UID
3,Siam,0,0,volume_A,210000E08B18FF8A,60050768018301BF280000000000000C

From this command, you can see that the host Siam has only one assigned volume that is
called volume_A. The SCSI LUN ID is also shown, which is the ID by which the volume is


presented to the host. If no host is specified, all defined host-to-volume mappings are
returned.

Specifying the flag before the host name: Although the -delim flag normally comes at
the end of the command string, in this case, you must specify this flag before the host
name. Otherwise, it returns the following message:
CMMVC6070E An invalid or duplicated parameter, unaccompanied argument, or
incorrect argument sequence has been detected. Ensure that the input is as per
the help.

10.6.15 Deleting a volume to host mapping


When you are deleting a volume mapping, you are not deleting the volume, only the
connection from the host to the volume. If you mapped a volume to a host by mistake or you
want to reassign the volume to another host, use the rmvdiskhostmap command to unmap a
volume from a host, as shown in Example 10-73.

Example 10-73 rmvdiskhostmap command


IBM_2145:ITSO_SVC_DH8:superuser>rmvdiskhostmap -host Tiger volume_D

This command unmaps the volume that is called volume_D from the host that is called Tiger.

10.6.16 Migrating a volume


You might want to migrate volumes from one set of MDisks to another set of MDisks to
decommission an old disk subsystem, to achieve better balanced performance across your
virtualized environment, or to migrate data into the SVC environment transparently by using
image mode. For more information about migration, see Chapter 6, “Data migration” on
page 237.

Important: After migration is started, it continues until completion unless it is stopped or


suspended by an error condition or the volume that is being migrated is deleted.

As you can see from the parameters that are shown in Example 10-74, before you can
migrate your volume, you must know the name of the volume that you want to migrate and the
name of the storage pool to which you want to migrate it. To discover the names, run the
lsvdisk and lsmdiskgrp commands.

After you know these details, you can run the migratevdisk command, as shown in
Example 10-74.

Example 10-74 migratevdisk command


IBM_2145:ITSO_SVC_DH8:superuser>migratevdisk -mdiskgrp STGPool_DS5000-1 -vdisk volume_C

This command moves volume_C to the storage pool named STGPool_DS5000-1.


Tips: If insufficient extents are available within your target storage pool, you receive an
error message. Ensure that the source MDisk group and target MDisk group have the
same extent size.

By using the optional -threads parameter, you can assign a priority to the migration process.
The default is 4, which is the highest priority setting. However, if you want the process to
take a lower priority than other types of I/O, you can specify 3, 2, or 1.
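
For example, the following command is a sketch that starts the same migration at a lower
priority by using the -threads parameter (the pool and volume names are from Example 10-74;
the value 2 is arbitrary):

IBM_2145:ITSO_SVC_DH8:superuser>migratevdisk -mdiskgrp STGPool_DS5000-1 -threads 2 -vdisk volume_C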

You can run the lsmigrate command at any time to see the status of the migration process,
as shown in Example 10-75.

Example 10-75 lsmigrate command


IBM_2145:ITSO_SVC_DH8:superuser>lsmigrate
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 27
migrate_target_mdisk_grp 2
max_thread_count 4
migrate_source_vdisk_copy_id 0

IBM_2145:ITSO_SVC_DH8:superuser>lsmigrate
migrate_type MDisk_Group_Migration
progress 76
migrate_source_vdisk_index 27
migrate_target_mdisk_grp 2
max_thread_count 4
migrate_source_vdisk_copy_id
0

Progress: The progress is shown as a percentage of completion. When lsmigrate no longer
returns output for the migration, the process has finished.

10.6.17 Migrating a fully managed volume to an image mode volume


Migrating a fully managed volume to an image mode volume allows the SVC to be removed
from the data path, which might be useful where the SVC is used as a data mover appliance.
You can use the migratetoimage command.

To migrate a fully managed volume to an image mode volume, the following rules apply:
• The destination MDisk must be greater than or equal to the size of the volume.
• The MDisk that is specified as the target must be in an unmanaged state.
• Regardless of the mode in which the volume starts, it is reported as being in managed mode
during the migration.
• Both of the MDisks that are involved are reported as being in image mode during
the migration.
• If the migration is interrupted by a system recovery or cache problem, the migration
resumes after the recovery completes.


Example 10-76 shows an example of the command.

Example 10-76 migratetoimage command


IBM_2145:ITSO_SVC_DH8:superuser>migratetoimage -vdisk volume_A -mdisk mdisk10 -mdiskgrp
STGPool_IMAGE

In this example, the data from volume_A is migrated onto mdisk10, and the MDisk is placed
into the STGPool_IMAGE storage pool.

10.6.18 Shrinking a volume


The shrinkvdisksize command reduces the capacity that is allocated to the particular
volume by the amount that you specify. You cannot shrink the real size of a thin-provisioned
volume to less than its used size. All capacities (including changes) must be in multiples of
512 bytes. An entire extent is reserved even if it is only partially used. The default capacity
units are MBs.

You can use this command to shrink the physical capacity that is allocated to a particular
volume by the specified amount. You also can use this command to shrink the virtual capacity
of a thin-provisioned volume without altering the physical capacity that is assigned to the
volume. Use the following parameters:
• For a non-thin-provisioned volume, use the -size parameter.
• For a thin-provisioned volume’s real capacity, use the -rsize parameter.
• For the thin-provisioned volume’s virtual capacity, use the -size parameter.

When the virtual capacity of a thin-provisioned volume is changed, the warning threshold is
automatically scaled to match. The new threshold is stored as a percentage.

The system arbitrarily reduces the capacity of the volume by removing a partial extent, one
extent, or multiple extents from those extents that are allocated to the volume. You cannot
control which extents are removed; therefore, you cannot assume that it is unused space that
is removed.

Image mode volumes cannot be reduced in size. Instead, they must first be migrated to fully
managed mode. To run the shrinkvdisksize command on a mirrored volume, all copies of
the volume must be synchronized.

Important: Consider the following guidelines when you are shrinking a disk:
• If the volume contains data, do not shrink the disk.
• Certain operating systems or file systems use the outer edge of the disk for
performance reasons. This command can shrink a FlashCopy target volume to the
same capacity as the source.
• Before you shrink a volume, validate that the volume is not mapped to any host objects.
If the volume is mapped, data is displayed. You can determine the exact capacity of the
source or master volume by issuing the svcinfo lsvdisk -bytes vdiskname command.
Shrink the volume by the required amount by issuing the shrinkvdisksize -size
disk_size -unit b | kb | mb | gb | tb | pb vdisk_name | vdisk_id command.

Assuming that your operating system supports it, you can use the shrinkvdisksize command
to decrease the capacity of a volume, as shown in Example 10-77.


Example 10-77 shrinkvdisksize command


IBM_2145:ITSO_SVC_DH8:superuser>shrinkvdisksize -size 44 -unit gb volume_D

This command shrinks a volume that is called volume_D from a total size of 80 GB by 44 GB,
to a new total size of 36 GB.
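
To reduce only the real capacity of a thin-provisioned volume without changing its virtual
capacity, use the -rsize parameter instead. The following command is a sketch that assumes
the thin-provisioned volume_B volume from 10.6.12, “Expanding a volume” and an arbitrary
reduction of 5 GB:

IBM_2145:ITSO_SVC_DH8:superuser>shrinkvdisksize -rsize 5 -unit gb volume_B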

10.6.19 Showing a volume on an MDisk


Use the lsmdiskmember command to display information about the volumes that use
space on a specific MDisk, as shown in Example 10-78.

Example 10-78 lsmdiskmember command


IBM_2145:ITSO_SVC_DH8:superuser>lsmdiskmember mdisk8
id copy_id
24 0
27 0

This command displays a list of all of the volume IDs that correspond to the volume copies
that use mdisk8.

To correlate the IDs that are displayed in this output to volume names, we can run the
lsvdisk command. For more information, see 10.6, “Working with volumes” on page 595.
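
For example, volume ID 24 from the previous output can be resolved to its name by querying it
directly. The following abbreviated output is a sketch that reuses the Volume_new volume that
was created in Example 10-62 on page 605:

IBM_2145:ITSO_SVC_DH8:superuser>lsvdisk 24
id 24
name Volume_new
...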

10.6.20 Showing which volumes are using a storage pool


Use the lsvdisk -filtervalue command to see which volumes are part of a specific storage
pool, as shown in Example 10-79 on page 615. This command shows all of the volumes that
are part of the storage pool that is named STGPool_DS3500-2.

Example 10-79 lsvdisk -filtervalue: VDisks in the managed disk group (MDG)
IBM_2145:ITSO_SVC_DH8:superuser>lsvdisk -filtervalue mdisk_grp_name=STGPool_DS3500-2 -delim
,
id,name,IO_group_id,IO_group_name,status,mdisk_grp_id,mdisk_grp_name,capacity,type,FC_id,FC
_name,RC_id,RC_name,vdisk_UID,fc_map_count,copy_count,fast_write_state,se_copy_count,RC_cha
nge
7,W2K3_SRV2_VOL01,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F1
000000000000008,0,1,empty,0,0,no
8,W2K3_SRV2_VOL02,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F1
000000000000009,0,1,empty,0,0,no
9,W2K3_SRV2_VOL03,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F1
00000000000000A,0,1,empty,0,0,no
10,W2K3_SRV2_VOL04,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F
100000000000000B,0,1,empty,0,0,no
11,W2K3_SRV2_VOL05,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F
100000000000000C,0,1,empty,0,0,no
12,W2K3_SRV2_VOL06,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F
100000000000000D,0,1,empty,0,0,no
16,AIX_SRV2_VOL01,0,io_grp0,online,1,STGPool_DS3500-2,20.00GB,striped,,,,,6005076801AF813F1
000000000000011,0,1,empty,0,0,no


10.6.21 Showing which MDisks are used by a specific volume


Use the lsvdiskmember command to show from which MDisks a specific volume’s extents
came, as shown in Example 10-80.

Example 10-80 lsvdiskmember command


IBM_2145:ITSO_SVC_DH8:superuser>lsvdiskmember 0
id
4
5
6
7

If you want to know more about these MDisks, you can run the lsmdisk command, as
described in 10.2, “New commands and functions” on page 573 (by using the ID that is
displayed in Example 10-80 rather than the name).

10.6.22 Showing from which storage pool a volume has its extents
Use the lsvdisk command to show to which storage pool a specific volume belongs, as
shown in Example 10-81.

Example 10-81 lsvdisk command: Storage pool name


IBM_2145:ITSO_SVC_DH8:superuser>lsvdisk Volume_D
id 25
name Volume_D
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
capacity 10.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AF813F100000000000001E
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 1
filesystem
mirror_write_priority latency

copy_id 0
status online
sync yes
primary yes


mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 2.02GB
free_capacity 2.02GB
overallocation 496
autoexpand on
warning 80
grainsize 32
se_copy yes
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 2.02GB

To learn more about these storage pools, you can run the lsmdiskgrp command, as
described in 10.3.10, “Working with a storage pool” on page 584.

10.6.23 Showing the host to which the volume is mapped


To show the hosts to which a specific volume was assigned, run the lsvdiskhostmap
command, as shown in Example 10-82.

Example 10-82 lsvdiskhostmap command


IBM_2145:ITSO_SVC_DH8:superuser>lsvdiskhostmap -delim , volume_B
id,name,SCSI_id,host_id,host_name,vdisk_UID
26,volume_B,0,2,Almaden,6005076801AF813F1000000000000020

This command shows the host or hosts to which the volume_B volume was mapped. Duplicate
entries are normal because multiple paths exist between the clustered system and the host. To
ensure that the operating system on the host sees the disk only one time, you must install
and configure a multipath software application, such as the IBM Subsystem Device Driver (SDD).

Specifying the -delim flag: Although the optional -delim flag normally comes at the end
of the command string, you must specify this flag before the volume name in this case.
Otherwise, the command does not return any data.

10.6.24 Showing the volume to which the host is mapped


To show the volume to which a specific host was assigned, run the lshostvdiskmap
command, as shown in Example 10-83.

Example 10-83 lshostvdiskmap command example


IBM_2145:ITSO_SVC_DH8:superuser>lshostvdiskmap -delim , Almaden
id,name,SCSI_id,vdisk_id,vdisk_name,vdisk_UID
2,Almaden,0,26,volume_B,60050768018301BF2800000000000005
2,Almaden,1,27,volume_A,60050768018301BF2800000000000004


This command shows which volumes are mapped to the host called Almaden.

Specifying the -delim flag: Although the optional -delim flag normally comes at the end
of the command string, you must specify this flag before the volume name in this case.
Otherwise, the command does not return any data.

10.6.25 Tracing a volume from a host back to its physical disk


In many cases, you must verify exactly which physical disk is presented to the host, for
example, from which storage pool a specific volume comes. However, from the host side, it is
not possible for the server administrator who is using the GUI to see on which physical disks
the volumes are running.

Instead, you must enter the command that is shown in Example 10-84 from your multipath
command prompt.

Complete the following steps:


1. On your host, run the datapath query device command. You see a long disk serial
number for each vpath device, as shown in Example 10-84.

Example 10-84 datapath query device command


DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 60050768018301BF2800000000000005
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 20 0
1 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 2343 0

DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 60050768018301BF2800000000000004
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 2335 0
1 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 0 0

DEV#: 2 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 60050768018301BF2800000000000006
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 2331 0
1 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 0 0

State: In Example 10-84, the state of each path is OPEN. Sometimes, the state is
CLOSED. This state does not necessarily indicate a problem because it might be a
result of the path’s processing stage.

2. Run the lshostvdiskmap command to return a list of all assigned volumes, as shown in
Example 10-85.

Example 10-85 lshostvdiskmap command


IBM_2145:ITSO_SVC_DH8:superuser>lshostvdiskmap -delim , Almaden
id,name,SCSI_id,vdisk_id,vdisk_name,vdisk_UID
2,Almaden,0,26,volume_B,60050768018301BF2800000000000005
2,Almaden,1,27,volume_A,60050768018301BF2800000000000004


2,Almaden,2,28,volume_C,60050768018301BF2800000000000006

Look for the disk serial number that matches your datapath query device output. This
host was defined in our SVC as Almaden.
3. Run the lsvdiskmember vdiskname command for the MDisk or a list of the MDisks that
make up the specified volume, as shown in Example 10-86.

Example 10-86 lsvdiskmember command


IBM_2145:ITSO_SVC_DH8:superuser>lsvdiskmember volume_E
id
0
1
2
3
4
10
11
13
15
16
17

4. Query the MDisks with the lsmdisk mdiskID command to discover their controller and
LUN information, as shown in Example 10-87. The output displays the controller name
and the controller LUN ID to help you track back to a LUN within the disk subsystem (if
you gave your controller a unique name, such as a serial number).

Example 10-87 lsmdisk command


IBM_2145:ITSO_SVC_DH8:superuser>lsmdisk 0
id 0
name mdisk0
status online
mode managed
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
capacity 128.0GB
quorum_index 1
block_size 512
controller_name ITSO-DS3500
ctrl_type 4
ctrl_WWNN 20080080E51B09E8
controller_id 2
path_count 4
max_path_count 4
ctrl_LUN_# 0000000000000000
UID 60080e50001b0b62000007b04e731e4d00000000000000000000000000000000
preferred_WWPN 20580080E51B09E8
active_WWPN 20580080E51B09E8
fast_write_state empty
raid_status
raid_level
redundancy
strip_size
spare_goal
spare_protection_min
balanced
tier generic_hdd


10.7 Scripting under the CLI for SAN Volume Controller task
automation

Command prefix changes: The svctask and svcinfo command prefixes are no longer
necessary when a command is run. If you have existing scripts that use those prefixes,
they continue to function. You do not need to change the scripts.

The use of scripting constructs works better for the automation of regular operational jobs.
You can use available shells to develop scripts. Scripting enhances the productivity of SVC
administrators and the integration of their storage virtualization environment. You can create
your own customized scripts to automate many tasks for completion at various times and run
them through the CLI.

We suggest that you keep the scripting as simple as possible in large SAN environments
where scripting commands are used. It is harder to manage fallback, documentation, and the
verification of a successful script before execution in a large SAN environment.

In this section, we present an overview of how to automate various tasks by creating scripts
by using the SVC CLI.

10.7.1 Scripting structure


When you create scripts to automate the tasks on the SVC, use the structure that is shown in
Figure 10-2 on page 620.

The structure consists of three steps: create a connection (SSH) to the SVC, run the
commands (through scheduled or manual activation), and perform logging.

Figure 10-2 Scripting structure for SVC task automation

Creating a Secure Shell connection to the SAN Volume Controller

Secure Shell Key: The use of a Secure Shell (SSH) key is optional. (You can use a user
ID and password to access the system.) However, we suggest the use of an SSH key for
security reasons. We provide a sample of its use in this section.


When you run a script that creates a connection to the SVC, you must have access to a
private key that corresponds to a public key that was previously uploaded to the SVC.

The key is used to establish the SSH connection that is needed to use the CLI on the SVC. If
the SSH key pair is generated without a passphrase, you can connect without the need for
special scripting to pass in the passphrase.

On UNIX systems, you can use the ssh command to create an SSH connection with the SVC.
On Windows systems, you can use a utility that is called plink.exe (which is provided with the
PuTTY tool) to create an SSH connection with the SVC. In the following examples, we use
plink to create the SSH connection to the SVC.

Running the commands


For more information about the correct syntax and an explanation of each command when
you are using the CLI, see IBM System Storage SAN Volume Controller Command-Line
Interface User’s Guide, GC27-2287, which is available at this website:
https://ibm.biz/BdHnKF

When you use the CLI, not all commands provide a response to determine the status of the
started command. Therefore, always create checks that can be logged for monitoring and
troubleshooting purposes.

Connecting to the SAN Volume Controller by using a predefined SSH


connection
The easiest way to create an SSH connection to the SVC is when plink can call a predefined
PuTTY session.

Define a session, including the following information:


• The auto-login user name. Set the auto-login username to your SVC superuser user name
(for example, superuser). Set this parameter by clicking Connection → Data, as shown in
Figure 10-3.

Figure 10-3 Auto-login configuration


• The private key for authentication (for example, icat.ppk). This key is the private key that
you created. Set this parameter by clicking Connection → SSH → Auth, as shown in
Figure 10-4 on page 622.

Figure 10-4 An ssh private key configuration

• The IP address of the SVC clustered system. Set this parameter by clicking Session, as
shown in Figure 10-5.

Figure 10-5 IP address

Enter the following information:


– A session name. Our example uses ITSO_SVC_DH8.


– Our PuTTY version is 0.63.


– To use this predefined PuTTY session, use the following syntax:
plink ITSO_SVC_DH8
– If a predefined PuTTY session is not used, use the following syntax:
plink superuser@<your cluster ip add> -i "C:\DirectoryPath\KeyName.PPK"
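
Putting these pieces together, the following Windows batch file is a minimal sketch of a
logged query. The log file path and the lsvdisk query are hypothetical choices, and
ITSO_SVC_DH8 is the predefined PuTTY session that is described above:

@echo off
rem Run an SVC CLI query over the predefined PuTTY session and append the output to a log file
set LOGFILE=C:\SVCScripts\lsvdisk.log
echo ---- %date% %time% ---- >> %LOGFILE%
plink ITSO_SVC_DH8 "lsvdisk -delim ," >> %LOGFILE% 2>&1
if errorlevel 1 echo The lsvdisk command returned an error >> %LOGFILE%

A script such as this one can be started manually or through a scheduler, which matches the
scheduled or manual activation that is shown in Figure 10-2 on page 620.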

IBM provides a suite of scripting tools that are based on Perl. You can download these
scripting tools from this website:
http://www.alphaworks.ibm.com/tech/svctools

10.8 Managing the clustered system by using the CLI


In the following sections, we describe how to perform system administration.

10.8.1 Viewing clustered system properties

Important changes: The following changes were made since SVC 6.3:
• The svcinfo lscluster command was changed to lssystem.
• The svctask chcluster command was changed to chsystem, and several optional
parameters were moved to new commands. For example, to change the IP address of
the system, you can now use the chsystemip command. All of the old commands are
maintained for compatibility.

Use the lssystem command to display summary information about the clustered system, as
shown in Example 10-88.

Example 10-88 lssystem command


IBM_2145:ITSO_SVC_DH8:superuser>lssystem
id 000002007F600A10
name ITSO_SVC_DH8
location local
partnership
total_mdisk_capacity 825.0GB
space_in_mdisk_grps 571.0GB
space_allocated_to_vdisks 75.05GB
total_free_space 750.0GB
total_vdiskcopy_capacity 85.00GB
total_used_capacity 75.00GB
total_overallocation 10
total_vdisk_capacity 75.00GB
total_allocated_extent_capacity 81.00GB
statistics_status on
statistics_frequency 15
cluster_locale en_US
time_zone 520 US/Pacific
code_level 7.4.0.0 (build 103.11.1410200000)
console_IP 10.18.228.140:443
id_alias 000002007F600A10
gm_link_tolerance 300
gm_inter_cluster_delay_simulation 0


gm_intra_cluster_delay_simulation 0
gm_max_host_delay 5
email_reply [email protected]
email_contact Support team
email_contact_primary 123456789
email_contact_alternate 123456789
email_contact_location IBM
email_contact2
email_contact2_primary
email_contact2_alternate
email_state stopped
inventory_mail_interval 7
cluster_ntp_IP_address
cluster_isns_IP_address
iscsi_auth_method none
iscsi_chap_secret
auth_service_configured no
auth_service_enabled no
auth_service_url
auth_service_user_name
auth_service_pwd_set no
auth_service_cert_set no
auth_service_type tip
relationship_bandwidth_limit 50
tier ssd
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier enterprise
tier_capacity 571.00GB
tier_free_capacity 493.00GB
tier nearline
tier_capacity 0.00MB
tier_free_capacity 0.00MB
has_nas_key no
layer replication
rc_buffer_size 48
compression_active no
compression_virtual_capacity 0.00MB
compression_compressed_capacity 0.00MB
compression_uncompressed_capacity 0.00MB
cache_prefetch on
email_organization IBM
email_machine_address Street
email_machine_city City
email_machine_state CA
email_machine_zip 99999
email_machine_country CA
total_drive_raw_capacity 0
compression_destage_mode off
local_fc_port_mask 1111111111111111111111111111111111111111111111111111111111111
partner_fc_port_mask 11111111111111111111111111111111111111111111111111111111111
high_temp_mode off
topology standard
topology_status
rc_auth_method none
vdisk_protection_time 60
vdisk_protection_enabled yes
product_name IBM SAN Volume Controller


Use the lssystemstats command to display the most recent values of all node statistics
across all nodes in a clustered system, as shown in Example 10-89.

Example 10-89 lssystemstats command


IBM_2145:ITSO_SVC_DH8:superuser>lssystemstats
stat_name stat_current stat_peak stat_peak_time
cpu_pc 1 1 110927162859
fc_mb 0 0 110927162859
fc_io 7091 7314 110927162524
sas_mb 0 0 110927162859
sas_io 0 0 110927162859
iscsi_mb 0 0 110927162859
iscsi_io 0 0 110927162859
write_cache_pc 0 0 110927162859
total_cache_pc 0 0 110927162859
vdisk_mb 0 0 110927162859
vdisk_io 0 0 110927162859
vdisk_ms 0 0 110927162859
mdisk_mb 0 0 110927162859
mdisk_io 0 0 110927162859
mdisk_ms 0 0 110927162859
drive_mb 0 0 110927162859
drive_io 0 0 110927162859
drive_ms 0 0 110927162859
vdisk_r_mb 0 0 110927162859
vdisk_r_io 0 0 110927162859
vdisk_r_ms 0 0 110927162859
vdisk_w_mb 0 0 110927162859
vdisk_w_io 0 0 110927162859
vdisk_w_ms 0 0 110927162859
mdisk_r_mb 0 0 110927162859
mdisk_r_io 0 0 110927162859
mdisk_r_ms 0 0 110927162859
mdisk_w_mb 0 0 110927162859
mdisk_w_io 0 0 110927162859
mdisk_w_ms 0 0 110927162859
drive_r_mb 0 0 110927162859
drive_r_io 0 0 110927162859
drive_r_ms 0 0 110927162859
drive_w_mb 0 0 110927162859
drive_w_io 0 0 110927162859
drive_w_ms 0 0 110927162859

10.8.2 Changing system settings


Use the chsystem command to change the settings of the system. This command modifies
specific features of a clustered system. You can change multiple features by issuing a single
command.

All command parameters are optional; however, you must specify at least one parameter.

Important: Changing the speed on a running system breaks I/O service to the attached
hosts. Before the fabric speed is changed, stop the I/O from the active hosts and force
these hosts to flush any cached data by unmounting volumes (for UNIX host types) or by
removing drive letters (for Windows host types). You might need to reboot specific hosts to
detect the new fabric speed.

Example 10-90 shows configuring the Network Time Protocol (NTP) IP address.


Example 10-90 chsystem command


IBM_2145:ITSO_SVC_DH8:superuser>chsystem -ntpip 10.200.80.1

10.8.3 iSCSI configuration


SVC 5.1 introduced the IP-based Small Computer System Interface (iSCSI) as a supported
method of communication between the SVC and hosts. All back-end storage and intracluster
communication still use FC and the SAN; therefore, iSCSI cannot be used for that type of
communication.

For more information about how iSCSI works, see Chapter 2, “IBM SAN Volume Controller”
on page 11. In this section, we show how we configured our system for use with iSCSI.

We configured our nodes to use the primary and secondary Ethernet ports for iSCSI and to
contain the clustered system IP. When we configured our nodes to be used with iSCSI, we did
not affect our clustered system IP address. The clustered system IP address can be changed,
as described in 10.8.4, “Modifying IP addresses” on page 627.

Important: You can have more than a one-to-one relationship between IP addresses and
physical connections. A four-to-one (4:1) relationship is possible, which consists of two IPv4
addresses plus two IPv6 addresses (four in total) on one physical connection per port per
node.

Tip: When you are reconfiguring IP ports, be aware that configured iSCSI connections
must reconnect if changes are made to the IP addresses of the nodes.

Two methods are available to perform iSCSI authentication by using the Challenge Handshake
Authentication Protocol (CHAP): for the whole clustered system or per host connection.
Example 10-91 shows configuring CHAP for the whole clustered system.

Example 10-91 Setting a CHAP secret for the entire clustered system to passw0rd
IBM_2145:ITSO_SVC_DH8:superuser>chsystem -iscsiauthmethod chap -chapsecret passw0rd
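
To set a CHAP secret for a single host connection instead, the chhost command with the
-chapsecret parameter can be used. The following command is a sketch that assumes an
existing iSCSI host definition; the host name iSCSI_Host1 is hypothetical:

IBM_2145:ITSO_SVC_DH8:superuser>chhost -chapsecret passw0rd iSCSI_Host1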

In our scenario, our clustered system IP address is 9.64.210.64, which is not affected during
our configuration of the node’s IP addresses.

We start by listing our ports by using the lsportip command (not shown). We see that we
have two ports per node with which to work. Both ports can have two IP addresses that can
be used for iSCSI.

We configure the secondary port in both nodes in our I/O Group, as shown in Example 10-92.

Example 10-92 Configuring the secondary Ethernet port on both SVC nodes
IBM_2145:ITSO_SVC_DH8:superuser>cfgportip -node 1 -ip 9.8.7.1 -gw 9.0.0.1 -mask
255.255.255.0 2
IBM_2145:ITSO_SVC_DH8:superuser>cfgportip -node 2 -ip 9.8.7.3 -gw 9.0.0.1 -mask
255.255.255.0 2

While both nodes are online, each node is available to iSCSI hosts on the IP address that we
configured. iSCSI failover between nodes is enabled automatically. Therefore, if a node goes
offline for any reason, its partner node in the I/O Group becomes available on the failed


node’s port IP address. This design ensures that hosts can continue to perform I/O. The
lsportip command displays the port IP addresses that are active on each node.

10.8.4 Modifying IP addresses


We can use both IP ports of the nodes. However, all IP information is required the first time
that you configure a second port because port 1 on the system must always have one stack
that is fully configured.

Now, two active system ports are on the configuration node. If the clustered system IP
address is changed, the open command-line shell closes during the processing of the
command, and you must reconnect to the new IP address if you were connected through that
port. If a node cannot rejoin the clustered system, you can start the node in service mode. In
this mode, the node can be accessed as a stand-alone node by using the service IP address.

List the IP addresses of the clustered system by issuing the lssystemip command, as shown
in Example 10-93.

Example 10-93 lssystemip command


IBM_2145:ITSO_SVC_DH8:superuser>lssystemip
cluster_id cluster_name location port_id IP_address subnet_mask gateway
IP_address_6 prefix_6 gateway_6
000002007F600A10 ITSO_SVC_DH8 local 1 10.18.228.140 255.255.255.0 10.18.228.1
0000:0000:0000:0000:0000:ffff:0a12:e48c 24 0000:0000:0000:0000:0000:ffff:0a12:e401
000002007F600A10 ITSO_SVC_DH8 local 2

Modify the IP address by running the chsystemip command. You can specify a static IP
address or have the system assign a dynamic IP address, as shown in Example 10-94.

Example 10-94 chsystemip -systemip


IBM_2145:ITSO_SVC_DH8:superuser>chsystemip -systemip 10.20.133.5 -gw 10.20.135.1 -mask
255.255.255.0 -port 1

This command changes the IP address of the clustered system to 10.20.133.5.

Important: If you specify a new system IP address, the existing communication with the
system through the CLI is broken and the PuTTY application automatically closes. You
must relaunch the PuTTY application and point to the new IP address, but your SSH key
still works.

List the IP service addresses of the clustered system by running the lsserviceip command.

10.8.5 Supported IP address formats


Table 10-2 lists the IP address formats.


Table 10-2 ip_address_list formats


IP type ip_address_list format

IPv4 (no port set, so SVC uses the default) 1.2.3.4

IPv4 with specific port 1.2.3.4:22

Full IPv6, default port 1234:1234:0001:0123:1234:1234:1234:1234

Full IPv6, default port, leading zeros suppressed 1234:1234:1:123:1234:1234:1234:1234

Full IPv6 with port [2002:914:fc12:848:209:6bff:fe8c:4ff6]:23

Zero-compressed IPv6, default port 2002::4ff6

Zero-compressed IPv6 with port [2002::4ff6]:23

The required tasks to change the IP addresses of the clustered system are complete.

10.8.6 Using the ping command to diagnose IP configuration problems


Use the ping command to diagnose IP configuration problems. The command checks whether
the specified destination IP address is accessible from the node on which the command is run,
by using the specified source IP address.

Use this command to ping from any port on any node as long as you are logged on to the
service assistant on that node.

Example 10-95 shows an invocation example of the ping command from 10.18.228.140 to
10.18.228.142.

Example 10-95 ping command


IBM_2145:ITSO SVC DH8:superuser>ping -srcip4 10.18.228.140 10.18.228.142
PING 10.18.228.142 (10.18.228.142) from 10.18.228.140 : 56(84) bytes of data.
64 bytes from 10.18.228.142: icmp_seq=1 ttl=64 time=0.034 ms
64 bytes from 10.18.228.142: icmp_seq=2 ttl=64 time=0.024 ms
64 bytes from 10.18.228.142: icmp_seq=3 ttl=64 time=0.012 ms
64 bytes from 10.18.228.142: icmp_seq=4 ttl=64 time=0.014 ms
64 bytes from 10.18.228.142: icmp_seq=5 ttl=64 time=0.036 ms

--- 10.18.228.142 ping statistics ---


5 packets transmitted, 5 received, 0% packet loss, time 3996ms
rtt min/avg/max/mdev = 0.012/0.024/0.036/0.009 ms

10.8.7 Setting the clustered system time zone and time


Use the -timezone parameter to specify the numeric ID of the time zone that you want to set.
Run the lstimezones command to list the time zones that are available on the system. This
command displays a list of valid time zone settings.

Tip: If you changed the time zone, you must clear the event log dump directory before you
can view the event log through the web application.


Setting the clustered system time zone


Complete the following steps to set the clustered system time zone and time:
1. Enter the showtimezone command to determine for which time zone your system is
configured, as shown in Example 10-96.

Example 10-96 showtimezone command


IBM_2145:ITSO_SVC_DH8:superuser>showtimezone
id timezone
522 UTC

2. To find the time zone code that is associated with your time zone, enter the lstimezones
command, as shown in Example 10-97. A truncated list is provided for this example. If this
setting is correct (for example, 522 UTC), go to Step 4. If the setting is incorrect, continue
with Step 3.

Example 10-97 lstimezones command


IBM_2145:ITSO_SVC_DH8:superuser>lstimezones
id timezone
.
.
507 Turkey
508 UCT
509 Universal
510 US/Alaska
511 US/Aleutian
512 US/Arizona
513 US/Central
514 US/Eastern
515 US/East-Indiana
516 US/Hawaii
517 US/Indiana-Starke
518 US/Michigan
519 US/Mountain
520 US/Pacific
521 US/Samoa
522 UTC
.
.

3. Set the time zone by running the settimezone command, as shown in Example 10-98.

Example 10-98 settimezone command


IBM_2145:ITSO_SVC_DH8:superuser>settimezone -timezone 520

4. Set the system time by running the setsystemtime command, as shown in Example 10-99.

Example 10-99 setsystemtime command


IBM_2145:ITSO_SVC_DH8:superuser>setsystemtime -time 061718402008

The format of the time is MMDDHHmmYYYY (where M is month, D is day, H is hour, m is
minute, and Y is year).

The clustered system time zone and time are now set.


10.8.8 Starting statistics collection


Statistics are collected at the end of each sampling period (as specified by the -interval
parameter). These statistics are written to a file. A file is created at the end of each sampling
period. Separate files are created for MDisks, volumes, and node statistics.

Use the startstats command to start the collection of statistics, as shown in
Example 10-100.

Example 10-100 startstats command


IBM_2145:ITSO_SVC_DH8:superuser>startstats -interval 5

Specify the interval (1 - 60) in minutes. This command starts statistics collection and gathers
data at 5-minute intervals.

Statistics collection: To verify that the statistics collection is set, display the system
properties again, as shown in Example 10-101.

Example 10-101 Statistics collection status and frequency


IBM_2145:ITSO_SVC_DH8:superuser>lssystem
statistics_status on
statistics_frequency 5
-- The output has been shortened for easier reading. --

V6.3: Starting with V6.3, the command svctask stopstats is deprecated. You can no
longer disable statistics collection.

The statistics collection is now started on the clustered system.

10.8.9 Determining the status of a copy operation


Use the lscopystatus command to determine whether a file copy operation is in progress,
as shown in Example 10-102. Only one file copy operation can be performed at a time. The
output of this command is a status of either active or inactive.

Example 10-102 lscopystatus command


IBM_2145:ITSO_SVC_DH8:superuser>lscopystatus
status
inactive

10.8.10 Shutting down a clustered system


If all input power to an SVC system is to be removed for more than a few minutes (for
example, if the machine room power is to be shut down for maintenance), it is important to
shut down the clustered system before you remove the power. If the input power is removed
from the uninterruptible power supply units without first shutting down the system and the
uninterruptible power supply units, the uninterruptible power supply units remain operational
and eventually are drained of power.

When input power is restored to the uninterruptible power supply units, they start to recharge.
However, the SVC does not permit any I/O activity to be performed to the volumes until the
uninterruptible power supply units are charged enough to enable all of the data on the SVC
nodes to be destaged in a subsequent unexpected power loss. Recharging the uninterruptible
power supply can take up to two hours.

Shutting down the clustered system before input power is removed to the uninterruptible
power supply units prevents the battery power from being drained. It also makes it possible for
I/O activity to be resumed when input power is restored.

Complete the following steps to shut down the system:


1. Use the stopsystem command to shut down your SVC system, as shown in
Example 10-103.

Example 10-103 stopsystem command


IBM_2145:ITSO_SVC_DH8:superuser>stopsystem
Are you sure that you want to continue with the shut down?

This command shuts down the SVC clustered system. All data is flushed to disk before the
power is removed. You lose administrative contact with your system and the PuTTY
application automatically closes.
2. You are presented with the following message:
Warning: Are you sure that you want to continue with the shut down?
Ensure that you stopped all FlashCopy mappings, Metro Mirror (remote copy)
relationships, data migration operations, and forced deletions before you continue. Enter y
in response to this message to run the command. Entering anything other than y or Y
results in the command not running. In either case, no feedback is displayed.

Important: Before a clustered system is shut down, ensure that all I/O operations are
stopped that are destined for this system because you lose all access to all volumes
that are provided by this system. Failure to do so can result in failed I/O operations
being reported to the host operating systems.

Begin the process of quiescing all I/O to the system by stopping the applications on the
hosts that are using the volumes that are provided by the clustered system.

We completed the tasks that are required to shut down the system. To shut down the
uninterruptible power supply units, press the power button on the front panel of each
uninterruptible power supply unit.

Restarting the system: To restart the clustered system, you must first restart the
uninterruptible power supply units by pressing the power button on their front panels. Then,
press the power-on button on the service panel of one of the nodes within the system. After
the node is fully booted (for example, displaying Cluster: on line 1 and the cluster name
on line 2 of the panel), you can start the other nodes in the same way.

As soon as all of the nodes are fully booted, you can reestablish administrative contact by
using PuTTY, and your system is fully operational again.

10.9 Nodes
In this section, we describe the tasks that can be performed at an individual node level.


10.9.1 Viewing node details


Use the lsnode command to view the summary information about the nodes that are defined
within the SVC environment. To view more details about a specific node, append the node
name (for example, SVC2N1) to the command.

Example 10-104 shows both of these commands.

Tip: The -delim parameter prevents the output from wrapping over multiple lines and
separates the data fields with the specified delimiter character (a comma in this example).

Example 10-104 lsnode command


IBM_2145:ITSO_SVC_DH8:superuser>lsnode -delim ,
id,name,UPS_serial_number,WWNN,status,IO_group_id,IO_group_name,config_node,UPS_unique_id,h
ardware,iscsi_name,iscsi_alias,panel_name,enclosure_id,canister_id,enclosure_serial_number
1,SVC2N1,1000739004,50050768010027E2,online,0,io_grp0,no,10000000000027E2,8G4,iqn.1986-03.c
om.ibm:2145.itsosvc1.SVC2N1,,108283,,,
2,SVC1N2,1000739005,5005076801005034,online,0,io_grp0,yes,1000000000005034,8G4,iqn.1986-03.
com.ibm:2145.itsosvc1.svc1n2,,110711,,,
3,SVC1N4,1000739006,500507680100505C,online,1,io_grp1,no,20400001C3240004,8G4,iqn.1986-03.c
om.ibm:2145.itsosvc1.svc1n4,,110775,,,
4,SVC1N3,1000739007,50050768010037E5,online,1,io_grp1,no,10000000000037E5,8G4,iqn.1986-03.c
om.ibm:2145.itsosvc1.svc1n3,,104643,,,
IBM_2145:ITSO_SVC_DH8:superuser>lsnode SVC2N1
id 1
name SVC2N1
UPS_serial_number 1000739004
WWNN 50050768010027E2
status online
IO_group_id 0
IO_group_name io_grp0
partner_node_id 2
partner_node_name SVC1N2
config_node no
UPS_unique_id 10000000000027E2
port_id 50050768014027E2
port_status active
port_speed 2Gb
port_id 50050768013027E2
port_status active
port_speed 2Gb
port_id 50050768011027E2
port_status active
port_speed 2Gb
port_id 50050768012027E2
port_status active
port_speed 2Gb
hardware 8G4
iscsi_name iqn.1986-03.com.ibm:2145.itsosvc1.SVC2N1
iscsi_alias
failover_active no
failover_name SVC1N2
failover_iscsi_name iqn.1986-03.com.ibm:2145.itsosvc1.svc1n2
failover_iscsi_alias
panel_name 108283
enclosure_id
canister_id
enclosure_serial_number


service_IP_address 10.18.228.101
service_gateway 10.18.228.1
service_subnet_mask 255.255.255.0
service_IP_address_6
service_gateway_6
service_prefix_6

10.9.2 Adding a node


After a clustered system is created by using the service panel (the front panel of one of the
SVC nodes) and the system web interface, only one node (the configuration node) is set up.

To have a fully functional SVC system, you must add a second node to the configuration. To
add a node to a clustered system, complete the following steps to gather the necessary
information:
1. Before you can add a node, you must know which unconfigured nodes are available as
candidates. Issue the lsnodecandidate command, as shown in Example 10-105.

Example 10-105 lsnodecandidate command


id panel_name UPS_serial_number UPS_unique_id hardware serial_number
product_mtm machine_signature
500507680100E85F 168167 1000739007 100000000000E85F CG8 78G0123
2145-CG8 0123-4567-89AB-CDEF

2. You must specify to which I/O Group you are adding the node. If you enter the lsnode
command, you can identify the I/O Group ID of the group to which you are adding your
node, as shown in Example 10-106.

Tip: The node that you want to add must have a separate uninterruptible power supply
unit serial number from the uninterruptible power supply unit on the first node.

Example 10-106 lsnode command

IBM_2145:ITSO_SVC_DH8:superuser>lsnode -delim ,
id,name,UPS_serial_number,WWNN,status,IO_group_id,IO_group_name,config_node,UPS
_unique_id,hardware,iscsi_name,iscsi_alias,panel_name,enclosure_id,canister_id,
enclosure_serial_number
4,SVC1N3,1000739007,50050768010037E5,online,1,io_grp1,no,10000000000037E5,8G4,i
qn.1986-03.com.ibm:2145.itsosvc1.svc1n3,,104643,,,

3. Now that you know the available nodes, use the addnode command to add the node to the
SVC clustered system configuration, as shown in Example 10-107.

Example 10-107 addnode -wwnodename command


IBM_2145:ITSO_SVC_DH8:superuser>addnode -wwnodename 50050768010037E5 -iogrp
io_grp1
Node, id [5], successfully added

This command adds the candidate node with the wwnodename of 50050768010037E5 to
the I/O Group called io_grp1.
The -wwnodename parameter (50050768010037E5) was used. However, you can also use the
-panelname parameter (104643) instead, as shown in Example 10-108. If you are standing
in front of the node, it is easier to read the panel name than it is to get the worldwide node
name (WWNN).

Example 10-108 addnode -panelname command


IBM_2145:ITSO_SVC_DH8:superuser>addnode -panelname 104643 -name SVC1N3 -iogrp
io_grp1

The optional -name parameter (SVC1N3) also was used. If you do not provide the -name
parameter, the SVC automatically generates the name nodex (where x is the ID sequence
number that is assigned internally by the SVC).

Name: If you want to provide a name, you can use letters A - Z and a - z, numbers 0 - 9,
the dash (-), and the underscore (_). The name can be 1 - 63 characters. However, the
name cannot start with a number, dash, or the word “node” because this prefix is
reserved for SVC assignment only.

4. If the lsnodecandidate command returns no information, check that your second node is
powered on and that the zones are correctly defined, and consider whether pre-existing
system configuration data is stored in the node. If you are sure that this node is not part of
another active SVC system, you can use the service panel to delete the existing system
information. After this action is complete, reissue the lsnodecandidate command and you
see that the node is listed.

10.9.3 Renaming a node


Use the chnode command to rename a node within the SVC system configuration, as shown
in Example 10-109.

Example 10-109 chnode -name command


IBM_2145:ITSO_SVC_DH8:superuser>chnode -name ITSO_SVC_DH8_SVC1N3 4

This command renames node ID 4 to ITSO_SVC_DH8_SVC1N3.

Name: The chnode command specifies the new name first. You can use letters A - Z and
a - z, numbers 0 - 9, the dash (-), and the underscore (_). The name can be 1 - 63
characters. However, the name cannot start with a number, dash, or the word “node”
because this prefix is reserved for SVC assignment only.

10.9.4 Deleting a node


Use the rmnode command to remove a node from the SVC clustered system configuration, as
shown in Example 10-110.

Example 10-110 rmnode command


IBM_2145:ITSO_SVC_DH8:superuser>rmnode SVC1N2

This command removes SVC1N2 from the SVC clustered system.

Because SVC1N2 also was the configuration node, the SVC transfers the configuration node
responsibilities to a surviving node within the I/O Group. Unfortunately, the PuTTY session
cannot be dynamically passed to the surviving node. Therefore, the PuTTY application loses
communication and closes automatically.


We must restart the PuTTY application to establish a secure session with the new
configuration node.

Important: If this node is the last node in an I/O Group and volumes are still assigned to
the I/O Group, the node is not deleted from the clustered system.

If this node is the last node in the system and the I/O Group has no remaining volumes, the
clustered system is destroyed and all virtualization information is lost. Any data that is still
required must be backed up or migrated before the system is destroyed.

10.9.5 Shutting down a node


Shutting down a single node within the clustered system might be necessary to perform
tasks, such as scheduled maintenance, while the SVC environment is left up and running.

Use the stopsystem -node command to shut down a single node, as shown in
Example 10-111.

Example 10-111 stopcluster -node command


IBM_2145:ITSO_SVC_DH8:superuser>stopsystem -node SVC1N3
Are you sure that you want to continue with the shut down?

This command shuts down node SVC1N3 in a graceful manner. When this node is shut down,
the other node in the I/O Group destages the contents of its cache and enters write-through
mode until the node is powered up and rejoins the clustered system.

Important: You do not need to stop FlashCopy mappings, remote copy relationships, and
data migration operations. The other node handles these activities, but be aware that the
system has a single point of failure now.

If this node is the last node in an I/O Group, all access to the volumes in the I/O Group is lost.
Verify that you want to shut down this node before this command is run. You must specify the
-force flag.
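
The following sketch shows the same command with the -force flag added. The node name is
taken from Example 10-111, and the flag placement is an assumption that you can verify with
stopsystem -h:

IBM_2145:ITSO_SVC_DH8:superuser>stopsystem -force -node SVC1N3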

By reissuing the lsnode command (as shown in Example 10-112 on page 635), we can see
that the node is now offline.

Example 10-112 lsnode command


IBM_2145:ITSO_SVC_DH8:superuser>lsnode -delim ,
id,name,UPS_serial_number,WWNN,status,IO_group_id,IO_group_name,config_node,UPS_unique_id,h
ardware,iscsi_name,iscsi_alias,panel_name,enclosure_id,canister_id,enclosure_serial_number
1,SVC2N1,1000739004,50050768010027E2,online,0,io_grp0,no,10000000000027E2,8G4,iqn.1986-03.c
om.ibm:2145.itsosvc1.SVC2N1,,108283,,,
2,SVC1N2,1000739005,5005076801005034,online,0,io_grp0,yes,1000000000005034,8G4,iqn.1986-03.
com.ibm:2145.itsosvc1.svc1n2,,110711,,,
3,SVC1N4,1000739006,500507680100505C,online,1,io_grp1,no,20400001C3240004,8G4,iqn.1986-03.c
om.ibm:2145.itsosvc1.svc1n4,,110775,,,
4,SVC1N3,1000739007,50050768010037E5,offline,1,io_grp1,no,10000000000037E5,8G4,iqn.1986-03.
com.ibm:2145.itsosvc1.svc1n3,,104643,,,
IBM_2145:ITSO_SVC_DH8:superuser>lsnode SVC1N3
CMMVC5782E The object specified is offline.


Restart: To restart the node manually, press the power-on button that is on the service
panel of the node.

We completed the tasks that are required to view, add, delete, rename, and shut down a node
within an SVC environment.

10.10 I/O Groups


In this section, we describe the tasks that you can perform at an I/O Group level.

10.10.1 Viewing I/O Group details


Use the lsiogrp command to view information about the I/O Groups that are defined within
the SVC environment, as shown in Example 10-113.

Example 10-113 I/O Group details


IBM_2145:ITSO_SVC_DH8:superuser>lsiogrp
id name node_count vdisk_count host_count site_id site_name
0 io_grp0 2 24 9
1 io_grp1 2 22 9
2 io_grp2 0 0 1
3 io_grp3 0 0 1
4 recovery_io_grp 0 0 0

In our example, the SVC predefines five I/O Groups. In a four-node clustered system (similar
to our example), only two I/O Groups are in use. The other I/O Groups (io_grp2 and io_grp3)
are for a six-node or eight-node clustered system.

The recovery I/O Group is a temporary home for volumes when all nodes in the I/O Group
that normally owns them experience multiple failures. By using this design, the volumes can
be moved to the recovery I/O Group and then into a working I/O Group. While temporarily
assigned to the recovery I/O Group, I/O access is not possible.

10.10.2 Renaming an I/O Group


Use the chiogrp command to rename an I/O Group, as shown in Example 10-114 on
page 636.

Example 10-114 chiogrp command


IBM_2145:ITSO_SVC_DH8:superuser>chiogrp -name io_grpA io_grp1

This command renames the I/O Group io_grp1 to io_grpA.

Name: The chiogrp command specifies the new name first.


If you want to provide a name, you can use letters A - Z, letters a - z, numbers 0 - 9, the
dash (-), and the underscore (_). The name can be 1 - 63 characters. However, the name
cannot start with a number, dash, or the word “iogrp” because this prefix is reserved for
SVC assignment only.


To see whether the renaming was successful, run the lsiogrp command again to see the
change.
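
Output similar to the following sketch (based on Example 10-113; the counts are illustrative)
confirms the new name:

IBM_2145:ITSO_SVC_DH8:superuser>lsiogrp
id name node_count vdisk_count host_count site_id site_name
0 io_grp0 2 24 9
1 io_grpA 2 22 9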

We completed the required tasks to rename an I/O Group.

10.10.3 Adding and removing hostiogrp


Mapping host objects to specific I/O Groups allows you to reach the maximum number of
hosts that is supported by an SVC clustered system. Use the addhostiogrp command to
map a specific host to a specific I/O Group, as shown in Example 10-115.

Example 10-115 addhostiogrp command


IBM_2145:ITSO_SVC_DH8:superuser>addhostiogrp -iogrp 1 Kanaga

The addhostiogrp command uses the following parameters:

򐂰 -iogrp iogrp_list | -iogrpall
Specify a list of one or more I/O Groups that must be mapped to the host. The -iogrp
parameter is mutually exclusive with the -iogrpall option, which specifies that all of
the I/O Groups must be mapped to the specified host.
򐂰 host_id_or_name
Identify, by ID or name, the host to which the I/O Groups must be mapped.

Use the rmhostiogrp command to unmap a specific host from a specific I/O Group, as shown in
Example 10-116.

Example 10-116 rmhostiogrp command


IBM_2145:ITSO_SVC_DH8:superuser>rmhostiogrp -iogrp 0 Kanaga

The rmhostiogrp command uses the following parameters:

򐂰 -iogrp iogrp_list | -iogrpall
Specify a list of one or more I/O Groups that must be unmapped from the host. The -iogrp
parameter is mutually exclusive with the -iogrpall option, which specifies that all of the
I/O Groups must be unmapped from the specified host.
򐂰 -force
If the removal of a host to I/O Group mapping results in the loss of volume to host
mappings, the command fails unless the -force flag is used. The -force flag overrides this
behavior and forces the deletion of the host to I/O Group mapping.
򐂰 host_id_or_name
Identify, by ID or name, the host from which the I/O Groups must be unmapped.

10.10.4 Listing I/O Groups


To list all of the I/O Groups that are mapped to the specified host and vice versa, use the
lshostiogrp command and specify the host name, as in our example, Kanaga, as shown in
Example 10-117.


Example 10-117 lshostiogrp command


IBM_2145:ITSO_SVC_DH8:superuser>lshostiogrp Kanaga
id name
1 io_grp1

To list all of the host objects that are mapped to the specified I/O Group, use the lsiogrphost
command, as shown in Example 10-118.

Example 10-118 lsiogrphost command


IBM_2145:ITSO_SVC_DH8:superuser> lsiogrphost io_grp1
id name
1 Nile
2 Kanaga
3 Siam

In Example 10-118, io_grp1 is the I/O Group name.

10.11 Managing authentication


In the following sections, we describe authentication administration.

10.11.1 Managing users by using the CLI


In this section, we describe how to operate and manage authentication by using the CLI. All
users must now be a member of a predefined user group. You can list those groups by using
the lsusergrp command, as shown in Example 10-119.

Example 10-119 lsusergrp command


IBM_2145:ITSO_SVC_DH8:superuser>lsusergrp
id name role remote
0 SecurityAdmin SecurityAdmin yes
1 Administrator Administrator no
2 CopyOperator CopyOperator no
3 Service Service yes
4 Monitor Monitor no
5 support Service no

Example 10-120 shows a simple example of creating a user. User John is added to the user
group Monitor with the password m0nitor.

Example 10-120 mkuser creates a user called John with password m0nitor
IBM_2145:ITSO_SVC_DH8:superuser>mkuser -name John -usergrp Monitor -password m0nitor
User, id [6], successfully created

Local users are users that are not authenticated by a remote authentication server. Remote
users are users that are authenticated by a remote central registry server.

The user groups include a defined authority role, as listed in Table 10-3.


Table 10-3 Authority roles

򐂰 SecurityAdmin
– Role: All commands.
– User: Superusers.

򐂰 VASAProvider
– Role: All commands except chauthservice, chldap, chldapserver, chsecurity, chuser,
chusergrp, mkldapserver, mkuser, mkusergrp, rmldapserver, rmuser, rmusergrp, and
setpwdreset.
– User: IBM Spectrum Control software uses this role to implement the VMware virtual
volumes function.

򐂰 Administrator
– Role: All commands except chauthservice, mkuser, rmuser, chuser, mkusergrp,
rmusergrp, chusergrp, and setpwdreset.
– User: Administrator who controls the SVC.

򐂰 CopyOperator
– Role: All display commands and the following commands: prestartfcconsistgrp,
startfcconsistgrp, stopfcconsistgrp, chfcconsistgrp, prestartfcmap, startfcmap,
stopfcmap, chfcmap, startrcconsistgrp, stoprcconsistgrp, switchrcconsistgrp,
chrcconsistgrp, startrcrelationship, stoprcrelationship, switchrcrelationship,
chrcrelationship, and chpartnership. In addition, all commands allowed by the
Monitor role.
– User: Controls all of the copy functionality of the cluster.

򐂰 Service
– Role: All display commands and the following commands: applysoftware, setlocale,
addnode, rmnode, cherrstate, writesernum, detectmdisk, includemdisk, clearerrlog,
cleardumps, settimezone, stopsystem, startstats, and settime. In addition, all
commands allowed by the Monitor role.
– User: Performs service maintenance and other hardware tasks on the system.

򐂰 Monitor
– Role: All display commands and the following commands: finderr, dumperrlog,
dumpinternallog, chcurrentuser, ping, and svcconfig backup.
– User: Needs view access only.

10.11.2 Managing user roles and groups


Role-based security commands are used to restrict the administrative abilities of a user. We
cannot create user roles, but we can create user groups and assign a predefined role to our
group.

As of SVC 6.3, you can connect to the clustered system by using the same user name with
which you log in to an SVC GUI.

To view the user roles on your system, use the lsusergrp command, as shown in
Example 10-121.

Example 10-121 lsusergrp command


IBM_2145:ITSO_SVC_DH8:superuser>lsusergrp
id name role remote
0 SecurityAdmin SecurityAdmin no
1 Administrator Administrator no
2 CopyOperator CopyOperator no
3 Service Service no
4 Monitor Monitor no
5 VASAProvider VasaProvider no
6 SVC_LDAP_CopyOperator CopyOperator yes

To view the defined users and the user groups to which they belong, use the lsuser
command, as shown in Example 10-122.


Example 10-122 lsuser command


IBM_2145:ITSO_SVC_DH8:superuser>lsuser -delim ,
id,name,password,ssh_key,remote,usergrp_id,usergrp_name
0,superuser,yes,no,no,0,SecurityAdmin
1,superuser,yes,yes,no,0,SecurityAdmin
2,Torben,yes,no,no,0,SecurityAdmin
3,Massimo,yes,no,no,1,Administrator
4,Christian,yes,no,no,1,Administrator
5,Alejandro,yes,no,no,1,Administrator
6,John,yes,no,no,4,Monitor

10.11.3 Changing a user


To change user passwords, run the chuser command.

By using the chuser command, you can modify a user. You can rename a user, assign a new
password (if you are logged on with administrative privileges), and move a user from one user
group to another user group. However, be aware that a user can be a member of only one
group at a time.
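
For example, the following sketch moves the user John to the CopyOperator group and then
assigns a new password. The parameter names follow the conventions of the mkuser command
that is shown in Example 10-120; verify them with chuser -h before you use them:

IBM_2145:ITSO_SVC_DH8:superuser>chuser -usergrp CopyOperator John
IBM_2145:ITSO_SVC_DH8:superuser>chuser -password c0py0per John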

10.11.4 Audit log command


The audit log is helpful to show the commands that were entered on a system. Most action
commands that are issued by the old or new CLI are recorded in the audit log.

The native GUI performs actions by using the CLI programs.

The SVC console performs actions by issuing Common Information Model (CIM) commands
to the CIM object manager (CIMOM), which then runs the CLI programs.

Actions that are performed by using the native GUI and the SVC Console are recorded in the
audit log.

The following commands are not audited:


򐂰 dumpconfig
򐂰 cpdumps
򐂰 cleardumps
򐂰 finderr
򐂰 dumperrlog
򐂰 dumpinternallog
򐂰 svcservicetask dumperrlog
򐂰 svcservicetask finderror

The audit log contains approximately 1 MB of data, which can contain about 6,000
average-length commands. When this log is full, the system copies it to a new file in the
/dumps/audit directory on the configuration node and resets the in-memory audit log.

To display entries from the audit log, use the catauditlog -first 5 command to return a list
of five in-memory audit log entries, as shown in Example 10-123.

Example 10-123 catauditlog command


IBM_2145:ITSO_SVC_DH8:superuser>catauditlog -first 5
audit_seq_no timestamp cluster_user ssh_ip_address result res_obj_id action_cmd


459 110928150506 superuser 10.18.228.173 0 6 svctask mkuser


-name John -usergrp Monitor -password '######'
460 110928160353 superuser 10.18.228.173 0 7 svctask
mkmdiskgrp -name DS5000-2 -ext 256
461 110928160535 superuser 10.18.228.173 0 1 svctask mkhost
-name hostone -hbawwpn 210100E08B251DD4 -force -mask 1001
462 110928160755 superuser 10.18.228.173 0 1 svctask mkvdisk
-iogrp 0 -mdiskgrp 3 -size 10 -unit gb -vtype striped -autoexpand -grainsize 32 -rsize 20%
463 110928160817 superuser 10.18.228.173 0 svctask rmvdisk
1

If you must dump the contents of the in-memory audit log to a file on the current configuration
node, use the dumpauditlog command. This command does not provide any feedback; it
provides the prompt only. To obtain a list of the audit log dumps, use the lsdumps command,
as shown in Example 10-124.

Example 10-124 lsdumps command


IBM_2145:ITSO_SVC_DH8:superuser>lsdumps
id filename
0 dump.110711.110914.182844
1 svc.config.cron.bak_108283
2 sel.110711.trc
3 endd.trc
4 rtc.race_mq_log.txt.110711.trc
5 dump.110711.110920.102530
6 ethernet.110711.trc
7 svc.config.cron.bak_110711
8 svc.config.cron.xml_110711
9 svc.config.cron.log_110711
10 svc.config.cron.sh_110711
11 110711.trc

10.12 Managing Copy Services


In the following sections, we describe how to manage Copy Services.

10.12.1 FlashCopy operations


In this section, we use a scenario to show how to use commands with PuTTY to perform
FlashCopy. For information about other commands, see the IBM System Storage Open
Software Family SAN Volume Controller: Command-Line Interface User’s Guide, GC27-2287.

Scenario description
We use the scenario that is described in this section in both the CLI section and the GUI
section. In this scenario, we want to FlashCopy the following volumes:
򐂰 DB_Source: Database files
򐂰 Log_Source: Database log files
򐂰 App_Source: Application files

We create Consistency Groups to handle the FlashCopy of DB_Source and Log_Source
because data integrity must be kept on DB_Source and Log_Source.


In our scenario, the application files are independent of the database; therefore, we create a
single FlashCopy mapping for App_Source. We make two FlashCopy targets for DB_Source
and Log_Source and, therefore, two Consistency Groups. The scenario is shown in
Figure 10-6 on page 643.

Figure 10-6 FlashCopy scenario

10.12.2 Setting up FlashCopy


We created the source and target volumes. The following source and target volumes are
identical in size, which is a requirement of the FlashCopy function:
򐂰 DB_Source, DB_Target1, and DB_Target2
򐂰 Log_Source, Log_Target1, and Log_Target2
򐂰 App_Source and App_Target1

Complete the following steps to set up the FlashCopy:


1. Create the following FlashCopy Consistency Groups:
– FCCG1
– FCCG2
2. Create the following FlashCopy mappings for source volumes:
– DB_Source FlashCopy to DB_Target1; the mapping name is DB_Map1.
– DB_Source FlashCopy to DB_Target2; the mapping name is DB_Map2.
– Log_Source FlashCopy to Log_Target1; the mapping name is Log_Map1.
– Log_Source FlashCopy to Log_Target2; the mapping name is Log_Map2.
– App_Source FlashCopy to App_Target1; the mapping name is App_Map1.
– All of the mappings use a background copy rate of 50.


10.12.3 Creating a FlashCopy Consistency Group


Use the command mkfcconsistgrp to create a new FlashCopy Consistency Group. The ID of
the new group is returned. If you created several FlashCopy mappings for a group of volumes
that contain elements of data for the same application, it might be convenient to assign these
mappings to a single FlashCopy Consistency Group. Then, you can issue a single prepare or
start command for the whole group so that, for example, all files for a particular database are
copied at the same time.

In Example 10-125, the FCCG1 and FCCG2 Consistency Groups are created to hold the
FlashCopy maps of DB and Log. This step is important for FlashCopy on database
applications because it helps to maintain data integrity during FlashCopy.

Example 10-125 Creating two FlashCopy Consistency Groups


IBM_2145:ITSO_SVC3:superuser>mkfcconsistgrp -name FCCG1
FlashCopy Consistency Group, id [1], successfully created

IBM_2145:ITSO_SVC3:superuser>mkfcconsistgrp -name FCCG2


FlashCopy Consistency Group, id [2], successfully created

In Example 10-126, we checked the status of the Consistency Groups. Each Consistency
Group has a status of empty.

Example 10-126 Checking the status


IBM_2145:ITSO_SVC3:superuser>lsfcconsistgrp
id name status
1 FCCG1 empty
2 FCCG2 empty

If you want to change the name of a Consistency Group, you can use the chfcconsistgrp
command. Type chfcconsistgrp -h for help with this command.

10.12.4 Creating a FlashCopy mapping


To create a FlashCopy mapping, use the mkfcmap command. This command creates a
FlashCopy mapping that maps a source volume to a target volume to prepare for subsequent
copying.

When this command is run, a FlashCopy mapping logical object is created. This mapping
persists until it is deleted. The mapping specifies the source and destination volumes. The
destination must be identical in size to the source or the mapping fails. Issue the lsvdisk
-bytes command to find the exact size of the source volume for which you want to create a
target disk of the same size.
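
For example, the following sketch displays the exact size of one of our source volumes; the
capacity field in the output is reported in bytes:

IBM_2145:ITSO_SVC3:superuser>lsvdisk -bytes DB_Source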

In a single mapping, source and destination cannot be on the same volume. A mapping is
triggered at the point in time when the copy is required. The mapping can optionally be given
a name and assigned to a Consistency Group. These groups of mappings can be triggered at
the same time, which enables multiple volumes to be copied at the same time and creates a
consistent copy of multiple disks. A consistent copy of multiple disks is required for database
products in which the database and log files are on separate disks.

If no Consistency Group is defined, the mapping is assigned to the default group 0, which is a
special group that cannot be started as a whole. Mappings in this group can be started only
on an individual basis.


The background copy rate specifies the priority that must be given to completing the copy. If 0
is specified, the copy does not proceed in the background. The default is 50.

Tip: You can use a parameter to delete FlashCopy mappings automatically after the
background copy is completed (when the mapping gets to the idle_or_copied state). Use
the following command:
mkfcmap -autodelete

This parameter does not delete a mapping that is in a cascade and has dependent mappings
because such a mapping cannot get to the idle_or_copied state in this situation.

Example 10-127 shows the creation of the first FlashCopy mapping for DB_Source,
Log_Source, and App_Source.

Example 10-127 Create the first FlashCopy mapping for DB_Source, Log_Source, and App_Source
IBM_2145:ITSO_SVC3:superuser>mkfcmap -source DB_Source -target DB_Target1 -name DB_Map1
-consistgrp FCCG1
FlashCopy Mapping, id [0], successfully created

IBM_2145:ITSO_SVC3:superuser>mkfcmap -source Log_Source -target Log_Target1 -name Log_Map1


-consistgrp FCCG1
FlashCopy Mapping, id [1], successfully created

IBM_2145:ITSO_SVC3:superuser>mkfcmap -source App_Source -target App_Target1 -name App_Map1


FlashCopy Mapping, id [2], successfully created

Example 10-128 shows the command to create a second FlashCopy mapping for volume
DB_Source and volume Log_Source.

Example 10-128 Create more FlashCopy mappings


IBM_2145:ITSO_SVC3:superuser>mkfcmap -source DB_Source -target DB_Target2 -name DB_Map2
-consistgrp FCCG2
FlashCopy Mapping, id [3], successfully created

IBM_2145:ITSO_SVC3:superuser>mkfcmap -source Log_Source -target Log_Target2 -name Log_Map2


-consistgrp FCCG2
FlashCopy Mapping, id [4], successfully created

Example 10-129 shows the result of these FlashCopy mappings. The status of the mapping is
idle_or_copied.

Example 10-129 Check the result of Multiple Target FlashCopy mappings


IBM_2145:ITSO_SVC3:superuser>lsfcmap
id name source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id
group_name status progress copy_rate clean_progress incremental partner_FC_id
partner_FC_name restoring start_time rc_controlled
0 DB_Map1 3 DB_Source 4 DB_Target1 1
FCCG1 idle_or_copied 0 50 100 off
no no
1 Log_Map1 6 Log_Source 7 Log_Target1 1
FCCG1 idle_or_copied 0 50 100 off
no no
2 App_Map1 9 App_Source 10 App_Target1
idle_or_copied 0 50 100 off
no no


3 DB_Map2 3 DB_Source 5 DB_Target2 2


FCCG2 idle_or_copied 0 50 100 off
no no
4 Log_Map2 6 Log_Source 8 Log_Target2 2
FCCG2 idle_or_copied 0 50 100 off
no no
IBM_2145:ITSO_SVC3:superuser>lsfcconsistgrp
id name status
1 FCCG1 idle_or_copied
2 FCCG2 idle_or_copied

If you want to change the FlashCopy mapping, you can use the chfcmap command. Enter
chfcmap -h to get help with this command.

10.12.5 Preparing (pre-triggering) the FlashCopy mapping


Although the mappings were created, the cache still accepts data for the source volumes. You
can trigger the mapping only when the cache does not contain any data for FlashCopy source
volumes. You must issue a prestartfcmap command to prepare a FlashCopy mapping to
start. This command tells the SVC to flush the cache of any content for the source volume
and to pass through any further write data for this volume.

When the prestartfcmap command is run, the mapping enters the Preparing state. After the
preparation is complete, it changes to the Prepared state. At this point, the mapping is ready
for triggering. Preparing and the subsequent triggering are performed on a Consistency
Group basis.

Only mappings that belong to Consistency Group 0 can be prepared on their own because
Consistency Group 0 is a special group that contains the FlashCopy mappings that do not
belong to any Consistency Group. A FlashCopy must be prepared before it can be triggered.

In our scenario, App_Map1 is not in a Consistency Group. In Example 10-130, we show how to
start the preparation for App_Map1.

Example 10-130 Prepare a FlashCopy without a Consistency Group


IBM_2145:ITSO_SVC3:superuser>prestartfcmap App_Map1

IBM_2145:ITSO_SVC3:superuser>lsfcmap App_Map1
id 2
name App_Map1
source_vdisk_id 9
source_vdisk_name App_Source
target_vdisk_id 10
target_vdisk_name App_Target1
group_id
group_name
status prepared
progress 0
copy_rate 50
start_time
dependent_mappings 0
autodelete off
clean_progress 0
clean_rate 50
incremental off
difference 0
grain_size 256


IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
rc_controlled no

Another option is to add the -prep parameter to the startfcmap command, which prepares
the mapping and then starts the FlashCopy.
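
For example, the following sketch prepares and starts App_Map1 in a single step:

IBM_2145:ITSO_SVC3:superuser>startfcmap -prep App_Map1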

In Example 10-130 on page 646, we also show how to check the status of the current
FlashCopy mapping. The status of App_Map1 is prepared.

10.12.6 Preparing (pre-triggering) the FlashCopy Consistency Group


Use the prestartfcconsistgrp command to prepare a FlashCopy Consistency Group. As
described in 10.12.5, “Preparing (pre-triggering) the FlashCopy mapping” on page 646, this
command flushes the cache of any data that is destined for the source volume and forces the
cache into the write-through mode until the mapping is started. The difference is that this
command prepares a group of mappings (at a Consistency Group level) instead of one
mapping.

When you assign several mappings to a FlashCopy Consistency Group, you must issue only
a single prepare command for the whole group to prepare all of the mappings at one time.

Example 10-131 shows how we prepare the Consistency Groups for DB and Log and check
the result. After the command runs all of the FlashCopy maps that we have, all of the maps
and Consistency Groups are in the prepared status. Now, we are ready to start the
FlashCopy.

Example 10-131 Prepare FlashCopy Consistency Groups


IBM_2145:ITSO_SVC3:superuser>prestartfcconsistgrp FCCG1
IBM_2145:ITSO_SVC3:superuser>prestartfcconsistgrp FCCG2

IBM_2145:ITSO_SVC3:superuser>lsfcconsistgrp FCCG1
id 1
name FCCG1
status prepared
autodelete off
FC_mapping_id 0
FC_mapping_name DB_Map1
FC_mapping_id 1
FC_mapping_name Log_Map1

IBM_2145:ITSO_SVC3:superuser>lsfcconsistgrp
id name status
1 FCCG1 prepared
2 FCCG2 prepared

10.12.7 Starting (triggering) FlashCopy mappings


The startfcmap command is used to start a single FlashCopy mapping. When a single
FlashCopy mapping is started, a point-in-time copy of the source volume is created on the
target volume.


When the FlashCopy mapping is triggered, it enters the Copying state. The way that the copy
proceeds depends on the background copy rate attribute of the mapping. If the mapping is set
to 0 (NOCOPY), only data that is then updated on the source is copied to the destination. We
suggest that you use this scenario as a backup copy while the mapping exists in the Copying
state. If the copy is stopped, the destination is unusable.

If you want a duplicate copy of the source at the destination, set the background copy rate
greater than 0. By setting this rate, the system copies all of the data (even unchanged data) to
the destination and eventually reaches the idle_or_copied state. After this data is copied, you
can delete the mapping and have a usable point-in-time copy of the source at the destination.

In Example 10-132, App_Map1 changes to the copying status after the FlashCopy is started.

Example 10-132 Start App_Map1


IBM_2145:ITSO_SVC3:superuser>startfcmap App_Map1
IBM_2145:ITSO_SVC3:superuser>lsfcmap
id name source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id
group_name status progress copy_rate clean_progress incremental partner_FC_id
partner_FC_name restoring start_time rc_controlled
0 DB_Map1 3 DB_Source 4 DB_Target1 1
FCCG1 prepared 0 50 0 off
no no
1 Log_Map1 6 Log_Source 7 Log_Target1 1
FCCG1 prepared 0 50 0 off
no no
2 App_Map1 9 App_Source 10 App_Target1
copying 0 50 100 off no
110929113407 no
3 DB_Map2 3 DB_Source 5 DB_Target2 2
FCCG2 prepared 0 50 0 off
no no
4 Log_Map2 6 Log_Source 8 Log_Target2 2
FCCG2 prepared 0 50 0 off
no no
IBM_2145:ITSO_SVC3:superuser>lsfcmap App_Map1
id 2
name App_Map1
source_vdisk_id 9
source_vdisk_name App_Source
target_vdisk_id 10
target_vdisk_name App_Target1
group_id
group_name
status copying
progress 0
copy_rate 50
start_time 110929113407
dependent_mappings 0
autodelete off
clean_progress 100
clean_rate 50
incremental off
difference 0
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no


rc_controlled no

10.12.8 Starting (triggering) the FlashCopy Consistency Group


Run the startfcconsistgrp command; afterward, the database can be resumed, as
shown in Example 10-133. We created two point-in-time consistent copies of the DB and Log
volumes. After this command is run, the Consistency Group and the FlashCopy maps are all
in the copying status.

Example 10-133 Start FlashCopy Consistency Group


IBM_2145:ITSO_SVC3:superuser>startfcconsistgrp FCCG1
IBM_2145:ITSO_SVC3:superuser>startfcconsistgrp FCCG2
IBM_2145:ITSO_SVC3:superuser>lsfcconsistgrp FCCG1
id 1
name FCCG1
status copying
autodelete off
FC_mapping_id 0
FC_mapping_name DB_Map1
FC_mapping_id 1
FC_mapping_name Log_Map1
IBM_2145:ITSO_SVC3:superuser>
IBM_2145:ITSO_SVC3:superuser>lsfcconsistgrp
id name status
1 FCCG1 copying
2 FCCG2 copying

10.12.9 Monitoring the FlashCopy progress


To monitor the background copy progress of the FlashCopy mappings, run the
lsfcmapprogress command for each FlashCopy mapping.

Alternatively, you can also query the copy progress by using the lsfcmap command. As
shown in Example 10-134, DB_Map1 returns information that the background copy is 23%
completed and Log_Map1 returns information that the background copy is 41% completed.
DB_Map2 returns information that the background copy is 5% completed and Log_Map2 returns
information that the background copy is 4% completed.

Example 10-134 Monitoring the background copy progress


IBM_2145:ITSO_SVC3:superuser>lsfcmapprogress DB_Map1
id progress
0 23
IBM_2145:ITSO_SVC3:superuser>lsfcmapprogress Log_Map1
id progress
1 41
IBM_2145:ITSO_SVC3:superuser>lsfcmapprogress Log_Map2
id progress
4 4
IBM_2145:ITSO_SVC3:superuser>lsfcmapprogress DB_Map2
id progress
3 5
IBM_2145:ITSO_SVC3:superuser>lsfcmapprogress App_Map1
id progress
2 10
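
If you prefer to watch the background copy from a management workstation instead of
rerunning the command manually, a simple SSH polling loop, such as the following sketch,
can be used. The cluster name, user, and sleep interval are assumptions for this example:

while true
do
  ssh superuser@ITSO_SVC3 lsfcmapprogress DB_Map1
  sleep 60
done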


When the background copy completes, the FlashCopy mapping enters the idle_or_copied
state. When all of the FlashCopy mappings in a Consistency Group enter this status, the
Consistency Group is at the idle_or_copied status.

When in this state, the FlashCopy mapping can be deleted and the target disk can be used
independently if, for example, another target disk is to be used for the next FlashCopy of the
particular source volume.

10.12.10 Stopping the FlashCopy mapping


The stopfcmap command is used to stop a FlashCopy mapping. By using this command, you
can stop an active mapping (copying) or suspended mapping. When this command is run, it
stops a single FlashCopy mapping.

Tip: If you want to stop a mapping or group in a Multiple Target FlashCopy environment,
consider whether you want to keep any of the dependent mappings. If you do not want to
keep these mappings, run the stop command with the -force parameter. This command
stops all of the dependent maps and negates the need for the stopping copy process to
run.
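
For example, the following sketch stops a mapping from our scenario together with its
dependent mappings; use it only if you do not want to keep those mappings:

IBM_2145:ITSO_SVC3:superuser>stopfcmap -force DB_Map2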

When a FlashCopy mapping is stopped, the target volume becomes invalid. The target
volume is set offline by the SVC. The FlashCopy mapping must be prepared again or
retriggered to bring the target volume online again.

Important: Stop a FlashCopy mapping only when the data on the target volume is not in
use, or when you want to modify the FlashCopy mapping. When a FlashCopy mapping is
stopped, the target volume becomes invalid and is set offline by the SVC if the mapping is
still in the copying state and its progress is less than 100.

Example 10-135 shows how to stop the App_Map1 FlashCopy. The status of App_Map1 changed
to idle_or_copied.

Example 10-135 Stop App_Map1 FlashCopy


IBM_2145:ITSO_SVC3:superuser>stopfcmap App_Map1

IBM_2145:ITSO_SVC3:superuser>lsfcmap App_Map1
id 2
name App_Map1
source_vdisk_id 9
source_vdisk_name App_Source
target_vdisk_id 10
target_vdisk_name App_Target1
group_id
group_name
status idle_or_copied
progress 100
copy_rate 50
start_time 110929113407
dependent_mappings 0
autodelete off
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256


IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
rc_controlled no

10.12.11 Stopping the FlashCopy Consistency Group


The stopfcconsistgrp command is used to stop any active FlashCopy Consistency Group. It
stops all mappings in a Consistency Group. When a FlashCopy Consistency Group is
stopped for all mappings that are not 100% copied, the target volumes become invalid and
are set offline by the SVC. The FlashCopy Consistency Group must be prepared again and
restarted to bring the target volumes online again.

Important: Stop a FlashCopy mapping only when the data on the target volume is not in
use or when you want to modify the FlashCopy Consistency Group. When a Consistency
Group is stopped, the target volume might become invalid and be set offline by the SVC,
depending on the state of the mapping.

As shown in Example 10-136, we stop the FCCG1 and FCCG2 Consistency Groups. Because
the background copy operations had already completed for all of the FlashCopy mappings,
the Consistency Groups and their mapping relationships show a status of idle_or_copied
rather than stopped.

Example 10-136 Stop FCCG1 and FCCG2 Consistency Groups


IBM_2145:ITSO_SVC3:superuser>stopfcconsistgrp FCCG1

IBM_2145:ITSO_SVC3:superuser>stopfcconsistgrp FCCG2

IBM_2145:ITSO_SVC3:superuser>lsfcconsistgrp
id name status
1 FCCG1 idle_or_copied
2 FCCG2 idle_or_copied

IBM_2145:ITSO_SVC3:superuser>lsfcmap -delim ,
id,name,source_vdisk_id,source_vdisk_name,target_vdisk_id,target_vdisk_name,group_id,group_
name,status,progress,copy_rate,clean_progress,incremental,partner_FC_id,partner_FC_name,res
toring,start_time,rc_controlled
0,DB_Map1,3,DB_Source,4,DB_Target1,1,FCCG1,idle_or_copied,100,50,100,off,,,no,110929113806,
no
1,Log_Map1,6,Log_Source,7,Log_Target1,1,FCCG1,idle_or_copied,100,50,100,off,,,no,1109291138
06,no
2,App_Map1,9,App_Source,10,App_Target1,,,idle_or_copied,100,50,100,off,,,no,110929113407,no
3,DB_Map2,3,DB_Source,5,DB_Target2,2,FCCG2,idle_or_copied,100,50,100,off,,,no,110929113806,
no
4,Log_Map2,6,Log_Source,8,Log_Target2,2,FCCG2,idle_or_copied,100,50,100,off,,,no,1109291138
06,no

10.12.12 Deleting the FlashCopy mapping


To delete a FlashCopy mapping, use the rmfcmap command. When the command is run, it
attempts to delete the specified FlashCopy mapping. If the FlashCopy mapping is stopped,
the command fails unless the -force flag is specified. If the mapping is active (copying), it
must first be stopped before it can be deleted.

Deleting a mapping deletes only the logical relationship between the two volumes. However,
when issued on an active FlashCopy mapping that uses the -force flag, the delete renders
the data on the FlashCopy mapping target volume as inconsistent.

Tip: If you want to use the target volume as a normal volume, monitor the background copy
progress until it is complete (100% copied) and, then, delete the FlashCopy mapping.
Another option is to set the -autodelete option when the FlashCopy mapping is created.

As shown in Example 10-137, we delete App_Map1.

Example 10-137 Delete App_Map1


IBM_2145:ITSO_SVC3:superuser>rmfcmap App_Map1

10.12.13 Deleting the FlashCopy Consistency Group


The rmfcconsistgrp command is used to delete a FlashCopy Consistency Group. When run,
this command deletes the specified Consistency Group. If mappings are members of the
group, the command fails unless the -force flag is specified.

If you also want to delete all of the mappings in the Consistency Group, first delete the
mappings and then delete the Consistency Group.

As shown in Example 10-138, we delete all of the maps and Consistency Groups and then
check the result.

Example 10-138 Remove the mappings and the Consistency Groups


IBM_2145:ITSO_SVC3:superuser>rmfcmap DB_Map1
IBM_2145:ITSO_SVC3:superuser>rmfcmap DB_Map2
IBM_2145:ITSO_SVC3:superuser>rmfcmap Log_Map1
IBM_2145:ITSO_SVC3:superuser>rmfcmap Log_Map2
IBM_2145:ITSO_SVC3:superuser>rmfcconsistgrp FCCG1
IBM_2145:ITSO_SVC3:superuser>rmfcconsistgrp FCCG2
IBM_2145:ITSO_SVC3:superuser>lsfcconsistgrp
IBM_2145:ITSO_SVC3:superuser>lsfcmap
IBM_2145:ITSO_SVC3:superuser>

10.12.14 Migrating a volume to a thin-provisioned volume


Complete the following steps to migrate a volume to a thin-provisioned volume:
1. Create a thin-provisioned, space-efficient target volume with the same size as the volume
that you want to migrate.
Example 10-139 shows the details of a volume with ID 11. It was created as a
thin-provisioned volume with the same size as the App_Source volume.

Example 10-139 lsvdisk 11 command


IBM_2145:ITSO_SVC3:superuser>lsvdisk 11
id 11
name App_Source_SE
IO_group_id 0


IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name Multi_Tier_Pool
capacity 10.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018281BEE00000000000000B
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 1
filesystem
mirror_write_priority latency

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name Multi_Tier_Pool
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 221.17MB
free_capacity 220.77MB
overallocation 4629
autoexpand on
warning 80
grainsize 32
se_copy yes
easy_tier on
easy_tier_status active
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 221.17MB

2. Define a FlashCopy mapping in which the non-thin-provisioned volume is the source and
the thin-provisioned volume is the target. Specify a copy rate as high as possible and
activate the -autodelete option for the mapping, as shown in Example 10-140.

Example 10-140 mkfcmap command

IBM_2145:ITSO_SVC3:superuser>mkfcmap -source App_Source -target App_Source_SE -name


MigrtoThinProv -copyrate 100 -autodelete
FlashCopy Mapping, id [0], successfully created


IBM_2145:ITSO_SVC3:superuser>lsfcmap 0
id 0
name MigrtoThinProv
source_vdisk_id 9
source_vdisk_name App_Source
target_vdisk_id 11
target_vdisk_name App_Source_SE
group_id
group_name
status idle_or_copied
progress 0
copy_rate 100
start_time
dependent_mappings 0
autodelete on
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
rc_controlled no

3. Run the prestartfcmap command and the lsfcmap MigrtoThinProv command, as shown
in Example 10-141.

Example 10-141 prestartfcmap and lsfcmap commands

IBM_2145:ITSO_SVC3:superuser>prestartfcmap MigrtoThinProv
IBM_2145:ITSO_SVC3:superuser>lsfcmap MigrtoThinProv
id 0
name MigrtoThinProv
source_vdisk_id 9
source_vdisk_name App_Source
target_vdisk_id 11
target_vdisk_name App_Source_SE
group_id
group_name
status prepared
progress 0
copy_rate 100
start_time
dependent_mappings 0
autodelete on
clean_progress 0
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
rc_controlled no


4. Run the startfcmap command, as shown in Example 10-142.

Example 10-142 startfcmap command


IBM_2145:ITSO_SVC3:superuser>startfcmap MigrtoThinProv

5. Monitor the copy process by using the lsfcmapprogress command, as shown in Example 10-143.

Example 10-143 lsfcmapprogress command


IBM_2145:ITSO_SVC3:superuser>lsfcmapprogress MigrtoThinProv
id progress
0 67

6. After the background copy completes, the FlashCopy mapping is deleted automatically because the -autodelete option was specified, as shown in Example 10-144.

Example 10-144 lsfcmap command

IBM_2145:ITSO_SVC3:superuser>lsfcmap MigrtoThinProv
id 0
name MigrtoThinProv
source_vdisk_id 9
source_vdisk_name App_Source
target_vdisk_id 11
target_vdisk_name App_Source_SE
group_id
group_name
status copying
progress 67
copy_rate 100
start_time 110929135848
dependent_mappings 0
autodelete on
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
rc_controlled no

IBM_2145:ITSO_SVC3:superuser>lsfcmapprogress MigrtoThinProv
CMMVC5804E The action failed because an object that was specified in the command does
not exist.
IBM_2145:ITSO_SVC3:superuser>

An independent copy of the source volume (App_Source) now exists on the thin-provisioned volume, and the migration is complete. Example 10-145 shows the source volume after the migration.

Example 10-145 lsvdisk App_Source

IBM_2145:ITSO_SVC3:superuser>lsvdisk App_Source
id 9
name App_Source
IO_group_id 0
IO_group_name io_grp0


status online
mdisk_grp_id 1
mdisk_grp_name Multi_Tier_Pool
capacity 10.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018281BEE000000000000009
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0
filesystem
mirror_write_priority latency

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name Multi_Tier_Pool
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status active
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 10.00GB

Real size: Regardless of the real size that you defined for the target thin-provisioned volume, after the copy completes, its real size is at least the capacity of the source volume.

To migrate a thin-provisioned volume to a fully allocated volume, you can follow the same
scenario.
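
For example, the following lines are a minimal sketch of that reverse migration. The fully allocated target volume App_Source_Full and the mapping name MigrToFull are hypothetical names that are used only for illustration:

mkvdisk -mdiskgrp Multi_Tier_Pool -iogrp 0 -size 10 -unit gb -name App_Source_Full
mkfcmap -source App_Source_SE -target App_Source_Full -name MigrToFull -copyrate 100 -autodelete
startfcmap -prep MigrToFull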


10.12.15 Reverse FlashCopy


You can create a reverse FlashCopy mapping without having to remove the original FlashCopy mapping, and without restarting the forward FlashCopy mapping from the beginning.

In Example 10-146, FCMAP_1 is the forward FlashCopy mapping and FCMAP_rev_1 is a reverse FlashCopy mapping. We also have a cascaded mapping, FCMAP_2, whose source is the target volume of FCMAP_1 and whose target is a separate volume that is named Volume_FC_T1.

In our example, we started FCMAP_1 and, later, FCMAP_2 after the environment was created.

As an example, we started FCMAP_rev_1 without specifying the -restore parameter, to show why this parameter is required and the message that is issued if you omit it:

CMMVC6298E The command failed because a target VDisk has dependent FlashCopy
mappings.

When a reverse FlashCopy mapping is started, you must use the -restore option to indicate
that you want to overwrite the data on the source disk of the forward mapping.

Example 10-146 Reverse FlashCopy


IBM_2145:ITSO_SVC3:superuser>lsvdisk
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity
type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count
copy_count fast_write_state se_copy_count RC_change
3 Volume_FC_S 0 io_grp0 online 1 Multi_Tier_Pool 10.00GB
striped 60050768018281BEE000000000000003 0 1
empty 0 0 no
4 Volume_FC_T_S1 0 io_grp0 online 1 Multi_Tier_Pool 10.00GB
striped 60050768018281BEE000000000000004 0 1
empty 0 0 no
5 Volume_FC_T1 0 io_grp0 online 1 Multi_Tier_Pool 10.00GB
striped 60050768018281BEE000000000000005 0 1
empty 0 0 no

IBM_2145:ITSO_SVC3:superuser>mkfcmap -source Volume_FC_S -target Volume_FC_T_S1 -name FCMAP_1 -copyrate 50
FlashCopy Mapping, id [0], successfully created

IBM_2145:ITSO_SVC3:superuser>mkfcmap -source Volume_FC_T_S1 -target Volume_FC_S -name FCMAP_rev_1 -copyrate 50
FlashCopy Mapping, id [1], successfully created

IBM_2145:ITSO_SVC3:superuser>mkfcmap -source Volume_FC_T_S1 -target Volume_FC_T1 -name FCMAP_2 -copyrate 50
FlashCopy Mapping, id [2], successfully created

IBM_2145:ITSO_SVC3:superuser>lsfcmap
id name source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id
group_name status progress copy_rate clean_progress incremental partner_FC_id
partner_FC_name restoring start_time rc_controlled
0 FCMAP_1 3 Volume_FC_S 4 Volume_FC_T_S1
idle_or_copied 0 50 100 off 1 FCMAP_rev_1
no no
1 FCMAP_rev_1 4 Volume_FC_T_S1 3 Volume_FC_S
idle_or_copied 0 50 100 off 0 FCMAP_1
no no


2 FCMAP_2 4 Volume_FC_T_S1 5 Volume_FC_T1


idle_or_copied 0 50 100 off
no no

IBM_2145:ITSO_SVC3:superuser>startfcmap -prep FCMAP_1

IBM_2145:ITSO_SVC3:superuser>startfcmap -prep FCMAP_2

IBM_2145:ITSO_SVC3:superuser>lsfcmap
id name source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id
group_name status progress copy_rate clean_progress incremental partner_FC_id
partner_FC_name restoring start_time rc_controlled
0 FCMAP_1 3 Volume_FC_S 4 Volume_FC_T_S1 copying 0
50 100 off 1 FCMAP_rev_1 no
no
1 FCMAP_rev_1 4 Volume_FC_T_S1 3 Volume_FC_S
idle_or_copied 0 50 100 off 0 FCMAP_1
no no
2 FCMAP_2 4 Volume_FC_T_S1 5 Volume_FC_T1
copying 4 50 100 off
no 110929143739 no

IBM_2145:ITSO_SVC3:superuser>startfcmap -prep FCMAP_rev_1
CMMVC6298E The command failed because a target VDisk has dependent FlashCopy mappings.

IBM_2145:ITSO_SVC3:superuser>startfcmap -prep -restore FCMAP_rev_1

IBM_2145:ITSO_SVC3:superuser>lsfcmap
id name source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id
group_name status progress copy_rate clean_progress incremental partner_FC_id
partner_FC_name restoring start_time rc_controlled
0 FCMAP_1 3 Volume_FC_S 4 Volume_FC_T_S1
copying 43 100 56 off 1 FCMAP_rev_1 no
110929151911 no
1 FCMAP_rev_1 4 Volume_FC_T_S1 3 Volume_FC_S
copying 56 100 43 off 0 FCMAP_1 yes
110929152030 no
2 FCMAP_2 4 Volume_FC_T_S1 5 Volume_FC_T1
copying 37 100 100 off no
110929151926 no

As you can see in Example 10-146 on page 657, FCMAP_rev_1 shows a restoring value of yes
while the FlashCopy mapping is copying. After it finishes copying, the restoring value field is
changed to no.

10.12.16 Split-stopping of FlashCopy maps


The stopfcmap command has a -split option. By using this option, the source volume of a map (which is 100% complete) can be removed from the head of a cascade when the map is stopped.

For example, if we have four volumes in a cascade (A → B → C → D), and the map A → B is
100% complete, the use of the stopfcmap -split mapAB command results in mapAB becoming
idle_or_copied and the remaining cascade becoming B → C → D.


Without the -split option, volume A remains at the head of the cascade (A → C → D).
Consider the following sequence of steps:
1. The user takes a backup that uses the mapping A → B. A is the production volume and B
is a backup.
2. At a later point, the user experiences corruption on A and so reverses the mapping to
B → A.
3. The user then takes another backup from the production disk A, which results in the
cascade B → A → C.

Stopping A → B without the -split option results in the cascade B → C. The backup disk B is
now at the head of this cascade.

When the user next wants to take a backup to B, the user can still start mapping A → B (by
using the -restore flag), but the user cannot then reverse the mapping to A (B → A or C →
A).

Stopping A → B with the -split option results in the cascade A → C. This action does not
result in the same problem because the production disk A is at the head of the cascade
instead of the backup disk B.
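
The following lines are a minimal sketch of this technique, assuming a hypothetical mapping that is named mapAB. Verify that the mapping reached 100% progress before you stop it with the -split option:

lsfcmapprogress mapAB
stopfcmap -split mapAB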

10.13 Metro Mirror operation

Intercluster example: The example in this section is for intercluster operations only.

If you want to set up intracluster operations, we highlight the parts of the following
procedure that you do not need to perform.

In the following scenario, we set up an intercluster Metro Mirror relationship between the SVC
system ITSO_SVC_DH8 primary site and the SVC system ITSO_SVC4 at the secondary site.
Table 10-4 shows the details of the volumes.

Table 10-4 Volume details


Content of volume Volumes at primary site Volumes at secondary site

Database files MM_DB_Pri MM_DB_Sec

Database log files MM_DBLog_Pri MM_DBLog_Sec

Application files MM_App_Pri MM_App_Sec

Because data consistency is needed across the MM_DB_Pri and MM_DBLog_Pri volumes, a Consistency Group that is named CG_W2K3_MM is created to handle the Metro Mirror relationships for them.

Because application files are independent of the database in this scenario, a stand-alone
Metro Mirror relationship is created for the MM_App_Pri volume. Figure 10-7 shows the Metro
Mirror setup.


Figure 10-7 Metro Mirror scenario

10.13.1 Setting up Metro Mirror


In this section, we assume that the source and target volumes were created and that the
inter-switch links (ISLs) and zoning are in place to enable the SVC clustered systems to
communicate.

Complete the following steps to set up the Metro Mirror:


1. Create an SVC partnership between ITSO_SVC_DH8 and ITSO_SVC4 on both of the SVC
clustered systems.
2. Create a Metro Mirror Consistency Group that is named CG_W2K3_MM.
3. Create the Metro Mirror relationship for MM_DB_Pri with the following settings:
– Master: MM_DB_Pri
– Auxiliary: MM_DB_Sec
– Auxiliary SVC system: ITSO_SVC4
– Name: MMREL1
– Consistency Group: CG_W2K3_MM
4. Create the Metro Mirror relationship for MM_DBLog_Pri with the following settings:
– Master: MM_DBLog_Pri
– Auxiliary: MM_DBLog_Sec
– Auxiliary SVC system: ITSO_SVC4
– Name: MMREL2
– Consistency Group: CG_W2K3_MM
5. Create the Metro Mirror relationship for MM_App_Pri with the following settings:
– Master: MM_App_Pri
– Auxiliary: MM_App_Sec


– Auxiliary SVC system: ITSO_SVC4


– Name: MMREL3

In the following section, we perform each step by using the CLI.

10.13.2 Creating a SAN Volume Controller partnership between ITSO_SVC_DH8 and ITSO_SVC4

We create the SVC partnership on both systems.

Intracluster Metro Mirror: If you are creating an intracluster Metro Mirror, do not perform
the next step; instead, go to 10.13.3, “Creating a Metro Mirror Consistency Group” on
page 664.

Pre-verification
To verify that both systems can communicate with each other, use the
lspartnershipcandidate command.

As shown in Example 10-147, ITSO_SVC4 is an eligible SVC system candidate at ITSO_SVC_DH8 for the SVC system partnership, and vice versa. Therefore, both systems can communicate with each other.

Example 10-147 Listing the available SVC systems for partnership


IBM_2145:ITSO_SVC_DH8:superuser>lspartnershipcandidate
id configured name
0000020061C06FCA no ITSO_SVC4
000002006AC03A42 no ITSO_SVC_DH8
0000020060A06FB8 no ITSO_SVC3
00000200A0C006B2 no ITSO-Storwize-V7000-2

IBM_2145:ITSO_SVC4:superuser>lspartnershipcandidate
id configured name
000002006AC03A42 no ITSO_SVC_DH8
0000020060A06FB8 no ITSO_SVC3
00000200A0C006B2 no ITSO-Storwize-V7000-2
000002006BE04FC4 no ITSO_SVC_DH8

Example 10-148 on page 661 shows the output of the lspartnership and lssystem commands before the Metro Mirror partnership is set up. We show this output so that you can compare it with the output after the partnership and relationships are configured.

As of SVC 6.3, you can create a partnership between the SVC system and the IBM Storwize
V7000 system. Be aware that to create this partnership, you must change the layer
parameter on the IBM Storwize V7000 system. It must be changed from storage to
replication with the chsystem command.
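The following command, run on the IBM Storwize V7000 system (not on the SVC), is a minimal sketch of that change; you can confirm the new value afterward in the layer field of the lssystem output:

chsystem -layer replication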

This parameter cannot be changed on the SVC system. It is fixed to the value of appliance,
as shown in Example 10-148 on page 661.

Example 10-148 Pre-verification of system configuration


IBM_2145:ITSO_SVC_DH8:superuser>lspartnership
id name location partnership bandwidth
000002006BE04FC4 ITSO_SVC_DH8 local


IBM_2145:ITSO_SVC4:superuser>lspartnership
id name location partnership bandwidth
0000020061C06FCA ITSO_SVC4 local

IBM_2145:ITSO_SVC_DH8:superuser>lssystem
id 000002006BE04FC4
name ITSO_SVC_DH8
location local
partnership
bandwidth
total_mdisk_capacity 766.5GB
space_in_mdisk_grps 766.5GB
space_allocated_to_vdisks 0.00MB
total_free_space 766.5GB
total_vdiskcopy_capacity 0.00MB
total_used_capacity 0.00MB
total_overallocation 0
total_vdisk_capacity 0.00MB
total_allocated_extent_capacity 1.50GB
statistics_status on
statistics_frequency 15
cluster_locale en_US
time_zone 520 US/Pacific
code_level 6.3.0.0 (build 54.0.1109090000)
console_IP 10.18.228.81:443
id_alias 000002006BE04FC4
gm_link_tolerance 300
gm_inter_cluster_delay_simulation 0
gm_intra_cluster_delay_simulation 0
gm_max_host_delay 5
email_reply
email_contact
email_contact_primary
email_contact_alternate
email_contact_location
email_contact2
email_contact2_primary
email_contact2_alternate
email_state stopped
inventory_mail_interval 0
cluster_ntp_IP_address
cluster_isns_IP_address
iscsi_auth_method chap
iscsi_chap_secret passw0rd
auth_service_configured no
auth_service_enabled no
auth_service_url
auth_service_user_name
auth_service_pwd_set no
auth_service_cert_set no
auth_service_type tip
relationship_bandwidth_limit 25
tier generic_ssd
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier generic_hdd
tier_capacity 766.50GB
tier_free_capacity 766.50GB
has_nas_key no
layer appliance


IBM_2145:ITSO_SVC4:superuser>lssystem
id 0000020061C06FCA
name ITSO_SVC4
location local
partnership
bandwidth
total_mdisk_capacity 768.0GB
space_in_mdisk_grps 0
space_allocated_to_vdisks 0.00MB
total_free_space 768.0GB
total_vdiskcopy_capacity 0.00MB
total_used_capacity 0.00MB
total_overallocation 0
total_vdisk_capacity 0.00MB
total_allocated_extent_capacity 0.00MB
statistics_status on
statistics_frequency 15
cluster_locale en_US
time_zone 520 US/Pacific
code_level 6.3.0.0 (build 54.0.1109090000)
console_IP 10.18.228.84:443
id_alias 0000020061C06FCA
gm_link_tolerance 300
gm_inter_cluster_delay_simulation 0
gm_intra_cluster_delay_simulation 0
gm_max_host_delay 5
email_reply
email_contact
email_contact_primary
email_contact_alternate
email_contact_location
email_contact2
email_contact2_primary
email_contact2_alternate
email_state stopped
inventory_mail_interval 0
cluster_ntp_IP_address
cluster_isns_IP_address
iscsi_auth_method none
iscsi_chap_secret
auth_service_configured no
auth_service_enabled no
auth_service_url
auth_service_user_name
auth_service_pwd_set no
auth_service_cert_set no
auth_service_type tip
relationship_bandwidth_limit 25
tier generic_ssd
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier generic_hdd
tier_capacity 0.00MB
tier_free_capacity 0.00MB
has_nas_key no
layer appliance


Partnership between clustered systems


In Example 10-149, a partnership is created between ITSO_SVC_DH8 and ITSO_SVC4 that
specifies that 50 MBps bandwidth is to be used for the background copy.

To check the status of the newly created partnership, run the lspartnership command. Note that the new partnership is only partially configured. It remains partially configured until the mkpartnership command is also run on the other system.

Example 10-149 Creating the partnership from ITSO_SVC_DH8 to ITSO_SVC4 and verifying it
IBM_2145:ITSO_SVC_DH8:superuser>mkpartnership -bandwidth 50 ITSO_SVC4
IBM_2145:ITSO_SVC_DH8:superuser>lspartnership
id name location partnership bandwidth
000002006BE04FC4 ITSO_SVC_DH8 local
0000020061C06FCA ITSO_SVC4 remote partially_configured_local 50

In Example 10-150, the partnership is created from ITSO_SVC4 back to ITSO_SVC_DH8, which specifies a bandwidth of 50 MBps to be used for the background copy.

After the partnership is created, verify that the partnership is fully configured on both systems
by reissuing the lspartnership command.

Example 10-150 Creating the partnership from ITSO_SVC4 to ITSO_SVC_DH8 and verifying it
IBM_2145:ITSO_SVC4:superuser>mkpartnership -bandwidth 50 ITSO_SVC_DH8
IBM_2145:ITSO_SVC4:superuser>lspartnership
id name location partnership bandwidth
0000020061C06FCA ITSO_SVC4 local
000002006BE04FC4 ITSO_SVC_DH8 remote fully_configured 50

10.13.3 Creating a Metro Mirror Consistency Group


In Example 10-151, we create the Metro Mirror Consistency Group by using the
mkrcconsistgrp command. This Consistency Group is used for the Metro Mirror relationships
of the database volumes that are named MM_DB_Pri and MM_DBLog_Pri. The Consistency
Group is named CG_W2K3_MM.

Example 10-151 Creating the Metro Mirror Consistency Group CG_W2K3_MM


IBM_2145:ITSO_SVC_DH8:superuser>mkrcconsistgrp -cluster ITSO_SVC4 -name CG_W2K3_MM
RC Consistency Group, id [0], successfully created

IBM_2145:ITSO_SVC_DH8:superuser>lsrcconsistgrp
id name master_cluster_id master_cluster_name aux_cluster_id
aux_cluster_name primary state relationship_count copy_type cycling_mode
0 CG_W2K3_MM 000002006BE04FC4 ITSO_SVC_DH8 0000020061C06FCA ITSO_SVC4
empty 0 empty_group none

10.13.4 Creating the Metro Mirror relationships


In Example 10-152, we create the Metro Mirror relationships MMREL1 and MMREL2 for MM_DB_Pri
and MM_DBLog_Pri. Also, we make them members of the Metro Mirror Consistency Group
CG_W2K3_MM. We use the lsvdisk command to list all of the volumes in the ITSO_SVC_DH8
system.


We then use the lsrcrelationshipcandidate command to show the volumes in the ITSO_SVC4 system. By using this command, we check the possible candidates for MM_DB_Pri.
After checking all of these conditions, we use the mkrcrelationship command to create the
Metro Mirror relationship.

To verify the new Metro Mirror relationships, list them with the lsrcrelationship command.

Example 10-152 Creating Metro Mirror relationships MMREL1 and MMREL2


IBM_2145:ITSO_SVC_DH8:superuser>lsvdisk -filtervalue name=MM*
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name
RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state se_copy_count
RC_change
0 MM_DB_Pri 0 io_grp0 online 0 Pool_DS3500-1 10.00GB striped
6005076801AF813F1000000000000031 0 1 empty 0 0
no
1 MM_DBLog_Pri 0 io_grp0 online 0 Pool_DS3500-1 10.00GB striped
6005076801AF813F1000000000000032 0 1 empty 0 0
no
2 MM_App_Pri 0 io_grp0 online 0 Pool_DS3500-1 10.00GB striped
6005076801AF813F1000000000000033 0 1 empty 0 0
no
IBM_2145:ITSO_SVC_DH8:superuser>lsrcrelationshipcandidate
id vdisk_name
0 MM_DB_Pri
1 MM_DBLog_Pri
2 MM_App_Pri

IBM_2145:ITSO_SVC_DH8:superuser>lsrcrelationshipcandidate -aux ITSO_SVC4 -master MM_DB_Pri


id vdisk_name
0 MM_DB_Sec
1 MM_DBLog_Sec
2 MM_App_Sec

IBM_2145:ITSO_SVC_DH8:superuser>mkrcrelationship -master MM_DB_Pri -aux MM_DB_Sec -cluster ITSO_SVC4 -consistgrp CG_W2K3_MM -name MMREL1
RC Relationship, id [0], successfully created
IBM_2145:ITSO_SVC_DH8:superuser>mkrcrelationship -master MM_Log_Pri -aux MM_Log_Sec -cluster ITSO_SVC4
-consistgrp CG_W2K3_MM -name MMREL2
RC Relationship, id [3], successfully created

IBM_2145:ITSO_SVC_DH8:superuser>lsrcrelationship
id name master_cluster_id master_cluster_name master_vdisk_id master_vdisk_name aux_cluster_id
aux_cluster_name aux_vdisk_id aux_vdisk_name primary consistency_group_id consistency_group_name state
bg_copy_priority progress copy_type cycling_mode
0 MMREL1 000002006BE04FC4 ITSO_SVC_DH8 0 MM_DB_Pri 0000020061C06FCA
ITSO_SVC4 0 MM_DB_Sec master 0 CG_W2K3_MM
inconsistent_stopped 50 0 metro none
3 MMREL2 000002006BE04FC4 ITSO_SVC_DH8 3 MM_Log_Pri
0000020061C06FCA ITSO_SVC4 3 MM_Log_Sec master 0
CG_W2K3_MM inconsistent_stopped 50 0 metro none

10.13.5 Creating a stand-alone Metro Mirror relationship for MM_App_Pri


In Example 10-153, we create the stand-alone Metro Mirror relationship MMREL3 for
MM_App_Pri. After the stand-alone Metro Mirror relationship is created, we check the status of
this Metro Mirror relationship.


The state of MMREL3 is consistent_stopped. MMREL3 is in this state because it was created with
the -sync option. The -sync option indicates that the secondary (auxiliary) volume is
synchronized with the primary (master) volume. Initial background synchronization is skipped
when this option is used, even though the volumes are not synchronized in this scenario.

To demonstrate the case in which the master and auxiliary volumes are already synchronized before the relationship is set up, we created the relationship for MM_App_Pri by using the -sync option.

Tip: Use the -sync option only when the target volume already contains an exact copy of the data on the source volume. When this option is used, no initial background copy occurs between the primary volume and the secondary volume.

MMREL2 and MMREL1 are in the inconsistent_stopped state because they were not created with
the -sync option. Therefore, their auxiliary volumes must be synchronized with their primary
volumes.

Example 10-153 Creating a stand-alone relationship and verifying it


IBM_2145:ITSO_SVC_DH8:superuser>mkrcrelationship -master MM_App_Pri -aux MM_App_Sec -sync
-cluster ITSO_SVC4 -name MMREL3
RC Relationship, id [2], successfully created

IBM_2145:ITSO_SVC_DH8:superuser>lsrcrelationship 2
id 2
name MMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
master_vdisk_id 2
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name

10.13.6 Starting Metro Mirror


Now that the Metro Mirror Consistency Group and relationships are in place, we are ready to
use Metro Mirror relationships in our environment.

When Metro Mirror is implemented, the goal is to reach a consistent and synchronized state
that can provide redundancy for a data set if a failure occurs that affects the production site.


In this section, we show how to stop and start stand-alone Metro Mirror relationships and
Consistency Groups.

Starting a stand-alone Metro Mirror relationship


In Example 10-154, we start a stand-alone Metro Mirror relationship that is named MMREL3.
Because the Metro Mirror relationship was in the Consistent stopped state and no updates
were made to the primary volume, the relationship quickly enters the
consistent_synchronized state.

Example 10-154 Starting the stand-alone Metro Mirror relationship


IBM_2145:ITSO_SVC_DH8:superuser>startrcrelationship MMREL3
IBM_2145:ITSO_SVC_DH8:superuser>lsrcrelationship MMREL3
id 2
name MMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
master_vdisk_id 2
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name

10.13.7 Starting a Metro Mirror Consistency Group


In Example 10-155 on page 667, we start the Metro Mirror Consistency Group CG_W2K3_MM.
Because the Consistency Group was in the Inconsistent stopped state, it enters the
Inconsistent copying state until the background copy completes for all of the relationships in
the Consistency Group.

Upon completion of the background copy, it enters the consistent_synchronized state.

Example 10-155 Starting the Metro Mirror Consistency Group


IBM_2145:ITSO_SVC_DH8:superuser>startrcconsistgrp CG_W2K3_MM
IBM_2145:ITSO_SVC_DH8:superuser>lsrcconsistgrp
id name master_cluster_id master_cluster_name aux_cluster_id aux_cluster_name
primary state relationship_count copy_type cycling_mode
0 CG_W2K3_MM 000002006BE04FC4 ITSO_SVC_DH8 0000020061C06FCA ITSO_SVC4
master inconsistent_copying 2 metro none


10.13.8 Monitoring the background copy progress


To monitor the background copy progress, we can use the lsrcrelationship command. This
command shows all of the defined Metro Mirror relationships if it is used without any
arguments. In the command output, progress indicates the current background copy
progress. Our Metro Mirror relationship is shown in Example 10-156.

Using SNMP traps: Setting up SNMP traps for the SVC enables automatic notification
when Metro Mirror Consistency Groups or relationships change their state.
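
For example, the following command is a minimal sketch of defining an SNMP server on the SVC; the IP address 10.18.228.100 and the public community string are illustrative values only:

mksnmpserver -ip 10.18.228.100 -community public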

Example 10-156 Monitoring the background copy progress example


IBM_2145:ITSO_SVC_DH8:superuser>lsrcrelationship MMREL1
id 0
name MMREL1
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
master_vdisk_id 0
master_vdisk_name MM_DB_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 0
aux_vdisk_name MM_DB_Sec
primary master
consistency_group_id 0
consistency_group_name CG_W2K3_MM
state inconsistent_copying
bg_copy_priority 50
progress 81
freeze_time
status online
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name

IBM_2145:ITSO_SVC_DH8:superuser>lsrcrelationship MMREL2
id 3
name MMREL2
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
master_vdisk_id 3
master_vdisk_name MM_Log_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 3
aux_vdisk_name MM_Log_Sec
primary master
consistency_group_id 0
consistency_group_name CG_W2K3_MM
state inconsistent_copying
bg_copy_priority 50
progress 82
freeze_time
status online


sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name

When all Metro Mirror relationships complete the background copy, the Consistency Group
enters the consistent_synchronized state, as shown in Example 10-157.

Example 10-157 Listing the Metro Mirror Consistency Group


IBM_2145:ITSO_SVC_DH8:superuser>lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary master
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name MMREL1
RC_rel_id 3
RC_rel_name MMREL2

10.13.9 Stopping and restarting Metro Mirror


In this section and in the following sections, we describe how to stop, restart, and change the
direction of the stand-alone Metro Mirror relationships and the Consistency Group.

10.13.10 Stopping a stand-alone Metro Mirror relationship


Example 10-158 on page 669 shows how to stop the stand-alone Metro Mirror relationship
while enabling access (write I/O) to the primary and secondary volumes. It also shows the
relationship entering the idling state.

Example 10-158 Stopping stand-alone Metro Mirror relationship and enabling access to the secondary
IBM_2145:ITSO_SVC_DH8:superuser>stoprcrelationship -access MMREL3
IBM_2145:ITSO_SVC_DH8:superuser>lsrcrelationship MMREL3
id 2
name MMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
master_vdisk_id 2
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4


aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary
consistency_group_id
consistency_group_name
state idling
bg_copy_priority 50
progress
freeze_time
status
sync in_sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name

10.13.11 Stopping a Metro Mirror Consistency Group


Example 10-159 shows how to stop the Metro Mirror Consistency Group without specifying
the -access flag. The Consistency Group enters the consistent_stopped state.

Example 10-159 Stopping a Metro Mirror Consistency Group


IBM_2145:ITSO_SVC_DH8:superuser>stoprcconsistgrp CG_W2K3_MM
IBM_2145:ITSO_SVC_DH8:superuser>lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary master
state consistent_stopped
relationship_count 2
freeze_time
status
sync in_sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name MMREL1
RC_rel_id 3
RC_rel_name MMREL2

If we want to enable access (write I/O) to the secondary volume later, we reissue the
stoprcconsistgrp command and specify the -access flag. The Consistency Group changes
to the idling state, as shown in Example 10-160.

Example 10-160 Stopping a Metro Mirror Consistency Group and enabling access to the secondary
IBM_2145:ITSO_SVC_DH8:superuser>stoprcconsistgrp -access CG_W2K3_MM
IBM_2145:ITSO_SVC_DH8:superuser>lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006BE04FC4


master_cluster_name ITSO_SVC_DH8
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary
state idling
relationship_count 2
freeze_time
status
sync in_sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name MMREL1
RC_rel_id 3
RC_rel_name MMREL2

10.13.12 Restarting a Metro Mirror relationship in the Idling state


When you are restarting a Metro Mirror relationship in the Idling state, you must specify the
copy direction.
If any updates were performed on the master or the auxiliary volume, consistency is
compromised. Therefore, we must issue the command with the -force flag to restart a
relationship, as shown in Example 10-161.
Example 10-161 Restarting a Metro Mirror relationship after updates in the Idling state
IBM_2145:ITSO_SVC_DH8:superuser>startrcrelationship -primary master -force MMREL3
IBM_2145:ITSO_SVC_DH8:superuser>lsrcrelationship MMREL3
id 2
name MMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
master_vdisk_id 2
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name


10.13.13 Restarting a Metro Mirror Consistency Group in the Idling state


When you are restarting a Metro Mirror Consistency Group in the Idling state, the copy
direction must be specified.

If any updates were performed on the master or the auxiliary volume in any of the Metro Mirror relationships in the Consistency Group, the consistency is compromised. Therefore, we must use the -force flag to restart the Consistency Group. If the -force flag is not used, the command fails.

In Example 10-162, we change the copy direction by specifying the auxiliary volumes to
become the primaries.

Example 10-162 Restarting a Metro Mirror Consistency Group while changing the copy direction
IBM_2145:ITSO_SVC_DH8:superuser>startrcconsistgrp -force -primary aux CG_W2K3_MM
IBM_2145:ITSO_SVC_DH8:superuser>lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name MMREL1
RC_rel_id 3
RC_rel_name MMREL2

10.13.14 Changing the copy direction for Metro Mirror


In this section, we show how to change the copy direction of the stand-alone Metro Mirror
relationship and the Consistency Group.

10.13.15 Switching the copy direction for a Metro Mirror relationship


When a Metro Mirror relationship is in the Consistent synchronized state, we can change the
copy direction for the relationship by using the switchrcrelationship command, which
specifies the primary volume. If the specified volume is a primary when you issue this
command, the command has no effect.

In Example 10-163, we change the copy direction for the stand-alone Metro Mirror
relationship by specifying the auxiliary volume to become the primary volume.


Important: When the copy direction is switched, it is crucial that no I/O is outstanding to
the volume that changes from the primary to the secondary because all of the I/O is
inhibited to that volume when it becomes the secondary. Therefore, careful planning is
required before the switchrcrelationship command is used.

Example 10-163 Switching the copy direction for a Metro Mirror relationship
IBM_2145:ITSO_SVC_DH8:superuser>lsrcrelationship MMREL3
id 2
name MMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
master_vdisk_id 2
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
IBM_2145:ITSO_SVC_DH8:superuser>switchrcrelationship -primary aux MMREL3
IBM_2145:ITSO_SVC_DH8:superuser>lsrcrelationship MMREL3
id 2
name MMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
master_vdisk_id 2
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary aux
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id


master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name

10.13.16 Switching the copy direction for a Metro Mirror Consistency Group
When a Metro Mirror Consistency Group is in the Consistent synchronized state, we can
change the copy direction for the Consistency Group by using the switchrcconsistgrp
command and specifying the primary volume.

If the specified volume is already a primary volume when you issue this command, the
command has no effect.

In Example 10-164, we change the copy direction for the Metro Mirror Consistency Group by
specifying the auxiliary volume to become the primary volume.

Important: When the copy direction is switched, it is crucial that no I/O is outstanding to
the volume that changes from primary to secondary because all of the I/O is inhibited when
that volume becomes the secondary. Therefore, careful planning is required before the
switchrcconsistgrp command is used.

Example 10-164 Switching the copy direction for a Metro Mirror Consistency Group
IBM_2145:ITSO_SVC_DH8:superuser>lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary master
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name MMREL1
RC_rel_id 3
RC_rel_name MMREL2
IBM_2145:ITSO_SVC_DH8:superuser>switchrcconsistgrp -primary aux CG_W2K3_MM
IBM_2145:ITSO_SVC_DH8:superuser>lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status
sync


copy_type metro
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name MMREL1
RC_rel_id 3
RC_rel_name MMREL2

10.13.17 Creating a SAN Volume Controller partnership among clustered systems
Starting with SVC 5.1, you can have a clustered system partnership among many SVC
systems. By using this capability, you can create the following configurations that use a
maximum of four connected systems:
򐂰 Star configuration
򐂰 Triangle configuration
򐂰 Fully connected configuration
򐂰 Daisy-chain configuration

In this section, we describe how to configure the SVC system partnership for each
configuration.

Important: To have a supported and working configuration, all SVC systems must be at
level 5.1 or higher.

In our scenarios, we configure the SVC partnership by referring to the clustered systems as
A, B, C, and D, as shown in the following examples:
򐂰 ITSO_SVC_DH8 = A
򐂰 ITSO_SVC1 = B
򐂰 ITSO_SVC3 = C
򐂰 ITSO_SVC4 = D

Example 10-165 shows the available systems for a partnership by using the lspartnershipcandidate command on each system.

Example 10-165 Available clustered systems


IBM_2145:ITSO_SVC_DH8:superuser>lspartnershipcandidate
id configured name
0000020061C06FCA no ITSO_SVC4
0000020060A06FB8 no ITSO_SVC3
000002006AC03A42 no ITSO_SVC_DH8

IBM_2145:ITSO_SVC_DH8:superuser>lspartnershipcandidate
id configured name
0000020061C06FCA no ITSO_SVC4
000002006BE04FC4 no ITSO_SVC_DH8
0000020060A06FB8 no ITSO_SVC3

IBM_2145:ITSO_SVC3:superuser>lspartnershipcandidate
id configured name
000002006BE04FC4 no ITSO_SVC_DH8
0000020061C06FCA no ITSO_SVC4
000002006AC03A42 no ITSO_SVC_DH8


IBM_2145:ITSO_SVC4:superuser>lspartnershipcandidate
id configured name
000002006BE04FC4 no ITSO_SVC_DH8
0000020060A06FB8 no ITSO_SVC3
000002006AC03A42 no ITSO_SVC_DH8

10.13.18 Star configuration partnership


Figure 10-8 shows the star configuration.

Figure 10-8 Star configuration

Example 10-166 shows the sequence of mkpartnership commands that are run to create a
star configuration.

Example 10-166 Creating a star configuration by using the mkpartnership command


From ITSO_SVC_DH8 to multiple systems

IBM_2145:ITSO_SVC_DH8:superuser>mkpartnership -bandwidth 50 ITSO_SVC1


IBM_2145:ITSO_SVC_DH8:superuser>mkpartnership -bandwidth 50 ITSO_SVC3
IBM_2145:ITSO_SVC_DH8:superuser>mkpartnership -bandwidth 50 ITSO_SVC4

From ITSO_SVC_DH8 to ITSO_SVC1

IBM_2145:ITSO_SVC_DH8:superuser>mkpartnership -bandwidth 50 ITSO_SVC1

From ITSO_SVC3 to ITSO_SVC_DH8

IBM_2145:ITSO_SVC3:superuser>mkpartnership -bandwidth 50 ITSO_SVC_DH8

From ITSO_SVC4 to ITSO_SVC_DH8

IBM_2145:ITSO_SVC4:superuser>mkpartnership -bandwidth 50 ITSO_SVC_DH8

From ITSO_SVC_DH8

IBM_2145:ITSO_SVC_DH8:superuser>lspartnership
id name location partnership bandwidth
000002006BE04FC4 ITSO_SVC_DH8 local
000002006AC03A42 ITSO_SVC1 remote fully_configured 50
0000020060A06FB8 ITSO_SVC3 remote fully_configured 50
0000020061C06FCA ITSO_SVC4 remote fully_configured 50


From ITSO_SVC_DH8

IBM_2145:ITSO_SVC_DH8:superuser>lspartnership
id name location partnership bandwidth
000002006AC03A42 ITSO_SVC_DH8 local
000002006BE04FC4 ITSO_SVC1 remote fully_configured 50

From ITSO_SVC3

IBM_2145:ITSO_SVC3:superuser>lspartnership
id name location partnership bandwidth
0000020060A06FB8 ITSO_SVC3 local
000002006BE04FC4 ITSO_SVC_DH8 remote fully_configured 50

From ITSO_SVC4

IBM_2145:ITSO_SVC4:superuser>lspartnership
id name location partnership bandwidth
0000020061C06FCA ITSO_SVC4 local
000002006BE04FC4 ITSO_SVC_DH8 remote fully_configured 50

After the SVC partnership is configured, you can configure any rcrelationship or
rcconsistgrp that you need. Ensure that a single volume is only in one relationship.

Triangle configuration
Figure 10-9 shows the triangle configuration.

Figure 10-9 Triangle configuration

Example 10-167 shows the sequence of mkpartnership commands that are run to create a
triangle configuration.

Example 10-167 Creating a triangle configuration


From ITSO_SVC_DH8 to ITSO_SVC1 and ITSO_SVC3

IBM_2145:ITSO_SVC_DH8:superuser>mkpartnership -bandwidth 50 ITSO_SVC1


IBM_2145:ITSO_SVC_DH8:superuser>mkpartnership -bandwidth 50 ITSO_SVC3
IBM_2145:ITSO_SVC_DH8:superuser>lspartnership
id name location partnership bandwidth
000002006BE04FC4 ITSO_SVC_DH8 local
000002006AC03A42 ITSO_SVC1 remote partially_configured_local 50
0000020060A06FB8 ITSO_SVC3 remote partially_configured_local 50

From ITSO_SVC_DH8 to ITSO_SVC_DH8 and ITSO_SVC3

IBM_2145:ITSO_SVC_DH8:superuser>mkpartnership -bandwidth 50 ITSO_SVC1


IBM_2145:ITSO_SVC_DH8:superuser>mkpartnership -bandwidth 50 ITSO_SVC3


IBM_2145:ITSO_SVC_DH8:superuser>lspartnership
id name location partnership bandwidth
000002006AC03A42 ITSO_SVC_DH8 local
000002006BE04FC4 ITSO_SVC1 remote fully_configured 50
0000020060A06FB8 ITSO_SVC3 remote partially_configured_local 50

From ITSO_SVC3 to ITSO_SVC_DH8 and ITSO_SVC1

IBM_2145:ITSO_SVC3:superuser>mkpartnership -bandwidth 50 ITSO_SVC_DH8


IBM_2145:ITSO_SVC3:superuser>mkpartnership -bandwidth 50 ITSO_SVC_DH8
IBM_2145:ITSO_SVC3:superuser>lspartnership
id name location partnership bandwidth
0000020060A06FB8 ITSO_SVC3 local
000002006BE04FC4 ITSO_SVC1 remote fully_configured 50
000002006AC03A42 ITSO_SVC_DH8 remote fully_configured 50

After the SVC partnership is configured, you can configure any rcrelationship or
rcconsistgrp that you need. Ensure that a single volume is only in one relationship.

Fully connected configuration


Figure 10-10 shows the fully connected configuration.

Figure 10-10 Fully connected configuration

Example 10-168 on page 678 shows the sequence of mkpartnership commands that are run
to create a fully connected configuration.

Example 10-168 Creating a fully connected configuration


From ITSO_SVC_DH8 to ITSO_SVC1, ITSO_SVC3 and ITSO_SVC4

IBM_2145:ITSO_SVC_DH8:superuser>mkpartnership -bandwidth 50 ITSO_SVC1


IBM_2145:ITSO_SVC_DH8:superuser>mkpartnership -bandwidth 50 ITSO_SVC3
IBM_2145:ITSO_SVC_DH8:superuser>mkpartnership -bandwidth 50 ITSO_SVC4
IBM_2145:ITSO_SVC_DH8:superuser>lspartnership
id name location partnership bandwidth
000002006BE04FC4 ITSO_SVC_DH8 local
000002006AC03A42 ITSO_SVC1 remote partially_configured_local 50
0000020060A06FB8 ITSO_SVC3 remote partially_configured_local 50
0000020061C06FCA ITSO_SVC4 remote partially_configured_local 50

From ITSO_SVC_DH8 to ITSO_SVC1, ITSO_SVC3 and ITSO_SVC4

IBM_2145:ITSO_SVC_DH8:superuser>mkpartnership -bandwidth 50 ITSO_SVC1


IBM_2145:ITSO_SVC_DH8:superuser>mkpartnership -bandwidth 50 ITSO_SVC3
IBM_2145:ITSO_SVC_DH8:superuser>mkpartnership -bandwidth 50 ITSO_SVC4


IBM_2145:ITSO_SVC_DH8:superuser>lspartnership
id name location partnership bandwidth
000002006AC03A42 ITSO_SVC_DH8 local
000002006BE04FC4 ITSO_SVC1 remote fully_configured 50
0000020060A06FB8 ITSO_SVC3 remote partially_configured_local 50
0000020061C06FCA ITSO_SVC4 remote partially_configured_local 50

From ITSO_SVC3 to ITSO_SVC1, ITSO_SVC3 and ITSO-SVC4

IBM_2145:ITSO_SVC3:superuser>mkpartnership -bandwidth 50 ITSO_SVC1


IBM_2145:ITSO_SVC3:superuser>mkpartnership -bandwidth 50 ITSO_SVC3
IBM_2145:ITSO_SVC3:superuser>mkpartnership -bandwidth 50 ITSO_SVC4
IBM_2145:ITSO_SVC3:superuser>lspartnership
id name location partnership bandwidth
0000020060A06FB8 ITSO_SVC3 local
000002006BE04FC4 ITSO_SVC1 remote fully_configured 50
000002006AC03A42 ITSO_SVC3 remote fully_configured 50
0000020061C06FCA ITSO_SVC4 remote partially_configured_local 50

From ITSO_SVC4 to ITSO_SVC1, ITSO_SVC_DH8 and ITSO_SVC3

IBM_2145:ITSO_SVC4:superuser>mkpartnership -bandwidth 50 ITSO_SVC_DH8


IBM_2145:ITSO_SVC4:superuser>mkpartnership -bandwidth 50 ITSO_SVC3
IBM_2145:ITSO_SVC4:superuser>mkpartnership -bandwidth 50 ITSO_SVC1
IBM_2145:ITSO_SVC4:superuser>lspartnership
id name location partnership bandwidth
0000020061C06FCA ITSO_SVC4 local
000002006BE04FC4 ITSO_SVC1 remote fully_configured 50
000002006AC03A42 ITSO_SVC_DH8 remote fully_configured 50
0000020060A06FB8 ITSO_SVC3 remote fully_configured 50

After the SVC partnership is configured, you can configure any rcrelationship or
rcconsistgrp that you need. Ensure that a single volume is only in one relationship.

Daisy-chain configuration
Figure 10-11 shows the daisy-chain configuration.

Figure 10-11 Daisy-chain configuration

Example 10-169 shows the sequence of mkpartnership commands that are run to create a
daisy-chain configuration.

Example 10-169 Creating a daisy-chain configuration


From ITSO_SVC_DH8 to ITSO_SVC1

IBM_2145:ITSO_SVC_DH8:superuser>mkpartnership -bandwidth 50 ITSO_SVC1


IBM_2145:ITSO_SVC_DH8:superuser>lspartnership
id name location partnership bandwidth
000002006BE04FC4 ITSO_SVC_DH8 local
000002006AC03A42 ITSO_SVC1 remote partially_configured_local 50

From ITSO_SVC_DH8 to ITSO_SVC1 and ITSO_SVC3

IBM_2145:ITSO_SVC_DH8:superuser>mkpartnership -bandwidth 50 ITSO_SVC1


IBM_2145:ITSO_SVC_DH8:superuser>mkpartnership -bandwidth 50 ITSO_SVC3


IBM_2145:ITSO_SVC_DH8:superuser>lspartnership
id name location partnership bandwidth
000002006AC03A42 ITSO_SVC_DH8 local
000002006BE04FC4 ITSO_SVC1 remote fully_configured 50
0000020060A06FB8 ITSO_SVC3 remote partially_configured_local 50

From ITSO_SVC3 to ITSO_SVC_DH8 and ITSO_SVC4

IBM_2145:ITSO_SVC3:superuser>mkpartnership -bandwidth 50 ITSO_SVC_DH8


IBM_2145:ITSO_SVC3:superuser>mkpartnership -bandwidth 50 ITSO_SVC4
IBM_2145:ITSO_SVC3:superuser>lspartnership
id name location partnership bandwidth
0000020060A06FB8 ITSO_SVC3 local
000002006AC03A42 ITSO_SVC_DH8 remote fully_configured 50
0000020061C06FCA ITSO_SVC4 remote partially_configured_local 50

From ITSO_SVC4 to ITSO_SVC3

IBM_2145:ITSO_SVC4:superuser>mkpartnership -bandwidth 50 ITSO_SVC3


IBM_2145:ITSO_SVC4:superuser>lspartnership
id name location partnership bandwidth
0000020061C06FCA ITSO_SVC4 local
0000020060A06FB8 ITSO_SVC3 remote fully_configured 50

After the SVC partnership is configured, you can configure any rcrelationship or
rcconsistgrp that you need. Ensure that a single volume is only in one relationship.

10.14 Global Mirror operation


In the following scenario, we set up an intercluster Global Mirror relationship between the
SVC system ITSO_SVC_DH8 at the primary site and the SVC system ITSO_SVC4 at the
secondary site.

Intercluster example: This example is for an intercluster Global Mirror operation only. If
you want to set up an intracluster operation, we highlight the steps in the following
procedure that you do not need to perform.

Table 10-5 shows the details of the volumes.

Table 10-5 Details of the volumes for the Global Mirror relationship scenario
Content of volume Volumes at primary site Volumes at secondary site

Database files GM_DB_Pri GM_DB_Sec

Database log files GM_DBLog_Pri GM_DBLog_Sec

Application files GM_App_Pri GM_App_Sec


Because data consistency is needed across GM_DB_Pri and GM_DBLog_Pri, we create a


Consistency Group to handle Global Mirror relationships for them. Because the application
files are independent of the database in this scenario, we create a stand-alone Global Mirror
relationship for GM_App_Pri. Figure 10-12 shows the Global Mirror relationship setup.

Figure 10-12 Global Mirror scenario

10.14.1 Setting up Global Mirror


In this section, we assume that the source and target volumes were created and that the ISLs
and zoning are in place to enable the SVC systems to communicate.

Complete the following steps to set up the Global Mirror:


1. Create an SVC partnership between ITSO_SVC_DH8 and ITSO_SVC4 on both SVC clustered
systems with a bandwidth of 100 MBps.
2. Create a Global Mirror Consistency Group with the name CG_W2K3_GM.
3. Create the Global Mirror relationship for GM_DB_Pri that uses the following settings:
– Master: GM_DB_Pri
– Auxiliary: GM_DB_Sec
– Auxiliary SVC system: ITSO_SVC4
– Name: GMREL1
– Consistency Group: CG_W2K3_GM
4. Create the Global Mirror relationship for GM_DBLog_Pri that uses the following settings:
– Master: GM_DBLog_Pri
– Auxiliary: GM_DBLog_Sec
– Auxiliary SVC system: ITSO_SVC4
– Name: GMREL2
– Consistency Group: CG_W2K3_GM


5. Create the Global Mirror relationship for GM_App_Pri that uses the following settings:
– Master: GM_App_Pri
– Auxiliary: GM_App_Sec
– Auxiliary SVC system: ITSO_SVC4
– Name: GMREL3

In the following sections, we perform each step by using the CLI.

10.14.2 Creating a SAN Volume Controller partnership between ITSO_SVC_DH8 and ITSO_SVC4

We create an SVC partnership between these clustered systems.

Intracluster Global Mirror: If you are creating an intracluster Global Mirror, do not perform
the next step. Instead, go to 10.14.3, “Changing link tolerance and system delay
simulation” on page 683.

Pre-verification
To verify that both clustered systems can communicate with each other, use the
lspartnershipcandidate command. Example 10-170 confirms that our clustered systems
can communicate because ITSO_SVC4 is an eligible SVC system candidate to ITSO_SVC_DH8
for the SVC system partnership, and vice versa. Therefore, both systems can communicate
with each other.

Example 10-170 Listing the available SVC systems for partnership


IBM_2145:ITSO_SVC_DH8:superuser>lspartnershipcandidate
id configured name
0000020061C06FCA no ITSO_SVC4
IBM_2145:ITSO_SVC4:superuser>lspartnershipcandidate
id configured name
000002006BE04FC4 no ITSO_SVC_DH8

In Example 10-171, we show the output of the lspartnership command before we set up the
SVC systems’ partnership for Global Mirror. We show this output for comparison after we set
up the SVC partnership.

Example 10-171 Pre-verification of the system configuration


IBM_2145:ITSO_SVC_DH8:superuser>lspartnership
id name location partnership bandwidth
000002006BE04FC4 ITSO_SVC_DH8 local

IBM_2145:ITSO_SVC4:superuser>lspartnership
id name location partnership bandwidth
0000020061C06FCA ITSO_SVC4 local

Partnership between systems


In Example 10-172, we create the partnership from ITSO_SVC_DH8 to ITSO_SVC4 and specify a
100 MBps bandwidth to use for the background copy. To verify the status of the new
partnership, we issue the lspartnership command. The new partnership is only partially
configured. It remains partially configured until we run the mkpartnership command on the
other clustered system.


Example 10-172 Creating the partnership from ITSO_SVC_DH8 to ITSO_SVC4 and verifying it
IBM_2145:ITSO_SVC_DH8:superuser>mkpartnership -bandwidth 100 ITSO_SVC4
IBM_2145:ITSO_SVC_DH8:superuser>lspartnership
id name location partnership bandwidth
000002006BE04FC4 ITSO_SVC_DH8 local
0000020061C06FCA ITSO_SVC4 remote partially_configured_local 100

In Example 10-173, we create the partnership from ITSO_SVC4 back to ITSO_SVC_DH8 and
specify a 100 MBps bandwidth to use for the background copy. After the partnership is
created, verify that the partnership is fully configured by reissuing the lspartnership
command.

Example 10-173 Creating the partnership from ITSO_SVC4 to ITSO_SVC_DH8 and verifying it
IBM_2145:ITSO_SVC4:superuser>mkpartnership -bandwidth 100 ITSO_SVC_DH8

IBM_2145:ITSO_SVC4:superuser>lspartnership
id name location partnership bandwidth
0000020061C06FCA ITSO_SVC4 local
000002006BE04FC4 ITSO_SVC_DH8 remote fully_configured 100

IBM_2145:ITSO_SVC_DH8:superuser>lspartnership
id name location partnership bandwidth
000002006BE04FC4 ITSO_SVC_DH8 local
0000020061C06FCA ITSO_SVC4 remote fully_configured 100

10.14.3 Changing link tolerance and system delay simulation


The -gmlinktolerance parameter defines the sensitivity of the SVC to overload conditions on the intercluster link. The value is the number of seconds of continuous link difficulties that is tolerated before the SVC stops the remote copy relationships to prevent affecting host I/O at the primary site. To change the value, use the following command:
chsystem -gmlinktolerance link_tolerance

The link_tolerance value is 20 - 86400 seconds in increments of 10 seconds. The default


value for the link tolerance is 300 seconds. A value of 0 disables link tolerance.

Important: We strongly suggest that you use the default value. If the link is overloaded for
a period (which affects host I/O at the primary site), the relationships are stopped to protect
those hosts.

Intercluster and intracluster delay simulation


This Global Mirror feature permits a simulation of a delayed write to a remote volume. This
feature allows testing to be performed that detects colliding writes. You can use this feature to
test an application before the full deployment of the Global Mirror feature. The delay
simulation can be enabled separately for each intracluster or intercluster Global Mirror. To
enable this feature, run the following commands:
򐂰 For intercluster simulation, run this command:
chsystem -gminterdelaysimulation <inter_cluster_delay_simulation>
򐂰 For intracluster simulation, run this command:
chsystem -gmintradelaysimulation <intra_cluster_delay_simulation>


The inter_cluster_delay_simulation and intra_cluster_delay_simulation values express
the amount of time (in milliseconds) that secondary I/Os are delayed for intercluster and
intracluster relationships. You can set a value of 0 - 100 milliseconds in 1-millisecond
increments for the inter_cluster_delay_simulation value or the
intra_cluster_delay_simulation value in the previous commands. A value of 0 disables the
feature.

To check the current settings for the delay simulation, use the following command:
lssystem

In Example 10-174, we show the modification of the delay simulation value and a change of
the Global Mirror link tolerance parameters. We also show the changed values of the Global
Mirror link tolerance and delay simulation parameters.

Example 10-174 Delay simulation and link tolerance modifications


IBM_2145:ITSO_SVC_DH8:superuser>chsystem -gminterdelaysimulation 20
IBM_2145:ITSO_SVC_DH8:superuser>chsystem -gmintradelaysimulation 40
IBM_2145:ITSO_SVC_DH8:superuser>chsystem -gmlinktolerance 200
IBM_2145:ITSO_SVC_DH8:superuser>lssystem
id 000002006BE04FC4
name ITSO_SVC_DH8
location local
partnership
bandwidth
total_mdisk_capacity 866.5GB
space_in_mdisk_grps 766.5GB
space_allocated_to_vdisks 30.00GB
total_free_space 836.5GB
total_vdiskcopy_capacity 30.00GB
total_used_capacity 30.00GB
total_overallocation 3
total_vdisk_capacity 30.00GB
total_allocated_extent_capacity 31.50GB
statistics_status on
statistics_frequency 15
cluster_locale en_US
time_zone 520 US/Pacific
code_level 6.3.0.0 (build 54.0.1109090000)
console_IP 10.18.228.81:443
id_alias 000002006BE04FC4
gm_link_tolerance 200
gm_inter_cluster_delay_simulation 20
gm_intra_cluster_delay_simulation 40
gm_max_host_delay 5
email_reply
email_contact
email_contact_primary
email_contact_alternate
email_contact_location
email_contact2
email_contact2_primary
email_contact2_alternate
email_state stopped
inventory_mail_interval 0
cluster_ntp_IP_address
cluster_isns_IP_address
iscsi_auth_method chap
iscsi_chap_secret passw0rd
auth_service_configured no


auth_service_enabled no
auth_service_url
auth_service_user_name
auth_service_pwd_set no
auth_service_cert_set no
auth_service_type tip
relationship_bandwidth_limit 25
tier generic_ssd
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier generic_hdd
tier_capacity 766.50GB
tier_free_capacity 736.50GB
has_nas_key no
layer appliance

10.14.4 Creating a Global Mirror Consistency Group


In Example 10-175, we create the Global Mirror Consistency Group by using the
mkrcconsistgrp command. We use this Consistency Group for the Global Mirror relationships
for the database volumes. The Consistency Group is named CG_W2K3_GM.

Example 10-175 Creating the Global Mirror Consistency Group CG_W2K3_GM


IBM_2145:ITSO_SVC_DH8:superuser>mkrcconsistgrp -cluster ITSO_SVC4 -name CG_W2K3_GM
RC Consistency Group, id [0], successfully created
IBM_2145:ITSO_SVC_DH8:superuser>lsrcconsistgrp
id name master_cluster_id master_cluster_name aux_cluster_id aux_cluster_name
primary state relationship_count copy_type cycling_mode
0 CG_W2K3_GM 000002006BE04FC4 ITSO_SVC_DH8 0000020061C06FCA ITSO_SVC4
empty 0 empty_group none

10.14.5 Creating Global Mirror relationships


In Example 10-176, we create the GMREL1 and GMREL2 Global Mirror relationships for the
GM_DB_Pri and GM_DBLog_Pri volumes. We also make them members of the CG_W2K3_GM
Global Mirror Consistency Group.

We use the lsvdisk command to list all of the volumes in the ITSO_SVC_DH8 system. Then, we
use the lsrcrelationshipcandidate command to show the possible candidate volumes for
GM_DB_Pri in ITSO_SVC4.

After checking all of these conditions, we use the mkrcrelationship command to create the
Global Mirror relationship. To verify the new Global Mirror relationships, we list them by using
the lsrcrelationship command.

Example 10-176 Creating GMREL1 and GMREL2 Global Mirror relationships


IBM_2145:ITSO_SVC_DH8:superuser>lsvdisk -filtervalue name=GM*
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name
RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state se_copy_count
RC_change
0 GM_DB_Pri 0 io_grp0 online 0 Pool_DS3500-1 10.00GB striped
6005076801AF813F1000000000000031 0 1 empty 0 0
no


1 GM_DBLog_Pri 0 io_grp0 online 0 Pool_DS3500-1 10.00GB striped


6005076801AF813F1000000000000032 0 1 empty 0 0
no
2 GM_App_Pri 0 io_grp0 online 0 Pool_DS3500-1 10.00GB striped
6005076801AF813F1000000000000033 0 1 empty 0 0
no

IBM_2145:ITSO_SVC_DH8:superuser>lsrcrelationshipcandidate -aux ITSO_SVC4 -master GM_DB_Pri


id vdisk_name
0 GM_DB_Sec
1 GM_DBLog_Sec
2 GM_App_Sec

IBM_2145:ITSO_SVC_DH8:superuser>mkrcrelationship -master GM_DB_Pri -aux GM_DB_Sec -cluster ITSO_SVC4 -consistgrp CG_W2K3_GM -name GMREL1 -global
RC Relationship, id [0], successfully created

IBM_2145:ITSO_SVC_DH8:superuser>mkrcrelationship -master GM_DBLog_Pri -aux GM_DBLog_Sec -cluster ITSO_SVC4 -consistgrp CG_W2K3_GM -name GMREL2 -global
RC Relationship, id [1], successfully created

IBM_2145:ITSO_SVC_DH8:superuser>lsrcrelationship
id name master_cluster_id master_cluster_name master_vdisk_id master_vdisk_name aux_cluster_id
aux_cluster_name aux_vdisk_id aux_vdisk_name primary consistency_group_id consistency_group_name state
bg_copy_priority progress copy_type cycling_mode
0 GMREL1 000002006BE04FC4 ITSO_SVC_DH8 0 GM_DB_Pri 0000020061C06FCA
ITSO_SVC4 0 GM_DB_Sec master 0 CG_W2K3_GM
inconsistent_stopped 50 0 global none
1 GMREL2 000002006BE04FC4 ITSO_SVC_DH8 1 GM_DBLog_Pri 0000020061C06FCA
ITSO_SVC4 1 GM_DBLog_Sec master 0 CG_W2K3_GM
inconsistent_stopped 50 0 global none

10.14.6 Creating the stand-alone Global Mirror relationship for GM_App_Pri


In Example 10-177, we create the stand-alone Global Mirror relationship GMREL3 for
GM_App_Pri. After the stand-alone Global Mirror relationship GMREL3 is created, we check the
status of each of our Global Mirror relationships.

The status of GMREL3 is consistent_stopped because it was created with the -sync option. The
-sync option indicates that the secondary (auxiliary) volume is synchronized with the primary
(master) volume. The initial background synchronization is skipped when this option is used.

GMREL1 and GMREL2 are in the inconsistent_stopped state because they were not created with
the -sync option. Therefore, their auxiliary volumes must be synchronized with their primary
volumes.

Example 10-177 Creating a stand-alone Global Mirror relationship and verifying it


IBM_2145:ITSO_SVC_DH8:superuser>mkrcrelationship -master GM_App_Pri -aux GM_App_Sec -cluster ITSO_SVC4
-sync -name GMREL3 -global
RC Relationship, id [2], successfully created

IBM_2145:ITSO_SVC_DH8:superuser>lsrcrelationship -delim :


id:name:master_cluster_id:master_cluster_name:master_vdisk_id:master_vdisk_name:aux_cluster_id:aux_cluster_
name:aux_vdisk_id:aux_vdisk_name:primary:consistency_group_id:consistency_group_name:state:bg_copy_priority
:progress:copy_type:cycling_mode
0:GMREL1:000002006BE04FC4:ITSO_SVC_DH8:0:GM_DB_Pri:0000020061C06FCA:ITSO_SVC4:0:GM_DB_Sec:master:0:CG_W2K3_
GM:inconsistent_copying:50:73:global:none
1:GMREL2:000002006BE04FC4:ITSO_SVC_DH8:1:GM_DBLog_Pri:0000020061C06FCA:ITSO_SVC4:1:GM_DBLog_Sec:master:0:CG
_W2K3_GM:inconsistent_copying:50:75:global:none
2:GMREL3:000002006BE04FC4:ITSO_SVC_DH8:2:GM_App_Pri:0000020061C06FCA:ITSO_SVC4:2:GM_App_Sec:master:::consis
tent_stopped:50:100:global:none

10.14.7 Starting Global Mirror


Now that the Global Mirror Consistency Group and relationships are created, we are ready to
use the Global Mirror relationships in our environment.

When Global Mirror is implemented, the goal is to reach a consistent and synchronized state
that can provide redundancy if a hardware failure occurs that affects the SAN at the
production site.

In this section, we show how to start the stand-alone Global Mirror relationships and the
Consistency Group.

10.14.8 Starting a stand-alone Global Mirror relationship


In Example 10-178, we start the stand-alone Global Mirror relationship that is named GMREL3.
Because the Global Mirror relationship was in the Consistent stopped state and no updates
were made to the primary volume, the relationship quickly enters the Consistent synchronized
state.

Example 10-178 Starting the stand-alone Global Mirror relationship


IBM_2145:ITSO_SVC_DH8:superuser>startrcrelationship GMREL3
IBM_2145:ITSO_SVC_DH8:superuser>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name


aux_change_vdisk_id
aux_change_vdisk_name

10.14.9 Starting a Global Mirror Consistency Group


In Example 10-179, we start the CG_W2K3_GM Global Mirror Consistency Group. Because the
Consistency Group was in the Inconsistent stopped state, it enters the Inconsistent copying
state until the background copy completes for all of the relationships that are in the
Consistency Group.

Upon the completion of the background copy, the CG_W2K3_GM Global Mirror Consistency
Group enters the Consistent synchronized state.

Example 10-179 Starting the Global Mirror Consistency Group


IBM_2145:ITSO_SVC_DH8:superuser>startrcconsistgrp CG_W2K3_GM
IBM_2145:ITSO_SVC_DH8:superuser>lsrcconsistgrp 0
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary master
state inconsistent_copying
relationship_count 2
freeze_time
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2

10.14.10 Monitoring the background copy progress


To monitor the background copy progress, use the lsrcrelationship command. This
command shows us all of the defined Global Mirror relationships if it is used without any
parameters. In the command output, progress indicates the current background copy
progress. Example 10-180 shows our Global Mirror relationships.

Using SNMP traps: Setting up SNMP traps for the SVC enables automatic notification
when Global Mirror Consistency Groups or relationships change state.
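
One way to enable such notification is to define an SNMP server on the system, for example (a
hedged sketch; the manager address 10.18.228.100 and the community string public are
placeholders rather than values that were used in this environment):
mksnmpserver -ip 10.18.228.100 -community public -error on -warning on -info on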

Example 10-180 Example of monitoring the background copy progress


IBM_2145:ITSO_SVC_DH8:superuser>lsrcrelationship GMREL1
id 0
name GMREL1
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
master_vdisk_id 0
master_vdisk_name GM_DB_Pri


aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 0
aux_vdisk_name GM_DB_Sec
primary master
consistency_group_id 0
consistency_group_name CG_W2K3_GM
state inconsistent_copying
bg_copy_priority 50
progress 38
freeze_time
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name

IBM_2145:ITSO_SVC_DH8:superuser>lsrcrelationship GMREL2
id 1
name GMREL2
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
master_vdisk_id 1
master_vdisk_name GM_DBLog_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 1
aux_vdisk_name GM_DBLog_Sec
primary master
consistency_group_id 0
consistency_group_name CG_W2K3_GM
state inconsistent_copying
bg_copy_priority 50
progress 76
freeze_time
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name

When all of the Global Mirror relationships complete the background copy, the Consistency
Group enters the Consistent synchronized state, as shown in Example 10-181.

Example 10-181 Listing the Global Mirror Consistency Group


IBM_2145:ITSO_SVC_DH8:superuser>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
aux_cluster_id 0000020061C06FCA


aux_cluster_name ITSO_SVC4
primary master
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2

10.14.11 Stopping and restarting Global Mirror


Now that the Global Mirror Consistency Group and relationships are running, we describe
how to stop, restart, and change the direction of the stand-alone Global Mirror relationships
and the Consistency Group.

First, we show how to stop and restart the stand-alone Global Mirror relationships and the
Consistency Group.

10.14.12 Stopping a stand-alone Global Mirror relationship


In Example 10-182, we stop the stand-alone Global Mirror relationship while we enable
access (write I/O) to the primary and the secondary volume. As a result, the relationship
enters the Idling state.

Example 10-182 Stopping the stand-alone Global Mirror relationship


IBM_2145:ITSO_SVC_DH8:superuser>stoprcrelationship -access GMREL3
IBM_2145:ITSO_SVC_DH8:superuser>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary
consistency_group_id
consistency_group_name
state idling
bg_copy_priority 50
progress
freeze_time
status
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name


aux_change_vdisk_id
aux_change_vdisk_name

10.14.13 Stopping a Global Mirror Consistency Group


In Example 10-183, we stop the Global Mirror Consistency Group without specifying the
-access parameter. Therefore, the Consistency Group enters the Consistent stopped state.

Example 10-183 Stopping a Global Mirror Consistency Group without specifying the -access
parameter
IBM_2145:ITSO_SVC_DH8:superuser>stoprcconsistgrp CG_W2K3_GM
IBM_2145:ITSO_SVC_DH8:superuser>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary master
state consistent_stopped
relationship_count 2
freeze_time
status
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2

If we want to enable access (write I/O) for the secondary volume later, we can reissue the
stoprcconsistgrp command and specify the -access parameter. The Consistency Group
changes to the Idling state, as shown in Example 10-184.

Example 10-184 Stopping a Global Mirror Consistency Group


IBM_2145:ITSO_SVC_DH8:superuser>stoprcconsistgrp -access CG_W2K3_GM
IBM_2145:ITSO_SVC_DH8:superuser>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary
state idling
relationship_count 2
freeze_time
status
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1


RC_rel_name GMREL2

10.14.14 Restarting a Global Mirror relationship in the Idling state


When a Global Mirror relationship is restarted in the Idling state, we must specify the copy
direction.

If any updates were performed on the master volume or the auxiliary volume, consistency is
compromised. Therefore, we must issue the -force parameter to restart the relationship, as
shown in Example 10-185. If the -force parameter is not used, the command fails.

Example 10-185 Restarting a Global Mirror relationship after updates in the Idling state
IBM_2145:ITSO_SVC_DH8:superuser>startrcrelationship -primary master -force GMREL3
IBM_2145:ITSO_SVC_DH8:superuser>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name

10.14.15 Restarting a Global Mirror Consistency Group in the Idling state


When a Global Mirror Consistency Group is restarted in the Idling state, we must specify the
copy direction.

If any updates were performed on the master volume or the auxiliary volume in any of the
Global Mirror relationships in the Consistency Group, consistency is compromised. Therefore,
we must issue the -force parameter to start the relationship. If the -force parameter is not
used, the command fails.

In Example 10-186, we restart the Consistency Group and change the copy direction by
specifying the auxiliary volumes to become the primaries.


Example 10-186 Restarting a Global Mirror relationship while changing the copy direction
IBM_2145:ITSO_SVC_DH8:superuser>startrcconsistgrp -primary aux CG_W2K3_GM
IBM_2145:ITSO_SVC_DH8:superuser>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2

10.14.16 Changing the direction for Global Mirror


In this section, we show how to change the copy direction of the stand-alone Global Mirror
relationships and the Consistency Group.

10.14.17 Switching the copy direction for a Global Mirror relationship


When a Global Mirror relationship is in the Consistent synchronized state, we can change the
copy direction for the relationship by using the switchrcrelationship command and
specifying the primary volume.

If the volume that is specified as the primary already is a primary when this command is run,
the command has no effect.

In Example 10-187, we change the copy direction for the stand-alone Global Mirror
relationship and specify the auxiliary volume to become the primary.

Important: When the copy direction is switched, it is crucial that no I/O is outstanding to
the volume that changes from primary to secondary because all I/O is inhibited to that
volume when it becomes the secondary. Therefore, careful planning is required before the
switchrcrelationship command is used.

Example 10-187 Switching the copy direction for a Global Mirror relationship
IBM_2145:ITSO_SVC_DH8:superuser>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4


aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name

IBM_2145:ITSO_SVC_DH8:superuser>switchrcrelationship -primary aux GMREL3


IBM_2145:ITSO_SVC_DH8:superuser>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary aux
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name

10.14.18 Switching the copy direction for a Global Mirror Consistency Group
When a Global Mirror Consistency Group is in the Consistent synchronized state, we can
change the copy direction for the relationship by using the switchrcconsistgrp command
and specifying the primary volume. If the volume that is specified as the primary already is a
primary when this command is run, the command has no effect.

In Example 10-188, we change the copy direction for the Global Mirror Consistency Group
and specify the auxiliary to become the primary.


Important: When the copy direction is switched, it is crucial that no I/O is outstanding to
the volume that changes from primary to secondary because all I/O is inhibited when that
volume becomes the secondary. Therefore, careful planning is required before the
switchrcconsistgrp command is used.

Example 10-188 Switching the copy direction for a Global Mirror Consistency Group
IBM_2145:ITSO_SVC_DH8:superuser>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary master
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2
IBM_2145:ITSO_SVC_DH8:superuser>switchrcconsistgrp -primary aux CG_W2K3_GM
IBM_2145:ITSO_SVC_DH8:superuser>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2

10.14.19 Changing a Global Mirror relationship to the cycling mode


Starting with SVC 6.3, Global Mirror can operate with or without cycling. When operating
without cycling, write operations are applied to the secondary volume as soon as possible
after they are applied to the primary volume. The secondary volume often is less than
1 second behind the primary volume, which minimizes the amount of data that must be
recovered in a failover. However, this capability requires that a high-bandwidth link is
provisioned between the two sites.


When Global Mirror operates in cycling mode, changes are tracked and, where needed,
copied to intermediate Change Volumes. Changes are transmitted to the secondary site
periodically. The secondary volumes are much further behind the primary volume, and more
data must be recovered if a failover occurs. However, lower bandwidth is required to provide
an effective solution because the data transfer can be smoothed over a longer period.

A Global Mirror relationship consists of two volumes: primary and secondary. With SVC 6.3,
each of these volumes can be associated to a Change Volume. Change Volumes are used to
record changes to the remote copy volume. A FlashCopy relationship exists between the
remote copy volume and the Change Volume. This relationship cannot be manipulated as a
normal FlashCopy relationship. Most commands fail by design because this relationship is an
internal relationship.

Cycling mode transmits a series of FlashCopy images from the primary to the secondary, and
it is enabled by using svctask chrcrelationship -cyclingmode multi.

The primary Change Volume stores changes to be sent to the secondary volume and the
secondary Change Volume is used to maintain a consistent image at the secondary volume.
Every x seconds, the primary FlashCopy mapping is started automatically, where x is the
cycling period and is configurable. Data is then copied to the secondary volume from the
primary Change Volume. The secondary FlashCopy mapping is started if resynchronization is
needed. Therefore, a consistent copy is always at the secondary volume. The cycling period
is configurable and the default value is 300 seconds.

The recovery point objective (RPO) depends on how long the FlashCopy takes to complete. If
the FlashCopy completes within the cycling time, the maximum RPO = 2 x the cycling time;
otherwise, the RPO = 2 x the copy completion time.
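
As a worked illustration of this rule: with the default 300-second cycling period, a FlashCopy
that completes within the period gives a maximum RPO of 2 x 300 = 600 seconds, whereas a copy
that takes 450 seconds to complete gives a maximum RPO of 2 x 450 = 900 seconds. Where a longer
period is acceptable, the cycle period can be tuned per relationship, for example (a hedged
sketch; verify that the -cycleperiodseconds parameter is available at your code level):
chrcrelationship -cycleperiodseconds 600 GMREL3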

You can estimate the current RPO by using the new freeze_time rcrelationship property. It is
the time of the last consistent image that is present at the secondary. Figure 10-13 shows the
cycling mode with Change Volumes.

Figure 10-13 Global Mirror with Change Volumes

Change Volume requirements


Adhere to the following rules for the Change Volume:
• The Change Volume can be a thin-provisioned volume.
• It must be the same size as the primary and secondary volumes.
• The Change Volume must be in the same I/O Group as the primary and secondary
volumes.
• It cannot be used for the user’s remote copy or FlashCopy mappings.


• You must have a Change Volume for both the primary and secondary volumes.
• You cannot manipulate it like a normal FlashCopy mapping.

In this section, we show how to change the cycling mode of the stand-alone Global Mirror
relationship (GMREL3) and the Consistency Group CG_W2K3_GM Global Mirror relationships
(GMREL1 and GMREL2).

We assume that the source and target volumes were created and that the ISLs and zoning
are in place to enable the SVC systems to communicate. We also assume that the Global
Mirror relationship was established.

Complete the following steps to change the Global Mirror to cycling mode with Change
Volumes:
1. Create thin-provisioned Change Volumes for the primary and secondary volumes at both
sites.
2. Stop the stand-alone relationship GMREL3 to change the cycling mode at the primary site.
3. Set the cycling mode on the stand-alone relationship GMREL3 at the primary site.
4. Set the Change Volume on the master volume relationship GMREL3 at the primary site.
5. Set the Change Volume on the auxiliary volume relationship GMREL3 at the secondary site.
6. Start the stand-alone relationship GMREL3 in cycling mode at the primary site.
7. Stop the Consistency Group CG_W2K3_GM to change the cycling mode at the primary site.
8. Set the cycling mode on the Consistency Group at the primary site.
9. Set the Change Volume on the master volume relationship GMREL1 of the Consistency
Group CG_W2K3_GM at the primary site.
10.Set the Change Volume on the auxiliary volume relationship GMREL1 at the secondary site.
11.Set the Change Volume on the master volume relationship GMREL2 of the Consistency
Group CG_W2K3_GM at the primary site.
12.Set the Change Volume on the auxiliary volume relationship GMREL2 at the secondary site.
13.Start the Consistency Group CG_W2K3_GM in the cycling mode at the primary site.

10.14.20 Creating the thin-provisioned Change Volumes


We start the setup process by creating thin-provisioned Change Volumes for the primary and
secondary volumes at both sites, as shown in Example 10-189.

Example 10-189 Creating the thin-provisioned volumes for Global Mirror cycling mode
IBM_2145:ITSO_SVC_DH8:superuser>mkvdisk -iogrp 0 -mdiskgrp 0 -size 10 -unit gb
-rsize 20% -autoexpand -grainsize 32 -name GM_DB_Pri_CHANGE_VOL
Virtual Disk, id [3], successfully created
IBM_2145:ITSO_SVC_DH8:superuser>mkvdisk -iogrp 0 -mdiskgrp 0 -size 10 -unit gb
-rsize 20% -autoexpand -grainsize 32 -name GM_DBLog_Pri_CHANGE_VOL
Virtual Disk, id [4], successfully created
IBM_2145:ITSO_SVC_DH8:superuser>mkvdisk -iogrp 0 -mdiskgrp 0 -size 10 -unit gb
-rsize 20% -autoexpand -grainsize 32 -name GM_App_Pri_CHANGE_VOL
Virtual Disk, id [5], successfully created

IBM_2145:ITSO_SVC4:superuser>mkvdisk -iogrp 0 -mdiskgrp 0 -size 10 -unit gb -rsize 20% -autoexpand -grainsize 32 -name GM_DB_Sec_CHANGE_VOL
Virtual Disk, id [3], successfully created


IBM_2145:ITSO_SVC4:superuser>mkvdisk -iogrp 0 -mdiskgrp 0 -size 10 -unit gb -rsize 20% -autoexpand -grainsize 32 -name GM_DBLog_Sec_CHANGE_VOL
Virtual Disk, id [4], successfully created
IBM_2145:ITSO_SVC4:superuser>mkvdisk -iogrp 0 -mdiskgrp 0 -size 10 -unit gb -rsize
20% -autoexpand -grainsize 32 -name GM_App_Sec_CHANGE_VOL
Virtual Disk, id [5], successfully created

10.14.21 Stopping the stand-alone remote copy relationship


We now display the remote copy relationships to ensure that they are in sync. Then, we stop
the stand-alone relationship GMREL3, as shown in Example 10-190.

Example 10-190 Stopping the remote copy stand-alone relationship


IBM_2145:ITSO_SVC_DH8:superuser>lsrcrelationship
id name master_cluster_id master_cluster_name master_vdisk_id master_vdisk_name
aux_cluster_id aux_cluster_name aux_vdisk_id aux_vdisk_name primary
consistency_group_id consistency_group_name state
bg_copy_priority progress copy_type cycling_mode
0 GMREL1 000002006BE04FC4 ITSO_SVC_DH8 0 GM_DB_Pri
0000020061C06FCA ITSO_SVC4 0 GM_DB_Sec aux 0
CG_W2K3_GM consistent_synchronized 50 global
none
1 GMREL2 000002006BE04FC4 ITSO_SVC_DH8 1 GM_DBLog_Pri
0000020061C06FCA ITSO_SVC4 1 GM_DBLog_Sec aux 0
CG_W2K3_GM consistent_synchronized 50 global
none
2 GMREL3 000002006BE04FC4 ITSO_SVC_DH8 2 GM_App_Pri
0000020061C06FCA ITSO_SVC4 2 GM_App_Sec aux
consistent_synchronized 50 global none

IBM_2145:ITSO_SVC_DH8:superuser>stoprcrelationship GMREL3

10.14.22 Setting the cycling mode on the stand-alone remote copy relationship
In Example 10-191, we set the cycling mode on the relationship by using the
chrcrelationship command. The cyclingmode and masterchange parameters cannot be
entered in the same command.

Example 10-191 Setting the cycling mode


IBM_2145:ITSO_SVC_DH8:superuser>chrcrelationship -cyclingmode multi GMREL3

10.14.23 Setting the Change Volume on the master volume


In Example 10-192, we set the Change Volume for the primary volume. A display shows the
name of the master Change Volume.


Example 10-192 Setting the Change Volume


IBM_2145:ITSO_SVC_DH8:superuser>chrcrelationship -masterchange GM_App_Pri_CHANGE_VOL GMREL3

IBM_2145:ITSO_SVC_DH8:superuser>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary aux
consistency_group_id
consistency_group_name
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 5
master_change_vdisk_name GM_App_Pri_CHANGE_VOL
aux_change_vdisk_id
aux_change_vdisk_name

10.14.24 Setting the Change Volume on the auxiliary volume


In Example 10-193, we set the Change Volume on the auxiliary volume in the secondary site.
From the display, we can see the name of the volume.

Example 10-193 Setting the Change Volume on the auxiliary volume


IBM_2145:ITSO_SVC4:superuser>chrcrelationship -auxchange GM_App_Sec_CHANGE_VOL 2
IBM_2145:ITSO_SVC4:superuser>
IBM_2145:ITSO_SVC4:superuser>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary aux
consistency_group_id
consistency_group_name
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time


status online
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 5
master_change_vdisk_name GM_App_Pri_CHANGE_VOL
aux_change_vdisk_id 5
aux_change_vdisk_name GM_App_Sec_CHANGE_VOL

10.14.25 Starting the stand-alone relationship in the cycling mode


In Example 10-194, we start the stand-alone relationship GMREL3. After a few minutes, we
check the freeze_time parameter to see how it changes.

Example 10-194 Starting the stand-alone relationship in the cycling mode


IBM_2145:ITSO_SVC_DH8:superuser>startrcrelationship GMREL3

IBM_2145:ITSO_SVC_DH8:superuser>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary aux
consistency_group_id
consistency_group_name
state consistent_copying
bg_copy_priority 50
progress 100
freeze_time 2011/10/04/20/37/20
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 5
master_change_vdisk_name GM_App_Pri_CHANGE_VOL
aux_change_vdisk_id 5
aux_change_vdisk_name GM_App_Sec_CHANGE_VOL

IBM_2145:ITSO_SVC_DH8:superuser>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary aux


consistency_group_id
consistency_group_name
state consistent_copying
bg_copy_priority 50
progress 100
freeze_time 2011/10/04/20/42/25
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 5
master_change_vdisk_name GM_App_Pri_CHANGE_VOL
aux_change_vdisk_id 5
aux_change_vdisk_name GM_App_Sec_CHANGE_VOL

10.14.26 Stopping the Consistency Group to change the cycling mode


In Example 10-195, we stop the Consistency Group with two relationships. You must stop it to
change Global Mirror to cycling mode. A display shows that the state of the Consistency
Group changes to consistent_stopped.

Example 10-195 Stopping the Consistency Group to change the cycling mode
IBM_2145:ITSO_SVC_DH8:superuser>stoprcconsistgrp CG_W2K3_GM

IBM_2145:ITSO_SVC_DH8:superuser>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_stopped
relationship_count 2
freeze_time
status
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2

10.14.27 Setting the cycling mode on the Consistency Group


In Example 10-196, we change the cycling mode of the Consistency Group CG_W2K3_GM. To
change the cycling mode of the Consistency Group, we must stop the Consistency Group;
otherwise, the command fails.


Example 10-196 Setting the Global Mirror cycling mode on the Consistency Group
IBM_2145:ITSO_SVC_DH8:superuser>chrcconsistgrp -cyclingmode multi CG_W2K3_GM

IBM_2145:ITSO_SVC_DH8:superuser>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_stopped
relationship_count 2
freeze_time
status
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2

10.14.28 Setting the Change Volume on the master volume relationships of the
Consistency Group
In Example 10-197 on page 702, we change both of the relationships of the Consistency
Group to add the Change Volumes on the primary volumes. A display shows the name of the
master Change Volumes.

Example 10-197 Setting the Change Volume on the master volume


IBM_2145:ITSO_SVC_DH8:superuser>chrcrelationship -masterchange
GM_DB_Pri_CHANGE_VOL GMREL1

IBM_2145:ITSO_SVC_DH8:superuser>lsrcrelationship GMREL1
id 0
name GMREL1
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
master_vdisk_id 0
master_vdisk_name GM_DB_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 0
aux_vdisk_name GM_DB_Sec
primary aux
consistency_group_id 0
consistency_group_name CG_W2K3_GM
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type global
cycle_period_seconds 300


cycling_mode multi
master_change_vdisk_id 3
master_change_vdisk_name GM_DB_Pri_CHANGE_VOL
aux_change_vdisk_id
aux_change_vdisk_name
IBM_2145:ITSO_SVC_DH8:superuser>

IBM_2145:ITSO_SVC_DH8:superuser>chrcrelationship -masterchange GM_DBLog_Pri_CHANGE_VOL GMREL2

IBM_2145:ITSO_SVC_DH8:superuser>lsrcrelationship GMREL2
id 1
name GMREL2
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
master_vdisk_id 1
master_vdisk_name GM_DBLog_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 1
aux_vdisk_name GM_DBLog_Sec
primary aux
consistency_group_id 0
consistency_group_name CG_W2K3_GM
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 4
master_change_vdisk_name GM_DBLog_Pri_CHANGE_VOL
aux_change_vdisk_id
aux_change_vdisk_name

10.14.29 Setting the Change Volumes on the auxiliary volumes


In Example 10-198, we change both of the relationships of the Consistency Group to add the
Change Volumes to the secondary volumes. The display shows the names of the auxiliary
Change Volumes.

Example 10-198 Setting the Change Volumes on the auxiliary volumes


IBM_2145:ITSO_SVC4:superuser>chrcrelationship -auxchange GM_DB_Sec_CHANGE_VOL GMREL1
IBM_2145:ITSO_SVC4:superuser>lsrcrelationship GMREL1
id 0
name GMREL1
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
master_vdisk_id 0
master_vdisk_name GM_DB_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 0
aux_vdisk_name GM_DB_Sec
primary aux


consistency_group_id 0
consistency_group_name CG_W2K3_GM
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 3
master_change_vdisk_name GM_DB_Pri_CHANGE_VOL
aux_change_vdisk_id 3
aux_change_vdisk_name GM_DB_Sec_CHANGE_VOL

IBM_2145:ITSO_SVC4:superuser>chrcrelationship -auxchange GM_DBLog_Sec_CHANGE_VOL GMREL2


IBM_2145:ITSO_SVC4:superuser>lsrcrelationship GMREL2
id 1
name GMREL2
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
master_vdisk_id 1
master_vdisk_name GM_DBLog_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 1
aux_vdisk_name GM_DBLog_Sec
primary aux
consistency_group_id 0
consistency_group_name CG_W2K3_GM
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 4
master_change_vdisk_name GM_DBLog_Pri_CHANGE_VOL
aux_change_vdisk_id 4
aux_change_vdisk_name GM_DBLog_Sec_CHANGE_VOL

10.14.30 Starting the Consistency Group CG_W2K3_GM in the cycling mode


In Example 10-199, we start the Consistency Group in the cycling mode. Looking at the field
freeze_time, you can see that the Consistency Group was started in the cycling mode and
that it is taking consistency images.

Example 10-199 Starting the Consistency Group with cycling mode


IBM_2145:ITSO_SVC_DH8:superuser>startrcconsistgrp CG_W2K3_GM
IBM_2145:ITSO_SVC_DH8:superuser>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8


aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_copying
relationship_count 2
freeze_time 2011/10/04/21/02/33
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2

IBM_2145:ITSO_SVC_DH8:superuser>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_copying
relationship_count 2
freeze_time 2011/10/04/21/07/42
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2

10.15 Service and maintenance


In this section, we describe the various service and maintenance tasks that you can run
within the SVC environment.

10.15.1 Upgrading software


In this section, we describe how to upgrade the SVC software.

Package numbering and version


The version format for software upgrade packages is four positive integers that are separated
by periods, for example, 7.6.0.0. Each software package is assigned a unique version number.

Important: The support for migration from 7.1.x.x to 7.6.x.x is limited. Check with your
service representative for the recommended steps.


For more information about the recommended concurrent upgrade paths from all previous
versions of software to the latest release in each codestream, see this website:
http://www.ibm.com/support/docview.wss?rs=591&uid=ssg1S1001707

SAN Volume Controller software upgrade test utility


The SVC Software Upgrade Test Utility, which is on the Master Console, checks the software
levels in the system against the recommended levels, which are documented on the support
website. You are informed if the software levels are current or if you must download and install
newer levels. You can download the utility and installation instructions from this website:
http://www.ibm.com/support/docview.wss?rs=591&uid=ssg1S4000585

After the software file is uploaded to the system (the /home/admin/upgrade directory), you can
select the software and apply it to the system. Use the web script and the applysoftware
command. When a new code level is applied, it is automatically installed on all of the nodes
within the system.

The underlying command-line tool runs the sw_preinstall script. This script checks the
validity of the upgrade file and whether it can be applied over the current level. If the upgrade
file is unsuitable, the sw_preinstall script deletes the files to prevent the buildup of invalid
files on the system.

Precaution before you perform the upgrade


Software installation often is considered to be a client’s task. The SVC supports concurrent
software upgrade. You can perform the software upgrade concurrently with I/O user
operations and certain management activities. However, only limited CLI commands are
operational from the time that the installation command starts until the upgrade operation
ends successfully or is backed out. Certain commands fail with a message that indicates that
a software upgrade is in progress.

Before you upgrade the SVC software, ensure that all I/O paths between all hosts and SANs
are working. Otherwise, the applications might have I/O failures during the software upgrade.
Ensure that all I/O paths between all hosts and SANs are working by using the Subsystem
Device Driver (SDD) command. Example 10-200 shows the output.

Example 10-200 Query adapter


#datapath query adapter
Active Adapters :2

Adpt# Name State Mode Select Errors Paths Active


0 fscsi0 NORMAL ACTIVE 1445 0 4 4
1 fscsi1 NORMAL ACTIVE 1888 0 4 4

#datapath query device


Total Devices : 2

DEV#: 0 DEVICE NAME: vpath0 TYPE: 2145 POLICY: Optimized


SERIAL: 60050768018201BF2800000000000000
==========================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 fscsi0/hdisk3 OPEN NORMAL 0 0
1 fscsi1/hdisk7 OPEN NORMAL 972 0

DEV#: 1 DEVICE NAME: vpath1 TYPE: 2145 POLICY: Optimized


SERIAL: 60050768018201BF2800000000000002
==========================================================================
Path# Adapter/Hard Disk State Mode Select Errors


0 fscsi0/hdisk4 OPEN NORMAL 784 0


1 fscsi1/hdisk8 OPEN NORMAL 0 0

Write-through mode: During a software upgrade, periods occur when not all of the nodes
in the system are operational. As a result, the cache operates in write-through mode.
Write-through mode affects the throughput, latency, and bandwidth aspects of
performance.

Verify that your uninterruptible power supply unit configuration is also set up correctly (even if
your system is running without problems). Specifically, ensure that the following conditions
are true:
• Your uninterruptible power supply units all get their power from an external source and are
not daisy chained. Ensure that each uninterruptible power supply unit is not supplying
power to another node’s uninterruptible power supply unit.
• The power cable and the serial cable, which come from each node, go back to the same
uninterruptible power supply unit. If the cables are crossed and go back to separate
uninterruptible power supply units, another node might also be shut down mistakenly
during the upgrade while one node is shut down.

Important: Do not share the SVC uninterruptible power supply unit with any other devices.

You must also ensure that all I/O paths are working for each host that runs I/O operations to
the SAN during the software upgrade. You can check the I/O paths by using the datapath
query commands.

You do not need to check for hosts that have no active I/O operations to the SAN during the
software upgrade.

Upgrade procedure
To upgrade the SVC system software, complete the following steps:
1. Before the upgrade is started, you must back up the configuration and save the backup
configuration file in a safe place.
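
A minimal sketch of this step follows (the svcconfig backup command creates the
svc.config.backup.xml file on the configuration node; the /dumps location and the c:\backup
target directory are assumptions that can vary with the code level and your environment):

IBM_2145:ITSO_SVC_DH8:superuser>svcconfig backup
C:\Program Files (x86)\PuTTY>pscp -load ITSO_SVC_DH8 [email protected]:/dumps/svc.config.backup.xml c:\backup\svc.config.backup.xml
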
2. Before you start to transfer the software code to the clustered system, clear the previously
uploaded upgrade files in the /home/admin/upgrade SVC system directory, as shown in
Example 10-201.

Example 10-201 cleardumps -prefix /home/admin/upgrade command


IBM_2145:ITSO_SVC_DH8:superuser>cleardumps -prefix /home/admin/upgrade
IBM_2145:ITSO_SVC_DH8:superuser>

3. Save the data collection for support diagnosis if you experience problems, as shown in
Example 10-202.

Example 10-202 svc_snap -c command


IBM_2145:ITSO_SVC_DH8:superuser>svc_snap -c
Collecting system information...
Creating Config Backup
Dumping error log...
Creating
Snap data collected in /dumps/snap.110711.111003.111031.tgz


4. List the dump that was generated by the previous command, as shown in
Example 10-203.

Example 10-203 lsdumps command


IBM_2145:ITSO_SVC_DH8:superuser>lsdumps
id filename
0 svc.config.cron.bak_108283
1 sel.110711.trc
2 rtc.race_mq_log.txt.110711.trc
3 ethernet.110711.trc
4 svc.config.cron.bak_110711
5 svc.config.cron.xml_110711
6 svc.config.cron.log_110711
7 svc.config.cron.sh_110711
8 svc.config.backup.bak_110711
9 svc.config.backup.tmp.xml
10 110711.trc
11 svc.config.backup.xml_110711
12 svc.config.backup.now.xml
13 snap.110711.111003.111031.tgz

5. Save the generated dump in a safe place by using the pscp command, as shown in
Example 10-204.

Example 10-204 pscp -load command


C:\Program Files (x86)\PuTTY>pscp -load ITSO_SVC_DH8 [email protected]:/dumps/snap.110711.111003.111031.tgz c:snap.110711.111003.111031.tgz
snap.110711.111003.111031 | 4999 kB | 4999.8 kB/s | ETA: 00:00:00 | 100%

Note: The pscp command does not work if you did not upload your PuTTY SSH private
key or if you are not using the user ID and password with the PuTTY pageant agent, as
shown in Figure 10-14.

Figure 10-14 Pageant example


6. Upload the new software package by using PuTTY Secure Copy. Enter the pscp -load
command, as shown in Example 10-205.

Example 10-205 pscp -load command


C:\Program Files (x86)\PuTTY>pscp -load ITSO_SVC_DH8 c:\IBM2145_INSTALL_7.4.0.0.110926.tgz.gpg [email protected]:/home/admin/upgrade
IBM2145_INSTALL_7.4.0.0.110926.tgz.gpg | 353712 kB | 11053.5 kB/s | ETA: 00:00:00 | 100%

7. Upload the SVC Software Upgrade Test Utility by using PuTTY Secure Copy. Enter the
command, as shown in Example 10-206.

Example 10-206 Upload utility


C:\>pscp -load ITSO_SVC_DH8 IBM2145_INSTALL_upgradetest_12.31
[email protected]:/home/admin/upgrade
IBM2145_INSTALL_svcupgrad | 11 kB | 12.0 kB/s | ETA: 00:00:00 | 100%

8. Verify that the packages were successfully delivered through the PuTTY command-line
application by entering the lsdumps command, as shown in Example 10-207.

Example 10-207 lsdumps command


IBM_2145:ITSO_SVC_DH8:superuser>lsdumps -prefix /home/admin/upgrade
id filename
0 IBM2145_INSTALL_7.4.0.0
1 IBM2145_INSTALL_upgradetest_12.31

9. Now that the packages are uploaded, install the SVC Software Upgrade Test Utility, as
shown in Example 10-208.

Example 10-208 applysoftware command

IBM_2145:ITSO_SVC_DH8:superuser>applysoftware -file
IBM2145_INSTALL_upgradetest_12.31
CMMVC6227I The package installed successfully.

10.Using the svcupgradetest command, test the upgrade for known issues that might prevent
a software upgrade from completing successfully, as shown in Example 10-209.

Example 10-209 svcupgradetest command


IBM_2145:ITSO_SVC_DH8:superuser>svcupgradetest -v 7.4.0.0
svcupgradetest version 12.31 Please wait while the tool tests
for issues that may prevent a software upgrade from completing
successfully. The test will take approximately one minute to complete.
The test has not found any problems with the 2145 cluster.
Please proceed with the software upgrade.

Important: If the svcupgradetest command produces any errors, troubleshoot the errors by
using the maintenance procedures before you continue.

11.Use the applysoftware command to apply the software upgrade, as shown in Example 10-210.


Example 10-210 Applysoftware upgrade command example


IBM_2145:ITSO_SVC_DH8:superuser>applysoftware -file IBM2145_INSTALL_7.4.0.0

While the upgrade runs, you can check the status, as shown in Example 10-211.

Example 10-211 Checking the update status


IBM_2145:ITSO_SVC_DH8:superuser>lsupdate
status system_updating
event_sequence_number
progress 50
estimated_completion_time 140522093020
suggested_action wait
system_new_code_level 7.4.0.1 (build 99.2.141022001)
system_forced no
system_next_node_status updating
system_next_node_time
system_next_node_id 2
system_next_node_name node2

The new code is distributed and applied to each node in the SVC system:
– During the upgrade, you can issue the lsupdate command to see the status of the
upgrade.
– To verify that the upgrade was successful, you can run the lssystem and lsnodevpd
commands, as shown in Example 10-212. (We truncated the lssystem and lsnodevpd
information for this example.)

Example 10-212 lssystem and lsnodevpd commands


IBM_2145:ITSO_SVC_DH8:superuser>lssystem
id 000002007F600A10
...
...
name ITSO_SVC_DH8
location local
partnership
total_mdisk_capacity 825.0GB
space_in_mdisk_grps 571.0GB
space_allocated_to_vdisks 75.05GB
total_free_space 750.0GB
total_vdiskcopy_capacity 85.00GB
total_used_capacity 75.00GB
total_overallocation 10
total_vdisk_capacity 75.00GB
total_allocated_extent_capacity 81.00GB
statistics_status on
statistics_frequency 15
cluster_locale en_US
time_zone 520 US/Pacific
code_level 7.4.0.0 (build 103.11.1410200000)

IBM_2145:ITSO_SVC_DH8:superuser>lsnodevpd 1
id 1
...
...


system code level: 4 fields


id 1
node_name ITSO_SVCN1
WWNN 0x500507680c000508
code_level 7.4.0.0 (build 103.11.1410200000)

The required tasks to upgrade the SVC software are complete.

10.15.2 Running the maintenance procedures


Use the finderr command to generate a list of any unfixed errors in the system. This
command analyzes the last generated log that is in the /dumps/elogs/ directory on the
system.

To generate a new log before you analyze unfixed errors, run the dumperrlog command, as
shown in Example 10-213 on page 711.

Example 10-213 dumperrlog command


IBM_2145:ITSO_SVC_DH8:superuser>dumperrlog

This command generates an errlog_timestamp file, such as errlog_110711_111003_090500:


• errlog is part of the default prefix for all event log files.
• 110711 is the panel name of the current configuration node.
• 111003 is the date (YYMMDD).
• 090500 is the time (HHMMSS).
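
With a new log generated, you can then run the finderr command against it to list any unfixed
errors:
IBM_2145:ITSO_SVC_DH8:superuser>finderr
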
In addition to the regular CLI commands, the service assistant CLI provides the sainfo and
satask command sets. The sainfo actions include the following commands:
lsservicenodes
lsservicerecommendation
lsservicestatus

Running satask -h lists the actions that are available with the satask command set:
IBM_2145:ITSO_SVC_DH8:superuser>satask -h
The following actions are available with this command :
chbootdrive
chenclosurevpd
chnaskey
chnodeled
chserviceip
chvpd
chwwnn
cpfiles
installsoftware
leavecluster
mkcluster
overridequorum
rescuenode
setlocale
setpacedccu
settempsshkey
snap
startservice
stopnode
stopservice
t3recovery

Important: You must use the sainfo and satask command sets under the direction of IBM
Support. The incorrect use of these commands can lead to unexpected results.


10.16 SAN troubleshooting and data collection


When a SAN issue occurs, the SVC is often extremely helpful in troubleshooting it because the SVC sits at the center of the environment through which the communication travels.

For more information about how to troubleshoot and collect data from the SVC, see SAN
Volume Controller Best Practices and Performance Guidelines, SG24-7521, which is
available at this website:
http://www.redbooks.ibm.com/abstracts/sg247521.html

Use the lsfabric command regularly to obtain a complete picture of the devices that are
connected and visible from the SVC cluster through the SAN. The lsfabric command
generates a report that displays the FC connectivity between nodes, controllers, and hosts.

Example 10-214 shows the output of an lsfabric command.

Example 10-214 lsfabric command


IBM_2145:ITSO_SVC_DH8:superuser>lsfabric -delim :
remote_wwpn:remote_nportid:id:node_name:local_wwpn:local_port:local_nportid:state:name:cluster_name:type
500507680304D190:021700:5:nodeA:500507680304A100:1:020300:active:node4:Cluster_9.115.2:node
500507680304D190:021700:2:nodeB:500507680308A101:2:021800:active:node4:Cluster_9.115.2:node
500507680304D190:021700:3:nodeC:500507680308190D:2:020A00:active:node4:Cluster_9.115.2:node
500507680308D190:011700:5:nodeA:500507680308A100:2:011000:blocked:node4:Cluster_9.115.2:node
500507680308D190:011700:2:nodeB:500507680304A101:1:010D00:blocked:node4:Cluster_9.115.2:node
500507680308D190:011700:3:nodeC:500507680304190D:1:011200:blocked:node4:Cluster_9.115.2:node

For more information about the lsfabric command, see the V7.6 Command-Line Interface
User’s Guide for IBM System Storage SAN Volume Controller and Storwize V7000,
GC27-2287.

10.17 Recover system procedure


The recover system procedure recovers the entire system if the system state is lost from all nodes (for example, a simultaneous power loss on all SVC nodes caused by pulling the power cords between the uninterruptible power supply units and the SVC nodes). The procedure re-creates the system by using saved configuration data. The saved configuration data is in the active quorum disk and the latest XML configuration backup file. The recovery might not be able to restore all volume data (for example, data that was not destaged from cache at the time of the total system failure). This procedure is also known as (automatic) Tier 3 (T3) recovery.

The recover system procedure is an extremely sensitive procedure that is to be used as a last
resort only. The procedure should not be used by the client or an IBM service support
representative (SSR) without direct guidance from IBM Remote Technical Support. Initiating
the T3 recover system procedure while the system is in a specific state can result in the loss
of the XML configuration backup files.

For further information about the T3 recover system procedure, see the IBM SAN Volume Controller documentation:


https://ibm.biz/BdHnKF



Chapter 11. Operations using the GUI


In this chapter, we describe IBM SAN Volume Controller (SVC) operational management and system administration by using the IBM Spectrum Virtualize V7.6 graphical user interface (GUI). The management GUI is a tool that is provided by the IBM Spectrum Virtualize software engine and that helps you to monitor, manage, and configure your system.

The information is divided into normal operations and advanced operations. We explain the basic configuration procedures that are required to get your IBM Spectrum Virtualize environment running as quickly as possible by using the GUI.

In this chapter, we focus on operational aspects. We do not discuss advanced troubleshooting, problem determination, or certain complex operations (compression and encryption) because they are explained in other parts of this publication.

This chapter includes the following topics:
- Normal SVC operations using GUI
- Monitoring menu
- Working with external disk controllers
- Working with storage pools
- Working with managed disks
- Migration
- Working with hosts
- Working with volumes
- Copy Services and managing FlashCopy
- Copy Services: Managing Remote copy
- Managing the SVC clustered system by using the GUI
- Upgrading software
- Managing I/O Groups
- Managing nodes
- Troubleshooting
- User management
- Configuration
- Upgrading the IBM Spectrum Virtualize software


11.1 Normal SVC operations using GUI


In this section, we describe several tasks that we define as regular, day-to-day activities. For illustration, we configured the SVC cluster in a standard topology. However, most of the tasks that are described in this chapter are similar in an Enhanced Stretched Cluster or HyperSwap topology. Details about the SVC Stretched Cluster are available in IBM SAN Volume Controller Enhanced Stretched Cluster with VMware, SG24-8211.

Multiple users can be logged in to the GUI at any time. However, no locking mechanism
exists, so be aware that if two users change the same object at the same time, the last action
that is entered from the GUI is the one that takes effect.

Important: Data entries that are made through the GUI are case-sensitive.

11.1.1 Introduction to the GUI


As shown in Figure 11-1, the IBM SAN Volume Controller GUI System panel is an important
user interface. Throughout this chapter, we refer to this interface as the IBM SAN Volume
Controller System panel or just System panel.

In later sections of this chapter, we expect users to be able to navigate to this panel without
explanation of the procedure each time.

Figure 11-1 IBM SAN Volume Controller System panel (showing the dynamic menu with functions, the expandable system overview, the performance meter, the running jobs indicator, the errors and warnings indicator, the health indicator, and the capacity overview)

Dynamic menu
From any page in the IBM Spectrum Virtualize GUI, you can always access the dynamic
menu. The IBM Spectrum Virtualize GUI dynamic menu is on the left side of the GUI window.
To browse by using this menu, hover the mouse pointer over the various icons and choose a
page that you want to display, as shown in Figure 11-2 on page 717.


Figure 11-2 The dynamic menu in the left column of IBM SAN Volume Controller GUI

Starting with IBM Spectrum Virtualize V7.6, the dynamic function of the left menu is disabled by default and the icons are of a static size. To enable the dynamic appearance, navigate to Settings → GUI Preferences → Navigation, as shown in Figure 11-3. Select the check box to enable animation (the dynamic function) and click the Save button. For the changes to take effect, log in again or refresh the GUI cache from the General panel in GUI Preferences.

Figure 11-3 Navigation

The IBM Spectrum Virtualize dynamic menu consists of multiple panels that are independent of the underlying hardware (SVC or IBM Storwize family). These panels group common configuration and administration objects and present individual administrative objects to the IBM Spectrum Virtualize GUI users, as shown in Figure 11-4.


Figure 11-4 IBM Spectrum Virtualize GUI panel managing SVC model DH8

Welcome banner
IBM Spectrum Virtualize V7.6 and later allows administrators to configure a welcome banner: a text message that appears either on the GUI login screen or at the CLI login prompt. The welcome message is helpful when you need to notify users of important information about the system, for example, security warnings or a location description.

The welcome banner (or login message) can be enabled from the GUI or the CLI. To define such a message, use the commands that are outlined in Example 11-1: copy the text file that contains the welcome message to your configuration node and enable the banner from the CLI. In our case, the content of the file is located in /tmp/banner.

Example 11-1 Configure welcome message


IBM_2145:ITSO SVC DH8:ITSO_admin>chbanner -file=/tmp/banner -enable
IBM_2145:ITSO SVC DH8:ITSO_admin>

## where the file contains:


node1:/tmp # cat /tmp/banner
Do not use the system if you are not sure what to do here!

## To disable showing of the message use:


IBM_2145:ITSO SVC DH8:ITSO_admin>chbanner -disable

The result of the preceding action is illustrated in Figure 11-5 on page 719, which shows the login message in the GUI and at the CLI login prompt.


Figure 11-5 Welcome message in GUI and CLI

Suggested tasks
After a successful login, IBM Spectrum Virtualize opens a pop-up window with suggested tasks, notifying administrators that several key SVC functions are not yet configured. You cannot miss or overlook this window. However, you can close the pop-up window and perform the suggested tasks at any time.

Figure 11-6 shows the suggested tasks in the System panel.

Figure 11-6 Suggested tasks

In this case, the SVC GUI warns you that so far no volume is mapped to the host or that no
host is defined yet. You can directly perform the task from this window or cancel it and
execute the procedure later at any convenient time. Other suggested tasks that typically


appear after the initial configuration of the SVC are to create a volume and configure a
storage pool, for example.

The dynamic IBM Spectrum Virtualize menu contains the following panels (Figure 11-4 on page 718):
- Monitoring
- Pools
- Volumes
- Hosts
- Copy Services
- Access
- Settings

Notification status area


A control panel is available in the bottom part of the window. This panel is divided into five status areas that provide information about your system. These persistent state notification widgets are collapsed by default, as shown in Figure 11-7. The notification area contains two status bar indicators (capacity and system health), a performance meter, a running tasks indicator, and an alerts indicator.

Figure 11-7 Notification area

Health status and alerts indication


The rightmost area of the control panel provides information about the health status of the system and the alerts that are logged in your system, as shown in Figure 11-8.

Figure 11-8 Health Status area

If non-critical issues exist for your system nodes, external storage controllers, or remote
partnerships, a new status area opens next to the Health Status widget, as shown in
Figure 11-9.

Figure 11-9 Controller path status alert

You can fix the error by clicking Status Alerts to direct you to the Events panel fix procedures.


If a critical system connectivity error exists, the Health Status bar turns red and alerts the
system administrator for immediate action, as shown in Figure 11-10.

Figure 11-10 External storage connectivity loss

Storage allocation indicator


The leftmost indicator shows information about the overall physical capacity (the initial
amount of storage that was allocated). This indicator also shows the virtual capacity
(thin-provisioned storage). The virtual volume size is dynamically changed as data grows or
shrinks, but you still see a fixed capacity. Click the indicator to switch between physical and
virtual capacity, as shown in Figure 11-11.

Figure 11-11 Storage allocation area

The following information is displayed in this storage allocation indicator window. To view all of the information, you must use the up and down arrow keys:
- Allocated capacity
- Virtual capacity
- Compression ratio

Important: Since IBM Spectrum Virtualize version 7.4, the capacity units use the binary prefixes that are defined by the International Electrotechnical Commission (IEC). The prefixes represent powers of 1024, with the symbols GiB (gibibyte), TiB (tebibyte), and PiB (pebibyte).

Running tasks indicator


The left part of the notification area provides information about the running tasks in the system, as shown in Figure 11-12.

Figure 11-12 Long-running tasks area

The following information is displayed in this window:


- Volume migration
- MDisk removal
- Image mode migration
- Extent migration
- FlashCopy
- Metro Mirror
- Global Mirror
- Volume formatting
- Space-efficient copy repair
- Volume copy verification
- Volume copy synchronization
- Estimated time for the task completion

By clicking within the square (as shown in Figure 11-7 on page 720), this area provides
detailed information about running and recently completed tasks, as shown in Figure 11-13.

Figure 11-13 Details about running tasks

Performance meter
In the middle of the notification area, there is a performance meter that consists of three measured read and write parameters: bandwidth, IOPS, and latency. See Figure 11-14 for details.

Figure 11-14 Performance meter

Overview window
Since IBM Spectrum Virtualize V7.4, the welcome window of the GUI changed from the
well-known former Overview panel to the new System panel, as shown in Figure 11-1 on
page 716. Clicking Overview (Figure 11-15 on page 723) in the upper-right corner of the
System panel opens a modified Overview panel with options that are similar to previous
versions of the software.


Figure 11-15 Opening the Overview panel

The rest of this chapter helps you to understand the structure of the panel and how to navigate to the various system components to manage them more efficiently and quickly.

11.1.2 Content view organization


The following sections describe several view options within the GUI in which you can filter (to
minimize the amount of data that is shown on the window), sort, and reorganize the content
on the window.

Table filtering
On most pages, a Filter option (magnifying glass icon) is available on the upper-left side of the
window. Use this option if the list of object entries is too long.

Complete the following steps to use search filtering:


1. Click Filter on the upper-left side of the window, as shown in Figure 11-16, to open the
search box.

Figure 11-16 Show filter search box


2. Enter the text string that you want to filter and press Enter.
3. By using this function, you can filter your table that is based on column names. In our
example, a volume list is displayed that contains the names that include DS somewhere in
the name. DS is highlighted in amber, as shown in Figure 11-17. The search option is not
case-sensitive.

Figure 11-17 Show filtered rows

4. Remove this filtered view by clicking the reset filter icon, as shown in Figure 11-18.

Figure 11-18 Reset the filtered view

Filtering: This filtering option is available in most menu options of the GUI.
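If you prefer the CLI, most list commands accept a -filtervalue parameter that provides a similar result. The following is a minimal sketch that lists only the volumes whose names start with DS; the attribute and the trailing wildcard are examples, so check the Command-Line Interface User's Guide for the exact filter attributes that your code level supports:

IBM_2145:ITSO_SVC_DH8:superuser>lsvdisk -filtervalue name=DS*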

Table information
In the table view, you can add or remove the information in the tables on most pages.

For example, on the Volumes page, complete the following steps to add a column to our table:
1. Right-click any column headers of the table or select the icon in the left corner of the table
header. A list of all of the available columns appears, as shown in Figure 11-19 on
page 724.

Figure 11-19 Add or remove details in a table


2. Select the column that you want to add (or remove) from this table. In our example, we
added the volume ID column and sorted the content by ID, as shown on the left in
Figure 11-20.

Figure 11-20 Table with an added ID column

3. You can repeat this process several times to create custom tables to meet your
requirements.
4. You can always return to the default table view by selecting Restore Default View in the
column selection menu, as shown in Figure 11-21 on page 725.

Figure 11-21 Restore default table view

Sorting: By clicking a column, you can sort a table that is based on that column in
ascending or descending order.


Reorganizing columns in tables


You can move columns by left-clicking and moving the column right or left, as shown in
Figure 11-22. We are attempting to move the State column after the Capacity column.

Figure 11-22 Reorganizing the table columns

11.1.3 Help
To access online help, move the mouse pointer over the question mark (?) icon in the
upper-right corner of any panel and select the context-based help topic, as shown in
Figure 11-23 on page 726. Depending on the panel you are working with, the help displays its
context item.

Figure 11-23 Help link

By clicking Information Center, you are directed to the public IBM Knowledge Center, which
provides all of the information about the SVC systems, as shown in Figure 11-24.

Figure 11-24 SVC information in the IBM Knowledge Center


11.2 Monitoring menu


Hover the cursor over the Monitoring function icon to open the Monitoring menu (Figure 11-25 on page 727). The Monitoring menu offers these navigation directions:
- System: This option opens the general overview of your SVC system, including the depiction of all devices in a rack and the storage capacity. For more information, see 11.2.1, “System overview” on page 727.
- Events: This option tracks all informational, warning, and error messages that occurred in the system. You can apply various filters to sort the messages according to your needs or export the messages to an external comma-separated values (CSV) file. For more information, see 11.2.3, “Events” on page 731.
- Performance: This option reports the general system statistics that relate to the processor (CPU) utilization, host and internal interfaces, volumes, and MDisks. You can switch between MBps or IOPS. For more information, see 11.2.4, “Performance” on page 732.

As of V7.4, the option that was formerly called System Details is integrated into the device
overview on the general System panel, which is available after logging in or when clicking the
option System from the Monitoring menu. For more information, see “Overview window” on
page 722.

Figure 11-25 Accessing the Monitoring menu

In the following sections, we describe each option on the Monitoring menu.

11.2.1 System overview


The System option on the Monitoring menu provides a general overview about your SVC
system, including the depiction of all devices in a rack and the allocated or physical storage
capacity. When thin-provisioned volumes are enabled, the virtual capacity is also shown by
hovering your mouse over the capacity indicator. For more details, see Figure 11-26 on
page 728.


Figure 11-26 System overview that shows capacity

When you click a specific component of a node, a pop-up window indicates the details of the
disk drives in the unit. By right-clicking and selecting Properties, you see detailed technical
parameters, such as capacity, interface, rotation speed, and the drive status (online or offline).

See Figure 11-27 for the details of node 1 in a cluster.

Figure 11-27 Component details

In an environment with multiple SVC systems, you can easily direct the onsite personnel or
technician to the correct device by enabling the identification LED on the front panel. Click
Identify in the pop-up window that is shown in Figure 11-26. Then, wait for confirmation from
the technician that the device in the data center was correctly identified.

After the confirmation, click Turn LED Off (Figure 11-28).


Figure 11-28 Using the identification LED

Alternatively, you can use the SVC command-line interface (CLI) to get the same results.
Type the following commands in this sequence:
1. Type svctask chnode -identify yes 1 (or just type chnode -identify yes 1).
2. Type svctask chnode -identify no 1 (or just type chnode -identify no 1).

Each system that is shown in the Dynamic system view in the middle of a System panel can
be rotated by 180° to see its rear side. Click the rotation arrow in the lower-right corner of the
device, as illustrated in Figure 11-29.

Figure 11-29 Rotating the enclosure

11.2.2 System details


The System Details option was removed from the Monitoring menu in IBM Spectrum
Virtualize V7.3; however, its modified information is still available directly from the System
panel. It provides an extended level of the parameters and technical details that relate to the
system, including the integration of each element into an overall system configuration.
Right-click the enclosure that you want and click Properties to obtain detailed information.


Figure 11-30 System details

The output is shown in Figure 11-31. By using this menu, you can also power off the machine
(without an option for remote start), remove the node or enclosure from the system, or list all
of the volumes that are associated with the system, for example.

Figure 11-31 Enclosure technical details

The Properties option now also provides information about encryption support, that is, whether hardware encryption is supported. In other words, it shows whether the prerequisite hardware (additional CPU and compression cards) is installed in the system and whether the encryption licenses are enabled.


In addition, from the System panel, you can see an overview of important status information
and the parameters of the Fibre Channel (FC) ports (Figure 11-32).

Figure 11-32 Canister details and vital product data (VPD)

By choosing Fibre Channel Ports, you can see the list and status of the available FC ports
with their worldwide port names (WWPNs), as shown in Figure 11-33.

Figure 11-33 Status of a node’s FC ports

11.2.3 Events
The Events option, which you select from the Monitoring menu, tracks all informational, warning, and error messages that occur in the system. You can apply various filters to sort the messages, or export the information that is shown to an external comma-separated values (CSV) file. Figure 11-34 on page 732 provides an example of records in the SVC Event log.


Figure 11-34 Event log list
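The same event information is also available from the CLI with the lseventlog command. The following is a minimal sketch; parameters for filtering and ordering the output are described in the Command-Line Interface User's Guide:

IBM_2145:ITSO_SVC_DH8:superuser>lseventlog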

11.2.4 Performance
The Performance panel reports the general system statistics that relate to processor (CPU)
utilization, host and internal interfaces, volumes, and MDisks. You can switch between MBps
or IOPS or even drill down in the statistics to the node level. This capability might be useful
when you compare the performance of each node in the system if problems exist after a node
failover occurs. See Figure 11-35.

Figure 11-35 Performance statistics of the SVC

The performance statistics in the GUI show, by default, the latest five minutes of data. To see
details of each sample, click the graph and select the time stamp, as shown in Figure 11-36
on page 733.


Figure 11-36 Sample details

The charts that are shown in Figure 11-36 represent five minutes of the data stream. For in-depth storage monitoring and performance statistics with historical data about your SVC system, use IBM Spectrum Control (formerly IBM Tivoli Storage Productivity Center for Disk and IBM Virtual Storage Center).
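For a quick spot check from the CLI, the lssystemstats command returns the most recent samples of the system-level statistics, and lsnodestats returns the per-node values. The following is a minimal sketch; node1 is an example node name, and the exact set of statistics depends on your code level:

IBM_2145:ITSO_SVC_DH8:superuser>lssystemstats
IBM_2145:ITSO_SVC_DH8:superuser>lsnodestats node1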

11.3 Working with external disk controllers


In this section, we describe various configuration and administrative tasks that you perform on
external disk controllers within the SVC environment.

11.3.1 Viewing the disk controller details


To view information about a back-end disk controller that is used by the SVC environment,
select Pools in the dynamic menu and then select External Storage.

The External Storage panel that is shown in Figure 11-37 opens.

Figure 11-37 Disk controller systems

For more information about a specific controller and MDisks, click the plus sign (+) that is to
the left of the controller icon and name.


11.3.2 Renaming a disk controller


After you present a new storage system to the SVC, complete the following steps to name it
so that the storage administrators can identify it more easily:
1. Right-click the newly presented controller default name. Select Rename and enter the
name that you want to associate with this storage system, as shown in Figure 11-38.

Figure 11-38 Renaming a storage system

2. Enter the new name that you want to assign to the controller and then click Rename, as
shown in Figure 11-39.

Figure 11-39 Changing the name of a storage system

Controller name: The name can consist of the letters A - Z and a - z, the numbers
0 - 9, the dash (-), and the underscore (_) character. The name can be 1 - 63
characters. However, the name cannot start with a number, dash, or underscore.

3. A task is started to change the name of this storage system. When it completes, you can
close this window. The new name of your controller is displayed on the External Storage
panel.
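The same rename can be performed from the CLI with the chcontroller command. The following is a minimal sketch; the new name ITSO_DS3500 and the default name controller0 are examples only:

IBM_2145:ITSO_SVC_DH8:superuser>chcontroller -name ITSO_DS3500 controller0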


11.3.3 Discovering Storage from the external panel


You can discover MDisks from the External Storage panel. Complete the following steps to
discover new MDisks:
1. Ensure that no existing controllers are highlighted. Click Actions.
2. Click Discover Storage to discover MDisks from this controller, as shown in Figure 11-40.

Figure 11-40 Discovering external storage

3. When the task completes, click Close to see the newly detected MDisks.
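From the CLI, a similar result can be achieved by running detectmdisk, which rescans the Fibre Channel network for new MDisks, and then listing the unmanaged MDisks. The following is a minimal sketch; the filter value is an example:

IBM_2145:ITSO_SVC_DH8:superuser>detectmdisk
IBM_2145:ITSO_SVC_DH8:superuser>lsmdisk -filtervalue mode=unmanaged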

11.4 Working with storage pools


In this section, we describe the tasks that can be performed with the storage pools.

11.4.1 Viewing storage pool information


Perform the following tasks from the Pools panel, as shown in Figure 11-41 on page 735. To
access this panel from the SVC System panel, hover the cursor over the Pools menu and
click Volumes by Pool.

Figure 11-41 Viewing the storage pools


You can add new columns to the table, as described in “Table information” on page 724.

To retrieve more information about a specific storage pool, select any storage pool in the left column. The upper-right corner of the panel, which is shown in Figure 11-42, contains the following information about this pool:
- Status
- Number of MDisks
- Number of volume copies
- Whether Easy Tier is active on this pool
- Site assignment
- Volume allocation
- Capacity

Figure 11-42 Detailed information about a pool

Change the view by selecting MDisks by Pools. Select the pool with which you want to work
and click the plus sign (+), which expands the information. This panel displays the MDisks
that are present in this storage pool, as shown in Figure 11-43.

Figure 11-43 MDisks that are present in a storage pool

From this window, you can also directly assign discovered storage (detected MDisks) to the appropriate storage pools. Use the icons below the list of controllers and click the Assign button.
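The same pool information can also be listed from the CLI with the lsmdiskgrp command. The following is a minimal sketch; ITSO_Pool1 is an example pool name:

IBM_2145:ITSO_SVC_DH8:superuser>lsmdiskgrp
IBM_2145:ITSO_SVC_DH8:superuser>lsmdiskgrp ITSO_Pool1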

11.4.2 Creating storage pools


Complete the following steps to create a storage pool:
1. From the SVC dynamic menu, navigate to Pools option and select MDisks by Pools.
The MDisks by Pools panel opens. Click Create Pool, as shown in Figure 11-44.


Figure 11-44 Selecting the option to create a new storage pool

The Create Storage Pools wizard opens.


2. In the first window of wizard, complete the following elements, as shown in Figure 11-45:
a. Specify a name for the storage pool. If you do not provide a name, the SVC
automatically generates the name mdiskgrpx, where x is the ID sequence number that
is assigned by the SVC internally.

Storage pool name: You can use the letters A - Z and a - z, the numbers 0 - 9, and
the underscore (_) character. The name can be 1 - 63 characters. The name is
case-sensitive. The name cannot start with a number or the pattern “MDiskgrp”
because this prefix is reserved for SVC internal assignment only.

b. Optional: Change the icon that is associated with this storage pool, as shown in
Figure 11-45.
c. In addition, you can specify the following information and then click Next:
• Extent Size under the Advanced Settings section. The default is 1 GiB.
• Encryption settings (Advanced Settings). The default depends on whether your SVC is encryption-enabled.

Figure 11-45 Create the storage pool (step 1 of 2)

Important: If you enabled software encryption on your SVC, each newly defined storage pool has encryption enabled by default. If you intentionally do not want the pool to be encrypted, disable this option by clearing the check box during the pool definition.


3. In the next window (as shown in Figure 11-46), complete the following steps to specify the
MDisks that you want to associate with the new storage pool:
a. Select the MDisks that you want to add to this storage pool.

Tip: To add multiple MDisks, press and hold the Ctrl key and click selected items.

b. Click Finish to complete the creation process.

Figure 11-46 Create Pool window (step 2 of 2)

4. Close the task completion window. In the Storage Pools panel (as shown in Figure 11-47),
the new storage pool is displayed.

Figure 11-47 New storage pool was added successfully

All of the required tasks to create a storage pool are complete.
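A storage pool can also be created from the CLI with the mkmdiskgrp command. The following is a minimal sketch that creates a pool with the default 1 GiB extent size; the pool name is an example, and the -ext value is specified in MB:

IBM_2145:ITSO_SVC_DH8:superuser>mkmdiskgrp -name ITSO_Pool1 -ext 1024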

11.4.3 Renaming a storage pool


To rename a storage pool, complete the following steps:
1. Select the storage pool that you want to rename and then click Actions → Rename, as
shown in Figure 11-48 on page 739.


Figure 11-48 Renaming a storage pool

2. Enter the new name that you want to assign to the storage pool and click Rename, as
shown in Figure 11-49.

Figure 11-49 Changing the name for a storage pool

Storage pool name: The name can consist of the letters A - Z and a - z, the numbers
0 - 9, the dash (-), and the underscore (_) character. The name can be 1 - 63
characters. However, the name cannot start with a number, dash, or underscore.

11.4.4 Deleting a storage pool


To delete an empty storage pool, complete the following steps:
1. Select the storage pool that you want to delete and then click Actions → Delete Pool, as
shown in Figure 11-50. Alternatively, you can right-click directly on the pool that you want
to delete and get the same options from the menu.

Figure 11-50 Delete Pool option


2. After you click the Delete Pool option, the command is run directly. There is no confirmation prompt. However, starting with IBM Spectrum Virtualize V7.4, it is not possible to delete a pool with assigned and active MDisks; the option is unavailable from the menu, as indicated in Figure 11-51. First, remove the obsolete MDisks from the pool.

Figure 11-51 Deleting a pool

Important: IBM Spectrum Virtualize does not allow the user to directly delete pools that contain active volumes with past I/O activity.
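From the CLI, an empty storage pool can be deleted with the rmmdiskgrp command. The following is a minimal sketch; ITSO_Pool1 is an example pool name, and you should check the Command-Line Interface User's Guide for the behavior when the pool still contains MDisks or volumes:

IBM_2145:ITSO_SVC_DH8:superuser>rmmdiskgrp ITSO_Pool1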

11.5 Working with managed disks


This section describes the various configuration and administrative tasks that you can
perform on the managed disks (MDisks) within the IBM Spectrum Virtualize environment.

11.5.1 MDisk information


From the SVC dynamic menu, select Pools and click MDisks by Pools. The MDisks panel
opens, as shown in Figure 11-52 on page 741. Click the plus sign (+) for one or more storage
pools to see the MDisks that belong to a certain pool.

To retrieve more information about a specific MDisk, complete the following steps:
1. From the expanded view of a storage pool in the MDisks panel, select an MDisk.
2. Click Properties, as shown in Figure 11-52 on page 741.


Figure 11-52 MDisks menu

3. For the selected MDisk, a basic overview is displayed that shows its important parameters, as shown in Figure 11-53. For additional technical details, expand the View more details section.

Figure 11-53 MDisk Overview page

4. Click the Dependent Volumes tab to display information about the volumes that are on
this MDisk, as shown in Figure 11-54. For more information about the volume panel, see
11.8, “Working with volumes” on page 759.

Figure 11-54 Dependent volumes for an MDisk

5. Click Close to return to the previous window.


11.5.2 Renaming an MDisk


Complete the following steps to rename an MDisk that is managed by the SVC clustered
system:
1. In the MDisk panel that is shown in Figure 11-52 on page 741, select the MDisk that you
want to rename.
2. Click Actions → Rename, as shown in Figure 11-55.
You can select multiple MDisks to rename by pressing and holding the Ctrl key and
selecting the MDisks that you want to rename.

Figure 11-55 Rename MDisk action

Alternative: You can right-click an MDisk directly, and select Rename.

3. In the Rename MDisk window (Figure 11-56), enter the new name that you want to assign
to the MDisk and click Rename.

Figure 11-56 Renaming an MDisk

MDisk name: The name can consist of the letters A - Z and a - z, the numbers 0 - 9,
the dash (-), and the underscore (_) character. The name can be 1 - 63 characters.

11.5.3 Discovering MDisks


Complete the following steps to discover newly assigned MDisks:
1. In the SVC dynamic menu, move the pointer over Pools and click External Storage.
2. Ensure that no existing controllers are selected. Click Actions.
3. Click Discover Storage, as shown in Figure 11-57.


Figure 11-57 Discover Storage action

The Discover Devices window opens.


4. When the task is completed, click Close.
5. Newly assigned MDisks are displayed in the Unassigned MDisks window as Unmanaged,
as shown in Figure 11-58.

Figure 11-58 Newly discovered unmanaged disks

Troubleshooting: If your MDisks are still not visible, check that the logical unit numbers
(LUNs) from your subsystem are correctly assigned to SVC. Also, check that correct
zoning is in place. For example, ensure that SVC can see the disk subsystem.

Site awareness: Do not assign sites to the SVC nodes and external storage controllers
in a standard, normal topology. Site awareness is intended primarily for SVC Stretched
Clusters or HyperSwap. If any MDisks or controllers appear offline after detection,
remove the site assignment from the SVC node or controller and discover storage.

11.5.4 Assigning MDisks to storage pool


If you have empty storage pools, or you want to assign more MDisks to pools that already contain MDisks, use the following steps:

Important: You can add only unmanaged MDisks to a storage pool.


1. From the MDisks by Pools panel, select the unmanaged MDisk that you want to add to a
storage pool.
2. Click Actions → Assign, as shown in Figure 11-59.

Figure 11-59 Assign an MDisk to a pool

Alternative: You can also access the Assign action by right-clicking an MDisk.

3. From the Add MDisk to Pool window, select to which pool you want to add this MDisk, and
then, click Assign, as shown in Figure 11-60.

Figure 11-60 Adding MDisk to an existing storage pool

4. Before you assign an MDisk to a specific pool, you can select the storage tier of the disk that is being assigned and indicate whether the MDisk is externally encrypted. If it is, SVC-based encryption is disabled for that MDisk, even if it is assigned to a pool that is encrypted by the SVC. See Figure 11-61 on page 745 for details.


Figure 11-61 Assigning storage tier
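The CLI equivalent is the addmdisk command. The following is a minimal sketch; mdisk5 and ITSO_Pool1 are example names, and a -tier parameter is also available if you want to set the storage tier at the same time:

IBM_2145:ITSO_SVC_DH8:superuser>addmdisk -mdisk mdisk5 ITSO_Pool1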

11.5.5 Unassigning MDisks from a storage pool


To unassign an MDisk from a storage pool, complete the following steps:
1. Select the MDisk that you want to unassign from a storage pool.
2. Click Actions → Remove, as shown in Figure 11-62 on page 745.

Figure 11-62 Actions: Unassign from Pool

Alternative: You can also access the Remove action by right-clicking an assigned
MDisk.

If volumes are using the MDisks that you are removing from the storage pool, you must
confirm that the volume data will be migrated to other MDisks in the pool.
3. Click Yes, as shown in Figure 11-63.

Figure 11-63 Removing MDisk from existing storage pool


When the migration is complete, the MDisk status changes to Unmanaged. Ensure that
the MDisk remains accessible to the system until its status becomes Unmanaged. This
process might take time. If you disconnect the MDisk before its status becomes
Unmanaged, all of the volumes in the pool go offline until the MDisk is reconnected.
An error message is displayed (as shown in Figure 11-64) if insufficient space exists to
migrate the volume data to other extents on other MDisks in that storage pool.

Figure 11-64 Unassign MDisk error message
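From the CLI, an MDisk can be removed from a pool with the rmmdisk command. The following is a minimal sketch; mdisk5 and ITSO_Pool1 are example names, and the -force option is required when extents on the MDisk are in use so that the data is first migrated to the remaining MDisks in the pool:

IBM_2145:ITSO_SVC_DH8:superuser>rmmdisk -force -mdisk mdisk5 ITSO_Pool1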

11.6 Migration
For more information about data migration, see Chapter 6, “Data migration” on page 237.


11.7 Working with hosts


In this section, we describe the various configuration and administrative tasks that you can
perform on the host object that is connected to your SVC.

Host configuration: For more information about connecting hosts to the SVC in a SAN
environment, see Chapter 5, “Host configuration” on page 173.

A host system is a computer that is connected to SVC through FC interface, Fibre Channel
over Ethernet (FCoE), or an Internet Protocol (IP) network.

A host object is a logical object in SVC that represents a list of worldwide port names
(WWPNs) and a list of Internet Small Computer System Interface (iSCSI) names that identify
the interfaces that the host system uses to communicate with the SVC. iSCSI names can be
iSCSI-qualified names (IQN) or extended unique identifiers (EUI).

A typical configuration has one host object for each host system that is attached to SVC. If a
cluster of hosts accesses the same storage, you can add host bus adapter (HBA) ports from
several hosts to one host object to simplify a configuration. A host object can have both
WWPNs and iSCSI names.

The following methods can be used to visualize and manage your SVC host objects from the SVC GUI Hosts menu selection:
- Use the Hosts panel, as shown in Figure 11-65.

Figure 11-65 Hosts panel

- Use the Ports by Host panel, as shown in Figure 11-66.

Figure 11-66 Ports by Host panel

- Use the Host Mapping panel, as shown in Figure 11-67.


Figure 11-67 Host Mapping panel

- Use the Volumes by Hosts panel, as shown in Figure 11-68 on page 748.

Figure 11-68 Volumes by Hosts panel

Important: Several actions on the hosts are specific to the Ports by Host panel or the Host Mapping panel, but all of these actions and others are accessible from the Hosts panel. For this reason, all actions on hosts in this section are run from the Hosts panel.

11.7.1 Host information


To access the Hosts panel from the IBM SAN Volume Controller System panel that is shown
in Figure 11-1 on page 716, move the mouse pointer over the Hosts selection of the dynamic
menu and click Hosts.

You can add information (new columns) to the table in the Hosts panel, as shown in
Figure 11-19 on page 724. For more information, see “Table information” on page 724.

To retrieve more information about a specific host, complete the following steps:
1. In the table, select a host.
2. Click Actions → Properties, as shown in Figure 11-69.


Figure 11-69 Actions: Host properties

Alternative: You can also access the Properties action by right-clicking a host.

3. You are presented with information for a host in the Overview window, as shown in
Figure 11-70 on page 749.

Figure 11-70 Host details: Overview

Show Details option: To obtain more information about the hosts, select Show Details
(Figure 11-70).

4. On the Mapped Volumes tab (Figure 11-71), you can see the volumes that are mapped to
this host.

Figure 11-71 Host details: Mapped volumes


5. The Port Definitions tab, as shown in Figure 11-72 on page 750, displays attachment
information, such as the WWPNs that are defined for this host or the iSCSI IQN that is
defined for this host.

Figure 11-72 Host details: Port Definitions tab

When you finish viewing the details, click Close to return to the previous window.

11.7.2 Adding a host


Two types of connections to hosts are available: Fibre Channel (FC and FCoE) and iSCSI. In this section, we describe the following types of connection methods:
- For FC hosts, see the steps in “Fibre Channel-attached hosts”.
- For iSCSI hosts, see the steps in “iSCSI-attached hosts” on page 752.

Note: The FCoE hosts are listed under the FC Hosts Add menu in the SVC GUI. Click Fibre Channel Hosts to access the FCoE host options. (See Figure 11-74 on page 751.)

Fibre Channel-attached hosts


To create a host that uses the FC connection type, complete the following steps:
1. Go to the Hosts panel from the SVC System panel that is shown in Figure 11-1 on
page 716, hover cursor over the Hosts selection and click Hosts, as shown in
Figure 11-73.
2. Click Add Host, as shown in Figure 11-73.

Figure 11-73 Create Host action

3. Select Fibre Channel Host from the two available connection types, as shown in Figure 11-74 on page 751. We recommend that you expand the Advanced menu to show more details.


Figure 11-74 Create a Fibre Channel host

4. In the Add Host window (Figure 11-75 on page 752), enter a name for your host in the
Host Name field.

Host name: If you do not provide a name, the SVC automatically generates the name
hostx (where x is the ID sequence number that is assigned by the SVC internally). If you
provide a name, use letters A - Z and a - z, numbers 0 - 9, or the underscore (_)
character. The host name can be 1 - 63 characters.

5. In the Fibre Channel Ports section, use the drop-down list box to select the WWPNs that
correspond to your HBA or HBAs and then click Plus icon to add more ports.

Deleting an FC port: If you added the wrong FC port, you can delete it from the list by
clicking the Minus button.

If your WWPNs do not display, click Rescan to rediscover the available WWPNs that are
new since the last scan.

WWPN still not displayed: In certain cases, your WWPNs still might not display, even
though you are sure that your adapter is functioning (for example, you see the WWPN
in the switch name server) and your SAN zones are set up correctly. To correct this
situation, enter the WWPN of your HBA or HBAs into the drop-down list box and click
Add Port to List. It is displayed as unverified.

6. If you need to modify the I/O Group or Host Type, you must select Advanced in the
Advanced Settings section to access these Advanced Settings, as shown in Figure 11-75
on page 752. Perform the following tasks:
– Select one or more I/O Groups from which the host can access volumes. By default, all
I/O Groups are selected.
– Select the Host Type. The default type is Generic. Use Generic for all hosts, unless you
use Hewlett-Packard UNIX (HP-UX) or Sun. For these hosts, select HP/UX (to support
more than eight LUNs for HP/UX machines) or TPGS for Sun hosts that use MPxIO.


Figure 11-75 Creating a Fibre Channel host

7. Click Add Host, as shown in Figure 11-75. After you return to the Hosts panel
(Figure 11-76 on page 752), you can see the newly added FC host.

Figure 11-76 New Fibre Channel host
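A Fibre Channel host object can also be created from the CLI with the mkhost command. The following is a minimal sketch; the host name and the WWPN are examples, and multiple WWPNs can be supplied as a colon-separated list:

IBM_2145:ITSO_SVC_DH8:superuser>mkhost -name ITSO_FC_HOST1 -fcwwpn 210100E08B251DD4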

iSCSI-attached hosts
To create a host that uses the iSCSI connection type, complete the following steps:
1. To go to the Hosts panel from the SVC System panel on Figure 11-1 on page 716, move
the mouse pointer over the Hosts selection and click Hosts.
2. Click Add Host, as shown in Figure 11-73 on page 750, and select iSCSI Host.


3. In the Add Host window (as shown in Figure 11-77), enter a name for your host in the Host
Name field.

Figure 11-77 Adding an iSCSI host

Host name: If you do not provide a name, the SVC automatically generates the name
hostx (where x is the ID sequence number that is assigned by the SVC internally). If you
want to provide a name, you can use the letters A - Z and a - z, the numbers 0 - 9, and
the underscore (_) character. The host name can be 1 - 63 characters.

4. In the iSCSI ports section, enter the iSCSI initiator or IQN as an iSCSI port and then click
Plus icon. This IQN is obtained from the server and generally has the same purpose as
the WWPN. Repeat this step to add more ports.

Deleting an iSCSI port: If you add the wrong iSCSI port, you can delete it from the list
by clicking the Minus icon.

If needed, select Use CHAP authentication (all ports), as shown in Figure 11-77, and
enter the Challenge Handshake Authentication Protocol (CHAP) secret.
The CHAP secret is the authentication method that is used to restrict access for other
iSCSI hosts to use the same connection. You can set the CHAP for the whole system
under the system’s properties or for each host definition. The CHAP must be identical on
the server and the system or host definition.
5. If you need to modify the I/O Group or Host Type, you must select the Advanced option in
the Advanced Settings section to access these settings, as shown in Figure 11-75 on
page 752. Perform the following tasks:
– Select one or more I/O Groups from which the host can access volumes. By default, all
I/O Groups are selected.
– Select the Host Type. The default type is Generic. Use Generic for all hosts, unless you
use Hewlett-Packard UNIX (HP-UX) or Sun. For these types, select HP/UX (to support
more than eight LUNs for HP/UX machines) or TPGS for Sun hosts that are using
MPxIO.
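The CLI equivalent for an iSCSI host uses the -iscsiname parameter of the mkhost command. The following is a minimal sketch; the host name and the IQN are examples for a hypothetical server:

IBM_2145:ITSO_SVC_DH8:superuser>mkhost -name ITSO_ISCSI_HOST1 -iscsiname iqn.1994-05.com.redhat:itso-host1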


11.7.3 Renaming a host


Complete the following steps to rename a host:
1. In the table, select the host that you want to rename.
2. Click Actions → Rename, as shown in Figure 11-78.

Figure 11-78 Rename action

Alternatives: Two other methods can be used to rename a host. You can right-click a
host and select Rename, or you can use the method that is described in Figure 11-79
on page 754.

3. In the Rename Host window, enter the new name that you want to assign and click
Rename, as shown in Figure 11-79.

Figure 11-79 Renaming a host

11.7.4 Deleting a host


To delete a host, complete the following steps:
1. In the table, select the host or hosts that you want to delete and click Actions → Remove,
as shown in Figure 11-80 on page 755.


Figure 11-80 Remove action

Alternative: You can also right-click a host and select Remove.

2. The Remove Host window opens, as shown in Figure 11-81. In the Verify the number of
hosts that you are deleting field, enter the number of hosts that you want to remove. This
verification was added to help you avoid deleting the wrong hosts inadvertently.
If volumes are still associated with the host, and you are sure that you want to delete the host even though these volumes will no longer be accessible, select Remove the hosts even if volumes are mapped to them. These volumes will no longer be accessible to the hosts.
3. Click Remove to complete the process, as shown in Figure 11-81.

Figure 11-81 Deleting a host

11.7.5 Creating or modifying volume mapping


To modify the volume mapping (also known as Host mapping), complete the following steps:
1. In the table, select the host.
2. Click Actions → Modify Volume Mappings, as shown in Figure 11-82 on page 756.


Tip: You can also right-click a host and select Modify Volume Mappings.

Figure 11-82 Modify Mappings action

3. In the Modify Host Mappings window (Figure 11-83), select the volume or volumes that
you want to map to this host and move each volume to the table on the right by clicking the
two greater than symbols (>>). If you must remove the volumes, select the volume and
click the two less than symbols (<<).

Figure 11-83 Modify host mappings window: Adding volumes to a host

4. In the table on the right, you can edit the SCSI ID by selecting a mapping that is
highlighted in yellow, which indicates a new mapping. Click Edit SCSI ID (Figure 11-84 on
page 757).


Figure 11-84 Changing the SCSI ID

When you attempt to map a volume that is already mapped to another host, a warning pop-up window appears, prompting for confirmation (Figure 11-85). Volumes that are mapped to multiple hosts are intended for clustered or fault-tolerant systems, for example.

Figure 11-85 Volume is already mapped to another host

Changing a SCSI ID: You can change the SCSI ID only on new mappings. To edit a
mapping SCSI ID, you must unmap the volume and re-create the map to the volume.

5. In the Edit SCSI ID window, change SCSI ID and then click OK, as shown in Figure 11-86
on page 758.


Figure 11-86 Modify host mappings window: Edit SCSI ID

6. After you add all of the volumes that you want to map to this host, click Map Volumes or
Apply to create the host mapping relationships.
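A single mapping can also be created from the CLI with the mkvdiskhostmap command. The following is a minimal sketch; the host name, SCSI ID, and volume name are examples, and the -scsi parameter can be omitted to let the system assign the next free SCSI ID:

IBM_2145:ITSO_SVC_DH8:superuser>mkvdiskhostmap -host ITSO_FC_HOST1 -scsi 0 ITSO_VOL01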

11.7.6 Deleting a host mapping


To delete a host mapping, complete the following steps:
1. In the table, select the host for which you want to delete a host mapping.
2. Click Actions → Modify Volume Mappings, as shown in Figure 11-87.

Tip: You can also right-click a host and select Modify Volume Mappings.

3. Select the host mapping or mappings that you want to remove.


4. Click the two less than symbols (<<) in the middle of the window after you select the
volumes that you want to remove. Then, click Apply or Map Volumes to complete the
Modify Host Mapping task.

11.7.7 Deleting all host mappings


To delete all host mappings for a host, perform the following steps:
1. Select the host and click Actions → Unmap All volumes, as shown in Figure 11-87.

Figure 11-87 Unmap All Volumes option

2. From the Unmap from Host window (Figure 11-88), enter the number of mappings that you
want to remove in the “Verify the number of mappings that this operation affects” field. This
verification helps you to avoid deleting the wrong hosts unintentionally.


Figure 11-88 Unmap from Host window

3. Click Unmap to remove the host mapping or mappings. You are returned to the Hosts
panel.

11.8 Working with volumes


In this section, we describe the tasks that you can perform at a volume level.

You can visualize and manage your volumes by using the following methods:
- Use the Volumes panel, as shown in Figure 11-89.

Figure 11-89 Volumes panel


- Or use the Volumes by Pool panel, as shown in Figure 11-90.

Figure 11-90 Volumes by Pool panel

- Alternatively, use the Volumes by Host panel, as shown in Figure 11-91.

Figure 11-91 Volumes by Host panel

Important: Several actions on the volumes are specific to the Volumes by Pool panel or to the Volumes by Host panel. However, all of these actions and others are accessible from the Volumes panel. We run all of the actions in the following sections from the Volumes panel.

11.8.1 Volume information


To access the Volumes panel from the IBM SAN Volume Controller System panel that is
shown in Figure 11-1 on page 716, move the mouse pointer over the Volumes selection and
click Volumes, as shown in Figure 11-89 on page 759.

You can add information (new columns) to the table in the Volumes panel, as shown in
Figure 11-19 on page 724. For more information, see “Table information” on page 724.


To retrieve more information about a specific volume, complete the following steps:
1. In the table, select a volume and click Actions → Properties, as shown in Figure 11-92.

Figure 11-92 Volume Properties action

Tip: You can also access the Properties action by right-clicking a volume name.

The Overview tab shows basic information about a volume, as shown in Figure 11-93.

Figure 11-93 Volume properties: Overview tab

You can see more parameters of the volume by expanding the View more details section
(Figure 11-94 on page 762).


Figure 11-94 Volume properties: Volume is mapped to this host

2. When you finish viewing the details, click Close to return to the Volumes panel.

11.8.2 Creating a volume


To create one or more volumes, complete the following steps:
1. Go to the IBM Spectrum Virtualize panel that is shown in Figure 11-1 on page 716, move
the mouse pointer over the Volumes selection and click Volumes.
2. Click Create Volumes, as shown in Figure 11-95.

Figure 11-95 Create Volumes action

3. Select one of the following presets, as shown in Figure 11-96 on page 763:
– Basic: Create volumes that use a fully allocated (thick) amount of capacity from the
selected storage pool.
– Mirror: Create volumes with two physical copies that provide data protection. Each
copy can belong to a separate storage pool to protect data from storage failures.
– Custom: Provides an advanced menu with additional options for volume definition:
• Thin Provision: Create volumes whose capacity is virtual (seen by the host), but that
use only the real capacity that is written by the host application. The virtual capacity
of a thin-provisioned volume often is larger than its real capacity.


• Compressed: Create volumes whose data is compressed while it is written to disk,
which saves more space.

Changing the preset: For our example, we chose the Basic preset. However, whichever
preset you choose, you can reconsider your decision later by customizing the volume
when you click the Advanced option.

4. After you select a preset (in our example, Basic), select the storage pool on which the
data is striped from the drop-down menu, as shown in Figure 11-96.

Figure 11-96 Creating a volume: Select preset and the storage pool

5. After you select the storage pool, the window is updated automatically. You must enter the
following information, as shown in Figure 11-97 on page 764:
– Enter a volume quantity. You can create multiple volumes at the same time by using an
automatic sequential numbering suffix.
– Enter a name if you want to create a single volume, or a naming prefix if you want to
create multiple volumes.

Volume name: You can use the letters A - Z and a - z, the numbers 0 - 9, and the
underscore (_) character. The volume name can be 1 - 63 characters.

– Enter the size of the volume that you want to create and select the capacity unit of
measurement (bytes, KiB, MiB, GiB, or TiB) from the list.

Tip: An entry of 1 GiB uses 1,024 MiB.

– Choose the type of capacity savings (None, Compression, or Thin Provisioning).


– Select the I/O Group for systems that are built from multiple I/O Groups.
An updated summary automatically appears at the bottom of the window to show the
amount of space that is used and the amount of free space that remains in the pool.


Figure 11-97 Create Volume: Volume details

Naming: When you create more than one volume, the wizard does not prompt you for a
name for each volume that is created. Instead, the name that you use here becomes
the prefix, and a number (starting at zero) is appended to this prefix as each volume is
created. You can modify the starting suffix to any non-negative whole number.
Modifying the ending value increases or decreases the volume quantity accordingly.
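
The same basic volume can also be created from the CLI. The following minimal sketch assumes a pool that is named Pool0 and uses placeholder names and sizes; exact parameters can vary by code level, so verify them with the command help:

   mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 10 -unit gb -name ITSO_vol01

By default, the command creates a fully allocated, striped volume, which corresponds to the Basic preset in the GUI.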

Creating custom volumes


Most of the advanced features and other volume parameters are accessible from the
Advanced section of the volume creation window (Figure 11-98). This option was
significantly redesigned in IBM Spectrum Virtualize V7.6.

Figure 11-98 Create custom volume

On each volume tab, you can set the following options:


򐂰 Volume Details: Provide the quantity of volumes to be defined, their capacity, and a name
(or a name prefix when you create multiple volumes), as shown in Figure 11-99 on page 765.


Figure 11-99 Volume details

򐂰 Volume location: Specify whether mirroring is desired and select the respective pool.
Choose a caching I/O Group and then select a preferred node, or leave the default values
so that the SVC balances them automatically. After you select a caching I/O Group, you
can also add more I/O Groups as Accessible I/O Groups (Figure 11-100).

Figure 11-100 Volume location

򐂰 Thin Provisioning: You can set the following options after you activate thin provisioning
(Figure 11-101):
– Real Capacity: Enter the real size that you want to allocate. This size is the percentage
of the virtual capacity or a specific number in GiB of the disk space that is allocated.
– Automatically Expand: Select to allow the real disk size to grow, as required.
– Warning Threshold: Enter a percentage of the virtual volume capacity for a threshold
warning. A warning message is generated when the used disk capacity on the
space-efficient copy first exceeds the specified threshold.
– Thin-Provisioned Grain Size: Select the grain size: 32 KiB, 64 KiB, 128 KiB, or
256 KiB. Smaller grain sizes save space. Larger grain sizes produce better
performance. Try to match the FlashCopy grain size if the volume is used for
FlashCopy.

Figure 11-101 Create Volume: Advanced settings

򐂰 Compressed: You create a compressed volume by using the software or hardware
Real-time Compression feature (as shown in Figure 11-102 on page 766). As with
thin-provisioned volumes, compressed volumes have virtual, real, and used capacities.


Important: Compressed and uncompressed volumes must not be mixed within the
same pool.

Figure 11-102 Compressed volume

For more information about the Real-time Compression feature, see Real-time
Compression in SAN Volume Controller and Storwize V7000, REDP-4859, and
Implementing IBM Real-time Compression in SAN Volume Controller and IBM Storwize
V7000, TIPS1083.
򐂰 General: You can format the volume before use and set the caching mode (Enabled, Read
cache only, or Disabled), as illustrated in Figure 11-103.

Figure 11-103 General option

– OpenVMS only: Enter the user-defined identifier (UDID) for OpenVMS. You must
complete this field only for OpenVMS systems.

UDID: Each OpenVMS Fibre Channel-attached volume requires a user-defined
identifier or unit device identifier (UDID). A UDID is a non-negative integer that is
used when an OpenVMS device name is created. To recognize volumes, OpenVMS
requires that each volume have a unique UDID value.

You can choose to create only the volumes by clicking Create, or you can create and map the
volumes by selecting Create and Map to Host. If you select to create only the volumes, you
are returned to the Volumes panel. You see that your volumes were created but not mapped,
as shown in Figure 11-104. You can map them later.

Figure 11-104 Volumes that are created without mapping


If you want to create and map the volumes on the volume creation window, click Continue
after the task finishes and another window opens. In the Modify Host Mappings window,
select the I/O Group and host to which you want to map these volumes by using the
drop-down menu (as shown in Figure 11-105), and you are automatically directed to the host
mapping table.

Figure 11-105 Modify Host Mappings: Select the host to which to map your volumes

In the Modify Host Mappings window, verify the mapping. If you want to modify the mapping,
select the volume or volumes that you want to map to a host and move each of them to the
table on the right by clicking the two greater than symbols (>>), as shown in Figure 11-106. If
you must remove the mappings, click the two less than symbols (<<).

Figure 11-106 Modify Host Mappings: Host mapping table

After you add all of the volumes that you want to map to this host, click Map Volumes or
Apply to create the host mapping relationships and finalize the creation of the volumes. You
return to the main Volumes panel. You can see that your volumes were created and mapped,
as shown in Figure 11-107.


Figure 11-107 Volumes are created and mapped to the host
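
Equivalent custom volumes can be created and mapped from the CLI. The following sketch uses placeholder names (Pool0, host01) and illustrative sizes; the -rsize, -autoexpand, -grainsize, -warning, and -compressed parameters correspond to the Thin Provisioning and Compressed options that are described above, but verify them against your code level:

   mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -rsize 2% -autoexpand -grainsize 256 -warning 80% -name thin_vol01
   mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -rsize 2% -autoexpand -compressed -name comp_vol01
   mkvdiskhostmap -host host01 -scsi 1 thin_vol01

The mkvdiskhostmap command performs the same mapping step that the Create and Map to Host option performs in the GUI.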

11.8.3 Renaming a volume


Complete the following steps to rename a volume:
1. In the table, select the volume that you want to rename.
2. Click Actions → Rename, as shown in Figure 11-108.

Figure 11-108 Selecting the Rename option

Tip: Two other ways are available to rename a volume. You can right-click a volume and
select Rename, or you can use the method that is described in Figure 11-377 on
page 902.

3. In the Rename Volume window, enter the new name that you want to assign to the volume
and click Rename, as shown in Figure 11-109.

Figure 11-109 Renaming a volume

Volume name: You can use the letters A - Z and a - z, the numbers 0 - 9, and the
underscore (_) character. The volume name can be 1 - 63 characters.
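
From the CLI, a rename is a single command. A minimal sketch with placeholder names:

   chvdisk -name ITSO_vol01_new ITSO_vol01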


11.8.4 Deleting a volume


To delete a volume, complete the following steps:
1. In the table, select the volume or volumes that you want to delete.
2. Click Actions → Delete, as shown in Figure 11-110.

Figure 11-110 Delete a volume action

Alternative: You can also right-click a volume and select Delete.

3. The Delete Volume window opens, as shown in Figure 11-111. In the “Verify the number of
volumes that you are deleting” field, enter a value for the number of volumes that you want
to remove. This verification helps you to avoid deleting the wrong volumes.

Important: Deleting a volume destroys any user data on that volume. A volume
cannot be deleted if the SVC detected any I/O activity on the volume during the
defined past time interval.

If the volume is still mapped to a host or is used in FlashCopy or remote copy
relationships, and you still want to delete it, select Delete the volume even if it has
host mappings or is used in FlashCopy mappings or remote-copy relationships.
Then, click Delete, as shown in Figure 11-111.


Figure 11-111 Delete Volume

Note: You also can delete a mirror copy of a mirrored volume. For information about
deleting a mirrored copy, see 11.8.11, “Deleting volume copy” on page 780.

11.8.5 Deleting a host mapping

Important: Before you delete a host mapping, ensure that the host is no longer using the
volume. Unmapping a volume from a host does not destroy the volume contents.
Unmapping a volume has the same effect as powering off the computer without first
performing a clean shutdown; therefore, the data on the volume might end up in an
inconsistent state. Also, any running application that was using the disk receives I/O errors
and might not recover until a forced application or server reboot.

To delete a host mapping to a volume, complete the following steps:


1. In the table, select the volume for which you want to delete a host mapping.
2. Click Actions → View Mapped Hosts, as shown in Figure 11-112.

Figure 11-112 Volume Properties

Tip: You can also right-click a volume and select View Mapped Hosts.


3. In the Properties window, click the Host Maps tab, as shown in Figure 11-113.

Figure 11-113 Volume Details: Host Maps tab

4. Select the host mapping or mappings that you want to remove.


5. Click Unmap from Host, as shown in Figure 11-113.
6. In the “Verify the number of hosts that this action affects” field of the Unmap Host window
(Figure 11-114 on page 771), enter a value for the number of host objects that you want to
remove. This verification helps you to avoid deleting the wrong host objects.

Figure 11-114 Volume Details: Unmap Host

7. Click Unmap to remove the host mapping or mappings. You are returned to the Host Maps
window. Click Refresh to verify the results of the unmapping action, as shown in
Figure 11-115.


Figure 11-115 Volume Details: Volume unmapping verification

8. Click Close to return to the Volumes panel.

11.8.6 Deleting all host mappings for a volume


To delete all host mappings for a volume, complete the following steps:
1. In the table, select the volume for which you want to delete all host mappings.
2. Click Actions → Unmap All Hosts, as shown in Figure 11-116 on page 772.

Figure 11-116 Unmap All Hosts from Actions menu

Tip: You can also right-click a volume and select Unmap All Hosts.

3. In the “Verify the number of mappings that this operation affects” field in the Unmap from
Hosts window (Figure 11-117), enter the number of host objects that you want to remove.
This verification helps you to avoid deleting the wrong host objects.


Figure 11-117 Unmap from Hosts window

4. Click Unmap to remove the host mapping or mappings. You are returned to the All
Volumes panel.

11.8.7 Shrinking a volume


The IBM Spectrum Virtualize software shrinks a volume by removing the required number of
extents from the end of the volume. Depending on where the data is on the volume, this
action can destroy data. For example, you might have a volume that consists of 128 extents (0
- 127) of 16 MiB (2 GiB capacity), and you want to decrease the capacity to 64 extents (1 GiB
capacity). In this example, SVC removes extents 64 - 127. Depending on the operating
system, no easy way exists to ensure that your data is placed entirely on extents 0 - 63, so be
aware that you might lose data.

Although shrinking a volume is an easy task by using SVC, ensure that your operating system
supports shrinking (natively or by using third-party tools) before you use this function.

In addition, the preferred practice is to always have a consistent backup before you attempt to
shrink a volume.

Important: For thin-provisioned or compressed volumes, the use of this method to shrink a
volume shrinks its virtual capacity. For more information about shrinking its real
capacity, see “Shrink or expand real capacity of thin-provisioned or compressed volume”
on page 776.

Shrinking a volume is useful under the following circumstances:


򐂰 Reducing the size of a candidate target volume of a copy relationship to make it the same
size as the source
򐂰 Releasing space from volumes to have free extents in the storage pool, if you no longer
use that space and take precautions with the remaining data


Assuming that your operating system supports it, perform the following steps to shrink a
volume:
1. Perform any necessary steps on your host to ensure that you are not using the space that
you are about to remove.
2. In the volume table, select the volume that you want to shrink. Click Actions → Shrink, as
shown in Figure 11-118.

Figure 11-118 Shrink volume action

Tip: You can also right-click a volume and select Shrink.

3. The Shrink Volume - volumename window (where volumename is the volume that you
selected in the previous step) opens, as shown in Figure 11-119 on page 774.
You can enter how much you want to shrink the volume by using the Shrink by field or you
can directly enter the final size that you want to use for the volume by using the Final size
field. The other field is computed automatically. For example, if you have a 10 GiB volume
and you want it to become 6 GiB, you can specify 4 GiB in the Shrink by field or you can
directly specify 6 GiB in the Final size field, as shown in Figure 11-119 on page 774.

Figure 11-119 Shrinking a volume

4. When you are finished, click Shrink and the changes are visible on your host.
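
The CLI equivalent uses the shrinkvdisksize command, in which you specify the amount to shrink by rather than the final size. A minimal sketch that shrinks a placeholder 10 GiB volume to 6 GiB:

   shrinkvdisksize -size 4 -unit gb ITSO_vol01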


11.8.8 Expanding a volume


Expanding a volume presents a larger capacity disk to your operating system. Although you
can expand a volume easily by using the SVC, you must ensure that your operating system is
prepared for it and supports the volume expansion before you use this function.

Dynamic expansion of a volume is supported only when the volume is in use by one of the
following operating systems:
򐂰 AIX 5L™ V5.2 and higher
򐂰 Windows Server 2008, and Windows Server 2012 for basic and dynamic disks
򐂰 Windows Server 2003 for basic disks, and Windows Server 2003 with the Microsoft hot fix
(Q327020) for dynamic disks (although this version is now out of vendor support)

Important: For thin-provisioned volumes, the use of this method results in expanding the
volume's virtual capacity.

If your operating system supports expanding a volume, complete the following steps:
1. In the table, select the volume that you want to expand.
2. Click Actions → Expand, as shown in Figure 11-120 on page 775.

Figure 11-120 Expand volume action

Tip: You can also right-click a volume and select Expand.

3. The Expand Volume - volumename window (where volumename is the volume that you
selected in the previous step) opens, as shown in Figure 11-121 on page 776.
You can enter how much you want to enlarge the volume by using the Expand by field, or
you can directly enter the final size that you want to use for the volume by using the Final
size field. The other field is computed automatically.
For example, if you have a 9 GiB volume and you want it to become 20 GiB, you can
specify 11 GiB in the Expand by field or you can directly specify 20 GiB in the Final size
field, as shown in Figure 11-121 on page 776. The maximum final size shows 499 GiB for
the volume.


Figure 11-121 Expanding a volume

Volume expansion: The following considerations are important:
򐂰 Expanding image-mode volumes is not supported.
򐂰 If you use volume mirroring, all copies must be synchronized before you expand the volume.
򐂰 A sufficient number of free extents must be available in the storage pool.

4. When you are finished, click Expand, and verify the changes on your host if the volume is
already mapped.

Shrink or expand real capacity of thin-provisioned or compressed volume


From a host's perspective, shrinking or expanding the virtual capacity of a volume affects the
host access. Shrinking or expanding the real capacity of a volume is not apparent to hosts.

Note: In the following sections, we demonstrate real capacity operations by using a
thin-provisioned volume as an example. However, the same actions apply to a compressed
volume preset.

To shrink or expand the real capacity of a thin-provisioned or compressed volume use the
same procedures described in sections 11.8.7, “Shrinking a volume” on page 773 and 11.8.8,
“Expanding a volume” on page 775.
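
From the CLI, the real capacity of a thin-provisioned or compressed volume is changed with the -rsize parameter of the same two commands. A minimal sketch with a placeholder volume name; verify the flags for your code level:

   expandvdisksize -rsize 5 -unit gb thin_vol01
   shrinkvdisksize -rsize 2 -unit gb thin_vol01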

11.8.9 Migrating a volume


To migrate a volume to a different storage pool, complete the following steps:
1. In the table, select the volume that you want to migrate.
2. Click Actions → Migrate to Another Pool, as shown in Figure 11-122 on page 777.


Figure 11-122 Migrate to Another Pool action

Tip: You can also right-click a volume and select Migrate to Another Pool.

3. The Migrate Volume Copy window opens, as shown in Figure 11-123. Select the storage
pool to which you want to reassign the volume. You are presented with a list of only the
storage pools with the same extent size.
4. When you finish making your selections, click Migrate to begin the migration process.

Figure 11-123 Migrate Volume Copy window

Important: After a migration starts, you cannot stop it. The migration process
continues until it completes, unless it is suspended by an error condition or the volume
that is being migrated is deleted.

5. You can check the progress of the migration by using the Running Tasks status area, as
shown in Figure 11-124 on page 777.

Figure 11-124 Running Tasks


To expand this area, click the icon, and then click Migration. Figure 11-125 shows a
detailed view of the running tasks.

Figure 11-125 Long Running Task: Volume migration

When the migration is finished, the volume is part of the new pool.

Important: A migrated volume inherits the parameters of the target pool, such as extent size,
encryption, and Easy Tier settings. Consider the requirements for the volume before migration.
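
The CLI equivalent is the migratevdisk command, and the progress can be listed with lsmigrate. A minimal sketch with placeholder volume and pool names:

   migratevdisk -vdisk ITSO_vol01 -mdiskgrp Pool1
   lsmigrate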

11.8.10 Adding mirrored copy of existing volume


You can add a mirrored copy to an existing volume, which provides two copies of the
underlying disk extents.

Tip: You can also create a mirrored volume by selecting the Mirror preset during the
volume creation, as shown in Figure 11-96 on page 763.

You can use a volume copy for any operation for which you can use a volume. It is not
apparent to higher-level operations, such as Metro Mirror, Global Mirror, or FlashCopy.

Creating a volume copy from an existing volume is not restricted to the same storage pool;
therefore, this method is ideal to use to protect your data from a disk system or an array
failure. If one copy of the mirror fails, it provides continuous data access to the other copy.
When the failed copy is repaired, the copies automatically resynchronize.

You can also use a volume mirror as an alternative migration tool, where you can synchronize
the mirror before splitting off the original side of the mirror. The volume stays online, and it can
be used normally while the data is being synchronized. The copies can also have different
structures (striped, image, sequential, or space-efficient) and different extent sizes.

To create a mirror copy from within a volumes panel, complete the following steps:
1. In the table, select the volume to which you want to add a mirrored copy.
2. Click Actions → Add Volume Copy, as shown in Figure 11-126.


Figure 11-126 Add Volume Copy actions

Tip: You can also right-click a volume and select Add Volume Copy.

3. The Add Volume Copy - volumename window (where volumename is the volume that you
selected in the previous step) opens, as shown in Figure 11-127.

Figure 11-127 Add copy to volume window

4. Select the appropriate pool and click Add.

Important: A volume copy inherits the parameters of the target pool, such as extent size,
encryption, and Easy Tier settings. Consider the requirements for the volume before you
define a volume copy.

5. You can check the synchronization progress by using the Running Tasks menu, as shown in
Figure 11-128. To expand this status area, click the icon and then click Volume
Synchronization.


Figure 11-128 Running Tasks: Volume Synchronization

6. When the synchronization is finished, the volume has a synchronized copy in the new pool,
as shown in Figure 11-129.

Figure 11-129 Mirrored volume

Primary copy: As shown in Figure 11-129, the primary copy is identified with an
asterisk (*). In this example, the copy that is marked with the asterisk is the primary
copy, and the other copy is the mirrored copy.
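
The CLI equivalent is the addvdiskcopy command, and the synchronization progress can be checked with lsvdisksyncprogress. A minimal sketch with placeholder names:

   addvdiskcopy -mdiskgrp Pool1 ITSO_vol01
   lsvdisksyncprogress ITSO_vol01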

11.8.11 Deleting volume copy


To remove a volume copy, complete the following steps:
1. In the table, select the volume copy that you want to remove. Click Actions → Delete, as
shown in Figure 11-130 on page 781.


Figure 11-130 Deletion of volume copy

Tip: You can also right-click a volume and select Delete this Copy.

2. The Warning window opens, as shown in Figure 11-131. Click Yes to confirm.

Figure 11-131 Warning window

If the copy that you intend to delete is the primary copy and the secondary copy is not yet
synchronized, the attempt fails, and you must wait until the synchronization completes.
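
The CLI equivalent is the rmvdiskcopy command, in which you identify the copy by its ID. A minimal sketch that removes copy 1 of a placeholder volume:

   rmvdiskcopy -copy 1 ITSO_vol01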

11.8.12 Splitting a volume copy


To create a new volume by splitting off a synchronized volume copy, complete the following
steps:
1. In the table, select the volume copy that you want to split and click Actions → Create
Volume From This Copy, as shown in Figure 11-132.

Figure 11-132 Split into New Volume action

Tip: You can also right-click a volume and select Create Volume From This Copy.


2. The Duplicate Volume window opens, as shown in Figure 11-133. In this window, enter a
name for the new volume.

Volume name: If you do not provide a name, IBM Spectrum Virtualize automatically
generates the name vdiskx (where x is the ID sequence number that is assigned by the
SVC internally). If you want to provide a name, you can use the letters A - Z and a - z,
the numbers 0 - 9, and the underscore (_) character. The volume name can be 1 - 63
characters.

3. Click Duplicate, as shown in Figure 11-133.

Figure 11-133 Duplicating volume

This new volume is now being formatted and will be available to be mapped to a host. It is
assigned to the same pool as the source copy of the duplication process (Figure 11-134).

Figure 11-134 New volume formatting

Important: After you split a volume mirror, you cannot resynchronize or recombine them.
You must create a new volume copy.
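
The CLI equivalent is the splitvdiskcopy command. A minimal sketch that splits copy 1 of a placeholder volume into a new volume:

   splitvdiskcopy -copy 1 -name ITSO_vol01_split ITSO_vol01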

11.8.13 Validating volume copies


To validate the copies of a mirrored volume, complete the following steps:
1. In the table, select a copy of this volume. Click Actions → Validate Volume Copies, as
shown in Figure 11-135.


Figure 11-135 Validate Volume Copies actions

2. The Validate Volume Copies window opens, as shown in Figure 11-136. In this window,
select one of the following options:
– Generate Event of Differences: Use this option if you want to verify only that the
mirrored volume copies are identical. If a difference is found, the command stops and
logs an error that includes the logical block address (LBA) and the length of the first
difference. By starting at a different LBA each time, you can use this option to count the
number of differences on a volume.
– Overwrite Differences: Use this option to overwrite the content from the primary
volume copy to the other volume copy. The command corrects any differing sectors by
copying the sectors from the primary copy to the copies that are compared. Upon
completion, the command process logs an event, which indicates the number of
differences that were corrected. Use this option if you are sure that the primary volume
copy data is correct or that your host applications can handle incorrect data.
– Return Media Error to Host: Use this option to convert sectors on all volume copies
that contain different contents into virtual medium errors. Upon completion, the
command logs an event, which indicates the number of differences that were found,
the number of differences that were converted into medium errors, and the number of
differences that were not converted. Use this option if you are unsure what data is
correct, and you do not want an incorrect version of the data to be used.

Figure 11-136 Validate Volume Copies window

3. Click Validate. The volume is now verified.
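
The CLI equivalent is the repairvdiskcopy command, whose -validate, -resync, and -medium options correspond to the three GUI choices above. A minimal sketch with a placeholder volume; progress can be checked with lsrepairvdiskcopyprogress:

   repairvdiskcopy -validate ITSO_vol01
   lsrepairvdiskcopyprogress ITSO_vol01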


11.8.14 Creating a volume in image mode


For more information about the required steps to create a volume in image mode, see
Chapter 6, “Data migration” on page 237.

11.8.15 Migrating a volume to an image mode volume


For more information about the required steps to migrate a volume to an image mode volume,
see Chapter 6, “Data migration” on page 237.

11.8.16 Creating an image mode mirrored volume


For more information about the required steps to create an image mode mirrored volume, see
Chapter 6, “Data migration” on page 237.

11.9 Copy Services and managing FlashCopy


It is often easier to work with the FlashCopy function from the GUI if you have a reasonable
number of host mappings. However, in enterprise data centers with many host mappings, we
suggest that you use the CLI to run your FlashCopy commands.

Copy Services: For more information about the functionality of Copy Services in IBM
Spectrum Virtualize environment, see Chapter 9, “Advanced Copy Services” on page 475.

In this section, we describe the tasks that you can perform at a FlashCopy level. The following
methods can be used to visualize and manage your FlashCopy:
򐂰 Use the SVC Overview panel. Move the mouse pointer over Copy Services in the dynamic
menu and click FlashCopy, as shown in Figure 11-137.

Figure 11-137 FlashCopy panel


In its basic mode, the IBM FlashCopy function copies the contents of a source volume to a
target volume. Any data that existed on the target volume is lost and that data is replaced
by the copied data.
򐂰 Use the Consistency Groups panel, as shown in Figure 11-138. A Consistency Group is a
container for mappings. You can add many mappings to a Consistency Group.

Figure 11-138 Consistency Groups panel

򐂰 Use the FlashCopy Mappings panel, as shown in Figure 11-139. A FlashCopy mapping
defines the relationship between a source volume and a target volume.

Figure 11-139 FlashCopy Mappings panel

11.9.1 Creating a FlashCopy mapping


In this section, we create FlashCopy mappings for volumes and their targets.

Complete the following steps:


1. From the SVC Overview panel, move the mouse pointer over Copy Services in the
dynamic menu and click FlashCopy. The FlashCopy panel opens, as shown in
Figure 11-140.

Figure 11-140 FlashCopy panel

2. Select the volume for which you want to create the FlashCopy relationship, as shown in
Figure 11-141 on page 786.


Multiple FlashCopy mappings: To create multiple FlashCopy mappings at one time,
select multiple volumes by holding down Ctrl and clicking the entries that you want.

Figure 11-141 FlashCopy mapping: Select the volume (or volumes)

Depending on whether you created the target volumes for your FlashCopy mappings or you
want SVC to create the target volumes for you, the following options are available:
򐂰 If you created the target volumes, see “Using existing target volumes” on page 786.
򐂰 If you want the SVC to create the target volumes for you, see “Creating target volumes” on
page 790.

Using existing target volumes


Complete the following steps to use existing target volumes for the FlashCopy mappings:
1. Select the target volume that you want to use. Then, click Actions → Advanced
FlashCopy → Use Existing Target Volumes, as shown in Figure 11-142.

Figure 11-142 Using existing target volumes


2. The Create FlashCopy Mapping window opens (Figure 11-143 on page 787). In this
window, you must create the relationship between the source volume (the disk that is
copied) and the target volume (the disk that receives the copy). A mapping can be created
between any two volumes inside an SVC clustered system. Select a source volume and a
target volume for your FlashCopy mapping, and then click Add. If you must create other
copies, repeat this step.

Important: The source volume and the target volume must be of equal size. Therefore,
only targets of the same size are shown in the list for a source volume.

Figure 11-143 Create a FlashCopy Mapping by using an existing target volume

To remove a relationship that was created, click the delete icon, as shown in Figure 11-144.

Volumes: The volumes do not have to be in the same I/O Group or storage pool.

3. Click Next after you create all of the relationships that you need, as shown in
Figure 11-144.

Figure 11-144 Create FlashCopy Mapping window


4. In the next window, select one FlashCopy preset. The GUI provides the following presets
to simplify common FlashCopy operations, as shown in Figure 11-145 on page 788:
– Snapshot: Creates a copy-on-write point-in-time copy.
– Clone: Creates a replica of the source volume on a target volume. The copy can be
changed without affecting the original volume.
– Backup: Creates a FlashCopy mapping that can be used to recover data or objects if
the system experiences data loss. These backups can be copied multiple times from
source and target volumes.

Figure 11-145 Create FlashCopy Mapping window

For each preset, you can customize various advanced options. You can access these
settings by clicking Advanced Settings.
5. The advanced setting options are shown in Figure 11-146.

Figure 11-146 Create FlashCopy Mapping Advanced Settings


If you prefer not to customize these settings, go directly to step 6 on page 789.
You can customize the following advanced setting options, as shown in Figure 11-146:
– Background Copy Rate: This option determines the priority that is given to the copy
process. A faster rate increases the priority of the process, which can affect the
performance of other operations.
– Incremental: This option copies only the parts of the source or target volumes that
changed since the last copy. Incremental copies reduce the completion time of the
copy operation.

Incremental FlashCopy mapping: Even if the type of the FlashCopy mapping is
incremental, the first copy process copies all of the data from the source volume to
the target volume.

– Delete mapping after completion: This option automatically deletes a FlashCopy
mapping after the background copy is completed. Do not use this option when the
background copy rate is set to zero.
– Cleaning Rate: This option minimizes the amount of time that a mapping is in the
stopping state. If the mapping is not complete, the target volume is offline while the
mapping is stopping.
After you complete your modifications, click Next.
6. You can choose whether to add the mappings to a Consistency Group or not.
If you want to include this FlashCopy mapping in a Consistency Group, select Yes, add
the mappings to a consistency group in the window that is shown in Figure 11-147. You
also can select the Consistency Group from the drop-down list box.

Figure 11-147 Add the mappings to a Consistency Group

Or, if you do not want to include this FlashCopy mapping in a Consistency Group, select
No, do not add the mappings to a consistency group.
Click Finish, as shown in Figure 11-148.


Figure 11-148 Do not add the mappings to a Consistency Group

7. Check the result of this FlashCopy mapping. For each FlashCopy mapping relationship
that was created, a mapping name is automatically generated that starts with fcmapX,
where X is the next available number. If needed, you can rename these mappings, as
shown in Figure 11-149. For more information, see 11.9.11, “Renaming FlashCopy
mapping” on page 805.

Figure 11-149 FlashCopy Mapping

The FlashCopy mapping is now ready for use.
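
The same mapping can be created from the CLI with mkfcmap against an existing target volume. A minimal sketch with placeholder volume names; the -copyrate, -cleanrate, -incremental, and -autodelete parameters correspond to the advanced settings that are described above, and exact syntax can vary by code level:

   mkfcmap -source ITSO_vol01 -target ITSO_vol01_tgt -copyrate 50 -cleanrate 50
   startfcmap -prep fcmap0

The startfcmap -prep command flushes the cache and starts the copy; fcmap0 is the automatically generated mapping name from the previous step.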

Creating target volumes


Complete the following steps to create target volumes for FlashCopy mapping:
1. If you did not create a target volume for this source volume, click Actions → Advanced
FlashCopy → Create New Target Volumes, as shown in Figure 11-150.

Target volume naming: If the target volume does not exist, the target volume is
created. The target volume name is based on its source volume and a generated
number at the end, for example, source_volume_name_XX, where XX is a number that
was generated dynamically.


Figure 11-150 Selecting Create New Target Volumes

2. In the Create FlashCopy Mapping window (Figure 11-151 on page 791), you must select
one FlashCopy preset. The GUI provides the following presets to simplify common
FlashCopy operations:
– Snapshot: Creates a copy-on-write point-in-time copy.
– Clone: Creates an exact replica of the source volume on a target volume. The copy can
be changed without affecting the original volume.
– Backup: Creates a FlashCopy mapping that can be used to recover data or objects if
the system experiences data loss. These backups can be copied multiple times from
the source and target volumes.

Figure 11-151 Create FlashCopy Mapping window

For each preset, you can customize various advanced options. To access these settings,
click Advanced Settings. The Advanced Setting options show in Figure 11-152.


Figure 11-152 Create FlashCopy Mapping Advanced Settings

If you prefer not to customize these advanced settings, go directly to step 3 on page 792.
You can customize the advanced setting options that are shown in Figure 11-152:
– Background Copy Rate: This option determines the priority that is given to the copy
process. A faster rate increases the priority of the process, which can affect the
performance of other operations.
– Incremental: This option copies only the parts of the source or target volumes that
changed since the last copy. Incremental copies reduce the completion time of the
copy operation.

Incremental FlashCopy mapping: Even if the type of the FlashCopy mapping is
incremental, the first copy process copies all of the data from the source volume to
the target volume.

– Delete mapping after completion: This option automatically deletes a FlashCopy
mapping after the background copy is completed. Do not use this option when the
background copy rate is set to zero.
– Cleaning Rate: This option minimizes the amount of time that a mapping is in the
stopping state. If the mapping is not complete, the target volume is offline while the
mapping is stopping.
3. You can choose whether to add this FlashCopy mapping to a Consistency Group or not.
If you want to include this FlashCopy mapping in a Consistency Group, select Yes, add
the mappings to a consistency group in the next window (Figure 11-153). Select the
Consistency Group from the drop-down list.
If you do not want to include this FlashCopy mapping in a Consistency Group, select No,
do not add the mappings to a consistency group.


Click Finish.

Figure 11-153 Selecting the option to add the mappings to a Consistency Group

4. Check the result of this FlashCopy mapping, as shown in Figure 11-154. For each
FlashCopy mapping relationship that is created, a mapping name is automatically
generated that starts with fcmapX where X is the next available number. If necessary, you
can rename these mappings, as shown in Figure 11-154. For more information, see
11.9.11, “Renaming FlashCopy mapping” on page 805.

Figure 11-154 FlashCopy mapping

The FlashCopy mapping is ready for use.

Tip: You can start FlashCopy from the SVC GUI. However, the use of the SVC GUI might
be impractical if you plan to handle many FlashCopy mappings or Consistency Groups
periodically, or at varying times. In these cases, creating a script by using the CLI might be
more convenient.

11.9.2 Single-click snapshot


The snapshot creates a point-in-time backup of production data. The snapshot is not intended
to be an independent copy. Instead, it is used to maintain a view of the production data at the
time that the snapshot is created. Therefore, the snapshot holds only the data from regions of
the production volume that changed since the snapshot was created. Because the snapshot
preset uses thin provisioning, only the capacity that is required for the changes is used.
Snapshot uses the following preset parameters:
򐂰 Background copy: No
򐂰 Incremental: No
򐂰 Delete after completion: No
򐂰 Cleaning rate: No


򐂰 Primary copy source pool: Target pool

To create and start a snapshot, complete the following steps:


1. From the SVC System panel, move the mouse pointer over Copy Services in the dynamic
menu and click FlashCopy.
2. Select the volume that you want to create a snapshot of and click Actions → Create
Snapshot, as shown in Figure 11-155.

Figure 11-155 Create Snapshot option

3. A volume is created as a target volume for this snapshot in the same pool as the source
volume. The FlashCopy mapping is created and started.
You can check the FlashCopy progress in the Progress column or in the Running Tasks
status area, as shown in Figure 11-156.

Figure 11-156 Snapshot created and started
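
The snapshot preset can be reproduced from the CLI by creating a thin-provisioned target in the same pool, defining a mapping with a background copy rate of zero, and starting it. A minimal sketch with placeholder names; verify the parameters for your code level:

   mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 10 -unit gb -rsize 0% -autoexpand -name ITSO_vol01_snap
   mkfcmap -source ITSO_vol01 -target ITSO_vol01_snap -copyrate 0
   startfcmap -prep fcmap1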

11.9.3 Single-click clone


The clone preset creates an exact replica of the volume, which can be changed without
affecting the original volume. After the copy completes, the mapping that was created by the
preset is automatically deleted.

The clone preset uses the following parameters:


򐂰 Background copy rate: 50
򐂰 Incremental: No
򐂰 Delete after completion: Yes
򐂰 Cleaning rate: 50
򐂰 Primary copy source pool: Target pool

To create and start a clone, complete the following steps:


1. From the SVC System panel, move the mouse pointer over Copy Services in the dynamic
menu and click FlashCopy.
2. Select the volume that you want to clone.
3. Click Actions → Create Clone, as shown in Figure 11-157.

Figure 11-157 Create Clone option

4. A volume is created as a target volume for this clone in the same pool as the source
volume. The FlashCopy mapping is created and started. You can check the FlashCopy
progress in the Progress column or in the Running Tasks Status column. After the
FlashCopy clone is created, the mapping is removed and the new cloned volume becomes
available, as shown in Figure 11-158.

Figure 11-158 Clone created and FlashCopy relationship removed

11.9.4 Single-click backup


The backup creates a point-in-time replica of the production data. After the copy completes,
the backup view can be refreshed from the production data, with minimal copying of data from
the production volume to the backup volume.

The backup preset uses the following parameters:


򐂰 Background Copy rate: 50
򐂰 Incremental: Yes
򐂰 Delete after completion: No
򐂰 Cleaning rate: 50
򐂰 Primary copy source pool: Target pool

To create and start a backup, complete the following steps:


1. From the SVC System panel, move the mouse pointer over Copy Services in the dynamic
menu and then click FlashCopy.


2. Select the volume that you want to back up.


3. Click Actions → Create Backup, as shown in Figure 11-159.

Figure 11-159 Create Backup option

4. A volume is created as a target volume for this backup in the same pool as the source
volume. The FlashCopy mapping is created and started.
You can check the FlashCopy progress in the Progress column, as shown in
Figure 11-160, or in the Running Tasks Status column.

Figure 11-160 Backup created and started

11.9.5 Creating a FlashCopy Consistency Group


To create a FlashCopy Consistency Group in the SVC GUI, complete the following steps:
1. From the SVC System panel, move the mouse pointer over Copy Services in the dynamic
menu and then click Consistency Groups. The Consistency Groups panel opens, as
shown in Figure 11-161.

Figure 11-161 Consistency Groups panel


2. Click Create Consistency Group and enter the FlashCopy Consistency Group name that
you want to use and click Create (Figure 11-162).

Figure 11-162 Create Consistency Group window

Consistency Group name: You can use the letters A - Z and a - z, the numbers 0 - 9,
and the underscore (_) character. The volume name can be 1 - 63 characters.

Figure 11-163 on page 797 shows the result.

Figure 11-163 New Consistency Group
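
The CLI equivalents are mkfcconsistgrp for creating the group and the -consistgrp parameter of mkfcmap (or chfcmap for an existing mapping) for adding mappings to it. A minimal sketch with placeholder names:

   mkfcconsistgrp -name ITSO_fccg1
   mkfcmap -source ITSO_vol01 -target ITSO_vol01_tgt -consistgrp ITSO_fccg1 -copyrate 50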

11.9.6 Creating FlashCopy mappings in a Consistency Group


In this section, we describe how to create FlashCopy mappings for volumes and their related
targets. The source and target volumes were created before this operation.

Complete the following steps:


1. From the SVC System panel, move the mouse pointer over Copy Services in the dynamic
menu and then click Consistency Groups. The Consistency Groups panel opens, as
shown in Figure 11-163.


2. Select in which Consistency Group (Figure 11-164) you want to create the FlashCopy
mapping. If you prefer not to create a FlashCopy mapping in a Consistency Group, select
Not in a Group.

Figure 11-164 Consistency Group selection

3. If you select a new Consistency Group, click Actions → Create FlashCopy Mapping, as
shown in Figure 11-165.

Figure 11-165 Create FlashCopy Mapping action for a Consistency Group

4. If you did not select a Consistency Group, click Create FlashCopy Mapping, as shown in
Figure 11-166.

Consistency Groups: If no Consistency Group is defined, the mapping is a
stand-alone mapping. It can be prepared and started without affecting other mappings.
All mappings in the same Consistency Group must have the same status to maintain
the consistency of the group.

Figure 11-166 Create FlashCopy Mapping

5. The Create FlashCopy Mapping window opens, as shown in Figure 11-167. In this
window, you must create the relationships between the source volumes (the volumes that


are copied) and the target volumes (the volumes that receive the copy). A mapping can be
created between any two volumes in a clustered system.

Important: The source volume and the target volume must be of equal size.

Figure 11-167 Create FlashCopy Mapping window

Tip: The volumes do not have to be in the same I/O Group or storage pool.

6. Select a volume in the Source Volume column by using the drop-down list. Then, select a
volume in the Target Volume column by using the drop-down list. Click Add, as shown in
Figure 11-167. Repeat this step to create other relationships.
To remove a relationship that was created, click the delete icon.

Important: The source and target volumes must be of equal size. Therefore, only the
targets with the appropriate size are shown for a source volume.

7. Click Next after all of the relationships that you want to create are shown (Figure 11-168).

Figure 11-168 Create FlashCopy Mapping with the relationships that were created


8. In the next window, you must select one FlashCopy preset. The GUI provides the following
presets to simplify common FlashCopy operations, as shown in Figure 11-169:
– Snapshot: Creates a copy-on-write point-in-time copy.
– Clone: Creates an exact replica of the source volume on a target volume. The copy can
be changed without affecting the original volume.
– Backup: Creates a FlashCopy mapping that can be used to recover data or objects if
the system experiences data loss. These backups can be copied multiple times from
the source and target volumes.

Figure 11-169 Create FlashCopy Mapping window

Whichever preset you select, you can customize various advanced options. To access
these settings, click Advanced Settings.
If you prefer not to customize these settings, go directly to step 9.
You can customize the following advanced setting options, as shown in Figure 11-170:
– Background Copy Rate: This option determines the priority that is given to the copy
process. A faster rate increases the priority of the process, which might affect the
performance of other operations.
– Incremental: This option copies only the parts of the source or target volumes that
changed since the last copy. Incremental copies reduce the completion time of the
copy operation.

Incremental copies: Even if the type of the FlashCopy mapping is incremental, the
first copy process copies all of the data from the source volume to the target volume.

– Delete mapping after completion: This option automatically deletes a FlashCopy
mapping after the background copy is completed. Do not use this option when the
background copy rate is set to zero.
– Cleaning Rate: This option minimizes the amount of time that a mapping is in the
stopping state. If the mapping is not complete, the target volume is offline while the
mapping is stopping.


Figure 11-170 Create FlashCopy Mapping: Advanced Settings

9. If you do not want to create these FlashCopy mappings from a Consistency Group (step 3
on page 798), you must confirm your choice by selecting No, do not add the mappings
to a consistency group, as shown in Figure 11-171 on page 801.

Figure 11-171 Do not add the mappings to a Consistency Group

10.Click Finish.
11.Check the result of this FlashCopy mapping in the Consistency Groups window, as shown
in Figure 11-172.
For each FlashCopy mapping relationship that you created, a mapping name is
automatically generated that starts with fcmapX where X is an available number. If
necessary, you can rename these mappings. For more information, see 11.9.11,
“Renaming FlashCopy mapping” on page 805.


Figure 11-172 Create FlashCopy mappings result

Tip: You can start FlashCopy from the SVC GUI. However, if you plan to handle many
FlashCopy mappings or Consistency Groups periodically, or at varying times, creating a
script by using the operating system shell CLI might be more convenient.

11.9.7 Showing related volumes


Complete the following steps to show related volumes for a specific FlashCopy mapping:
1. From the SVC System panel, move the mouse pointer over Copy Services in the dynamic
menu and then click FlashCopy, Consistency Groups, or FlashCopy Mappings.
2. Select the volume (from the FlashCopy panel only) or the FlashCopy mapping that you
want to view in this Consistency Group.
3. Click Actions → Show Related Volumes, as shown in Figure 11-173 on page 802.

Tip: You can also right-click a FlashCopy mapping and select Show Related Volumes.

Figure 11-173 Show Related Volumes

In the Related Volumes window (Figure 11-174), you can see the related mapping for a
volume. If you click one of these volumes, you can see its properties. For more information
about volume properties, see 11.8.1, “Volume information” on page 760.


Figure 11-174 Related Volumes

11.9.8 Moving a FlashCopy mapping to a Consistency Group


Complete the following steps to move a FlashCopy mapping to the Consistency Group:
1. From the SVC System panel, move the mouse pointer over Copy Services in the dynamic
menu and then click FlashCopy, Consistency Groups, or FlashCopy Mappings.
2. Select the FlashCopy mapping that you want to move to a Consistency Group or the
FlashCopy mapping for which you want to change the Consistency Group.
3. Click Actions → Move to Consistency Group, as shown in Figure 11-175 on page 803.

Tip: You can also right-click a FlashCopy mapping and select Move to Consistency
Group.

Figure 11-175 Move to Consistency Group action

4. In the Move FlashCopy Mapping to Consistency Group window, select the Consistency
Group for this FlashCopy mapping by using the drop-down list (Figure 11-176).

Figure 11-176 Move FlashCopy mapping to Consistency Group window

5. Click Move to Consistency Group to confirm your changes.


11.9.9 Removing a FlashCopy mapping from a Consistency Group


Complete the following steps to remove a FlashCopy mapping from a Consistency Group:
1. From the SVC System panel, move the mouse pointer over Copy Services in the dynamic
menu and then click FlashCopy, Consistency Groups, or FlashCopy Mappings.
2. Select the FlashCopy mapping that you want to remove from a Consistency Group.
3. Click Actions → Remove from Consistency Group, as shown in Figure 11-177.

Tip: You can also right-click a FlashCopy mapping and select Remove from
Consistency Group.

Figure 11-177 Remove from Consistency Group action

In the Remove FlashCopy Mapping from Consistency Group window, click Remove, as
shown in Figure 11-178.

Figure 11-178 Remove FlashCopy Mapping from Consistency Group

11.9.10 Modifying a FlashCopy mapping


Complete the following steps to modify a FlashCopy mapping:
1. From the SVC System panel, move the mouse pointer over Copy Services in the dynamic
menu and then click FlashCopy, Consistency Groups, or FlashCopy Mappings.
2. In the table, select the FlashCopy mapping that you want to modify.
3. Click Actions → Edit Properties, as shown in Figure 11-179.


Figure 11-179 Edit Properties

Tip: You can also right-click a FlashCopy mapping and select Edit Properties.

4. In the Edit FlashCopy Mapping window, you can modify the following parameters for a
selected FlashCopy mapping, as shown in Figure 11-180 on page 805:
– Background Copy Rate: This option determines the priority that is given to the copy
process. A faster rate increases the priority of the process, which might affect the
performance of other operations.
– Cleaning Rate: This option minimizes the amount of time that a mapping is in the
stopping state. If the mapping is not complete, the target volume is offline while the
mapping is stopping.

Figure 11-180 Edit FlashCopy Mapping

5. Click Save to confirm your changes.
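
CLI alternative: The same properties can be changed with the chfcmap command. This is a sketch only (FCMAP_1 is an example name; verify the parameters against the command-line reference for your release):

svctask chfcmap -copyrate 50 -cleanrate 50 FCMAP_1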

11.9.11 Renaming FlashCopy mapping


Complete the following steps to rename a FlashCopy mapping:
1. From the SVC System panel, move the mouse pointer over Copy Services in the dynamic
menu and then click Consistency Groups or FlashCopy Mappings.
2. In the table, select the FlashCopy mapping that you want to rename.
3. Click Actions → Rename Mapping, as shown in Figure 11-181.

Tip: You can also right-click a FlashCopy mapping and select Rename Mapping.


Figure 11-181 Rename Mapping action

4. In the Rename FlashCopy Mapping window, enter the new name that you want to assign
to the FlashCopy mapping and click Rename, as shown in Figure 11-182 on page 806.

Figure 11-182 Renaming a FlashCopy mapping

FlashCopy mapping name: You can use the letters A - Z and a - z, the numbers 0 - 9,
and the underscore (_) character. The FlashCopy mapping name can be 1 - 63
characters.
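
CLI alternative: A mapping can also be renamed with the chfcmap command, for example (the names are examples only):

svctask chfcmap -name FCMAP_DB2 FCMAP_1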

11.9.12 Renaming Consistency Group


To rename a Consistency Group, complete the following steps:
1. From the SVC System panel, move the mouse pointer over Copy Services in the dynamic
menu and then click Consistency Groups.
2. From the left panel, select the Consistency Group that you want to rename. Then, select
Actions → Rename, as shown in Figure 11-183.

Figure 11-183 Renaming a Consistency Group


3. Enter the new name that you want to assign to the Consistency Group and click Rename,
as shown in Figure 11-184 on page 807.

Figure 11-184 Changing the name for a Consistency Group

Consistency Group name: The name can consist of the letters A - Z and a - z, the
numbers 0 - 9, the dash (-), and the underscore (_) character. The name can be 1 - 63
characters. However, the name cannot start with a number, a dash, or an underscore.

The new Consistency Group name is displayed in the Consistency Group panel.
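
CLI alternative: The same rename can be done with the chfcconsistgrp command, for example (the names are examples only):

svctask chfcconsistgrp -name FCCG_DB2 FCCG_1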

11.9.13 Deleting FlashCopy mapping


Complete the following steps to delete a FlashCopy mapping:
1. From the SVC System panel, move the mouse pointer over Copy Services in the dynamic
menu and then click the FlashCopy, Consistency Groups, or FlashCopy Mappings icon.
2. In the table, select the FlashCopy mapping that you want to delete.

Selecting multiple FlashCopy mappings: To select multiple FlashCopy mappings, hold down Ctrl and click the other entries that you want to delete. This capability is only available in the Consistency Groups panel and the FlashCopy Mappings panel.

3. Click Actions → Delete Mapping, as shown in Figure 11-185.

Tip: You can also right-click a FlashCopy mapping and select Delete Mapping.

Figure 11-185 Selecting the Delete Mapping option


4. The Delete FlashCopy Mapping window opens, as shown in Figure 11-186. In the “Verify
the number of FlashCopy mappings that you are deleting” field, you must enter the
number of mappings that you want to delete. This verification helps to avoid deleting the
wrong mappings.
If you still have target volumes that are inconsistent with the source volumes and you want
to delete these FlashCopy mappings, select Delete the FlashCopy mapping even when
the data on the target volume is inconsistent, or if the target volume has other
dependencies.
Click Delete, as shown in Figure 11-186.

Figure 11-186 Delete FlashCopy Mapping
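
CLI alternative: A mapping can also be deleted with the rmfcmap command. As a sketch (FCMAP_1 is an example name), -force is needed only when the target volume is not consistent with the source:

svctask rmfcmap -force FCMAP_1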

11.9.14 Deleting FlashCopy Consistency Group

Important: Deleting a Consistency Group does not delete the FlashCopy mappings.

Complete the following steps to delete a FlashCopy Consistency Group:


1. From the SVC System panel, move the mouse pointer over Copy Services in the dynamic
menu and then click Consistency Groups.
2. Select the FlashCopy Consistency Group that you want to delete.
3. Click Actions → Delete, as shown in Figure 11-187 on page 809.


Figure 11-187 Delete Consistency Group action

4. The Warning window opens, as shown in Figure 11-188. Click Yes.

Figure 11-188 Warning window
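
CLI alternative: The same action is available through the rmfcconsistgrp command, for example (FCCG_1 is an example name; add -force if the group still contains mappings):

svctask rmfcconsistgrp FCCG_1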

11.9.15 Starting FlashCopy process


When the FlashCopy mapping is created, the copy process can be started. Only mappings
that are not members of a Consistency Group can be started individually. Complete the
following steps:
1. From the SVC System panel, move the mouse pointer over Copy Services in the dynamic
menu and then select FlashCopy Mappings.
2. In the table, choose the FlashCopy mapping that you want to start.
3. Click Actions → Start (as shown in Figure 11-189) to start the FlashCopy process.

Tip: You can also right-click a FlashCopy mapping and select Start.

Figure 11-189 Start the FlashCopy process action


4. You can check the FlashCopy progress in the Progress column of the table or in the
Running Tasks status area. After the task completes, the FlashCopy mapping is in the
Copied state, as shown in Figure 11-190.

Figure 11-190 Checking the FlashCopy progress
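
CLI alternative: A stand-alone mapping can be started with startfcmap, and a group of mappings with startfcconsistgrp. The following lines are a sketch with example names only; -prep prepares the mapping (including flushing the cache) before it is triggered:

svctask startfcmap -prep FCMAP_1
svctask startfcconsistgrp -prep FCCG_1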

11.9.16 Stopping FlashCopy process


When a FlashCopy copy process is stopped, the target volume becomes invalid and is set
offline by the SVC. The FlashCopy mapping must be restarted to bring the target volume
online again.

Important: Stop a FlashCopy copy process only when the data on the target volume is
useless, or if you want to modify the FlashCopy mapping. When a FlashCopy mapping is
stopped, the target volume becomes invalid and it is set offline by SVC.

Complete the following steps to stop a FlashCopy copy process:


1. From the SVC System panel, move the mouse pointer over Copy Services in the dynamic
menu and select FlashCopy Mappings.
2. Choose the FlashCopy mapping that you want to stop.
3. Click Actions → Stop (as shown in Figure 11-191) to stop the FlashCopy Consistency
Group copy process.

Figure 11-191 Stopping the FlashCopy copy process

The FlashCopy Mapping status changes to Stopped, as shown in Figure 11-192 on


page 811.


Figure 11-192 FlashCopy Mapping status
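
CLI alternative: The same action is available through the stopfcmap command (or stopfcconsistgrp for a group). Sketch with an example name only:

svctask stopfcmap FCMAP_1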

11.10 Copy Services: Managing Remote copy


If you have only a few relationships, it is often easier to manage Metro Mirror or Global Mirror
by using the GUI. When many relationships are used, run your commands by using the CLI.

In this section, we describe the tasks that you can perform at a remote copy level.

The following panels are used to visualize and manage your remote copies:
• The Remote Copy panel, as shown in Figure 11-193.
By using the Metro Mirror and Global Mirror Copy Services features, you can set up a
relationship between two volumes so that updates that are made by an application to one
volume are mirrored on the other volume. The volumes can be in the same SVC clustered
system or on two separate SVC systems.
To access the Remote Copy panel, move the mouse pointer over the Copy Services
selection and click Remote Copy.

Figure 11-193 Remote Copy panel

• The Partnerships panel, as shown in Figure 11-194.


Partnerships can be used to create a disaster recovery environment, or to migrate data
between systems that are in separate locations. Partnerships define an association
between a local system and a remote system. To access the Partnerships panel, move the
mouse pointer over the Copy Services selection and click Partnerships.

Figure 11-194 Partnerships panel


11.10.1 System partnership


System partnerships are not limited to one-to-one pairings. A partnership that uses Fibre
Channel, FCoE, or IP can be established among multiple SVC clustered systems, and you
can use partnerships to create the following types of configurations with a maximum of four
connected SVC systems:
• Star configuration, as shown in Figure 11-195.

Figure 11-195 Star configuration

• Triangle configuration, as shown in Figure 11-196.

Figure 11-196 Triangle configuration


• Fully connected configuration, as shown in Figure 11-197.

Figure 11-197 Fully connected configuration

• Daisy-chain configuration, as shown in Figure 11-198.

Figure 11-198 Daisy-chain configuration

Important: All SVC clustered systems must be at level 5.1 or higher. A system can be
partnered with up to three remote systems. No more than four systems can be in the same
connected set. Only one IP partnership is supported.

11.10.2 Creating Fibre Channel partnership


We perform this operation to create the partnership on both SVC systems by using FC.

Intra-cluster Metro Mirror: If you are creating an intra-cluster Metro Mirror, do not perform
this next step to create the SVC clustered system Metro Mirror partnership. Instead, go to
11.10.4, “Creating stand-alone remote copy relationships” on page 817.

To create an FC partnership between the SVC systems by using the GUI, complete the
following steps:
1. From the SVC System panel, move the mouse pointer over Copy Services in the dynamic
menu and click Partnerships. The Partnerships panel opens, as shown in Figure 11-200
on page 814.


Figure 11-199 Selecting Partnerships window

2. Click Create Partnership to create a partnership with another SVC system, as shown in
Figure 11-200.

Figure 11-200 Create a partnership

3. In the Create Partnership window (Figure 11-201), complete the following information:
– Select the partnership type, either Fibre Channel or IP.

Figure 11-201 Select the type of partnership


– Select an available partner system from the drop-down list, as shown in Figure 11-202.
If no candidate is available, the following error message is displayed:
This system does not have any candidates.
– Enter a link bandwidth (Mbps) that is used by the background copy process between
the systems in the partnership. Set this value so that it is less than or equal to the
bandwidth that can be sustained by the communication link between the systems. The
link must sustain any host requests and the rate of the background copy.
– Enter the background copy rate.
– Click OK to confirm the partnership relationship.
4. As shown in Figure 11-202, our partnership is in the Partially Configured state because
this work was performed only on one side of the partnership so far.

Figure 11-202 Viewing system partnerships

To fully configure the partnership between both systems, perform the same steps on the
other SVC system in the partnership. For simplicity and brevity, we show only the two most
significant windows when the partnership is fully configured.
5. Start the SVC GUI for ITSO SVC 5 and select ITSO SVC 3 for the system partnership. We
specify the available bandwidth for the background copy (200 Mbps) and then click OK.
Now that both sides of the SVC system partnership are defined, the resulting windows
(which are shown in Figure 11-203 and Figure 11-204 on page 815) confirm that our
remote system partnership is now in the Fully Configured state. Figure 11-203 shows the
remote system ITSO SVC 5 from the local system ITSO SVC 3.

Figure 11-203 System ITSO SVC 3: Fully configured remote partnership

Figure 11-204 shows the remote system ITSO SVC 3 from the local system ITSO SVC 5.

Figure 11-204 System ITSO SVC 5: Fully configured remote partnership
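
CLI alternative: On recent software levels, the Fibre Channel partnership can also be created with the mkfcpartnership command, run once on each system (older releases use mkpartnership instead). The system name and values below are examples only; verify the syntax for your release:

svctask mkfcpartnership -linkbandwidthmbits 200 -backgroundcopyrate 50 ITSO_SVC_5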

11.10.3 Creating IP-based partnership


For more information about this feature, see the IBM Knowledge Center under the Metro
Mirror and Global Mirror > IP partnership requirements topic:
https://ibm.biz/BdEpPB


To create an IP partnership between the SVC systems by using the GUI, complete the
following steps:
1. From the SVC System panel, move the mouse pointer over Copy Services in the dynamic
menu and click Partnerships. The Partnerships panel opens, as shown in Figure 11-199 on
page 814.
2. Click Create Partnership to create a partnership with another SVC system, as shown in
Figure 11-200 on page 814.

Figure 11-205 Create Partnership: Select IP

3. In the Create Partnership window (as shown in Figure 11-206), complete the following
information:
– Select the IP partnership type, as shown in Figure 11-205.
– Enter the IP address of the remote partner system.
– Select the link bandwidth in a unit of Mbps that is used by the background copy
process between the systems in the partnership. Set this value so that it is less than or
equal to the bandwidth that can be sustained by the communication link between the
systems. The link must sustain any host requests and the rate of the background copy.
– Select the background copy rate.
– If you want, enable CHAP authentication by providing a CHAP secret.
As shown in Figure 11-206, our partnership is in the Partially Configured state because
only the work on one side of the partnership was completed so far.

Figure 11-206 Viewing system partnerships

To fully configure the partnership between both systems, we must perform the same steps
on the other SVC system in the partnership. For simplicity and brevity, we only show the
two most significant windows when the partnership is fully configured.


4. Starting the SVC GUI for ITSO SVC 5, select ITSO SVC 3 for the system partnership.
Specify the available bandwidth for the background copy (100 Mbps) and then click OK.
Now that both sides of the SVC system partnership are defined, the resulting windows (as
shown in Figure 11-207 and Figure 11-208 on page 817) confirm that our remote system
partnership is now in the Fully Configured state. Figure 11-207 shows the remote system
ITSO SVC 5 from the local system ITSO SVC 3.

Figure 11-207 System ITSO SVC 3: Fully configured remote partnership

Figure 11-208 on page 817 shows the remote system ITSO SVC 3 from the local system
ITSO SVC 5.

Figure 11-208 System ITSO SVC 5: Fully configured remote partnership

Note: The definition of the bandwidth setting that is used when the IP partnership is created
has changed. Previously, the bandwidth setting defaulted to 50 MBps, and it was the
maximum transfer rate from the primary site to the secondary site for initial volume
synchronization or resynchronization.

The link bandwidth setting is now configured in Mbps rather than MBps, and you set it to a
value that the communication link can sustain or that is allocated for replication. The
background copy rate setting is now a percentage of the link bandwidth, and it determines
the bandwidth that is available for initial synchronization and resynchronization or for Global
Mirror with Change Volumes.
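
CLI alternative: An IP partnership can also be created with the mkippartnership command, again run on both systems. The following line is a sketch only (the address and values are examples; check the command-line reference for the parameters that your release accepts, for example for CHAP authentication):

svctask mkippartnership -type ipv4 -clusterip 10.1.1.10 -linkbandwidthmbits 100 -backgroundcopyrate 50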

11.10.4 Creating stand-alone remote copy relationships


In this section, we create remote copy mappings for volumes with their respective remote
targets. The source and target volumes were created before this operation was done on both
systems. The target volume must have the same size as the source volume.

Complete the following steps to create stand-alone copy relationships:


1. From the SVC System panel, select Copy Services → Remote Copy → Actions.
2. Click Create Relationship, as shown in Figure 11-209.

Figure 11-209 Create Relationship action


3. In the Create Relationship window, select one of the following types of relationships that
you want to create (as shown in Figure 11-210 on page 818):
– Metro Mirror
This type of remote copy creates a synchronous copy of data from a primary volume to
a secondary volume. A secondary volume can be on the same system or on another
system.
– Global Mirror
This type provides a consistent copy of a source volume on a target volume. Data is
written to the target volume asynchronously so that the copy is continuously updated.
However, the copy might not contain the last few updates if a disaster recovery
operation is performed.
– Global Mirror with Change Volumes
This type provides a consistent copy of a source volume on a target volume. Data is
written to the target volume asynchronously so that the copy is continuously updated.
Change Volumes are used to record changes to the remote copy volume. Changes can
then be copied to the remote system asynchronously. The FlashCopy relationship
exists between the remote copy volume and the Change Volume.
The FlashCopy mapping that is associated with a Change Volume is for internal use. The
user cannot manipulate it as they can a normal FlashCopy mapping, and most svctask
*fcmap commands fail against it.

Figure 11-210 Select the type of relationship that you want to create

4. We want to create a Metro Mirror relationship. See Figure 11-211.


Figure 11-211 Selecting Metro Mirror as the type of relationship

Click Next.
5. In the next window, select the location of the auxiliary volumes, as shown in
Figure 11-212:
– On this system, which means that the volumes are local.
– On another system, which means that you select the remote system from the
drop-down list.
After you make a selection, click Next.

Figure 11-212 Specifying the location of the auxiliary volumes

6. In the New Relationship window that is shown in Figure 11-213, you can create
relationships. Select a master volume in the Master drop-down list. Then, select an
auxiliary volume in the Auxiliary drop-down list for this master and click Add. If needed,
repeat this step to create other relationships.


Figure 11-213 Select a volume for mirroring

Important: The master and auxiliary volumes must be of equal size. Therefore, only
the targets with the appropriate size are shown in the list box for a specific source
volume.

7. To remove a relationship that was created, click the remove icon next to the relationship,
as shown in Figure 11-214 on page 820.

Figure 11-214 Create the relationships between the master and auxiliary volumes

After all of the relationships that you want to create are shown, click Next.
8. Specify whether the volumes are synchronized, as shown in Figure 11-215. Then, click
Next.


Figure 11-215 Volumes are already synchronized

9. In the last window, select whether you want to start to copy the data, as shown in
Figure 11-216. Click Finish.

Figure 11-216 Synchronize now

10.Figure 11-217 shows that the task to create the relationship is complete.

Figure 11-217 Creation of Remote Copy relationship complete


The relationships are visible in the Remote Copy panel. If you selected to copy the data,
you can see that the status is Inconsistent Copying. You can check the copying progress in
the Running Tasks status area.
After the copy is finished, the relationship status changes to Consistent Synchronized.
Figure 11-218 on page 822 shows the Consistent Synchronized status.

Figure 11-218 Consistent copy of the mirrored volumes
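
CLI alternative: A stand-alone relationship can also be created with the mkrcrelationship command. A minimal sketch (the volume, system, and relationship names are examples only; add -global for Global Mirror, and -sync if the volumes are already synchronized):

svctask mkrcrelationship -master MM_Master_Vol -aux MM_Aux_Vol -cluster ITSO_SVC_5 -name MM_Rel_1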

11.10.5 Creating Consistency Group


To create a Consistency Group, complete the following steps:
1. From the SVC System panel, select Copy Services → Remote Copy.
2. Click Create Consistency Group, as shown in Figure 11-219.

Figure 11-219 Selecting the Create Consistency Group option

3. Enter a name for the Consistency Group, and then, click Next, as shown in Figure 11-220.

Consistency Group name: If you do not provide a name, the SVC automatically
generates the name rccstgrpX, where X is the ID sequence number that is assigned by
the SVC internally. You can use the letters A - Z and a - z, the numbers 0 - 9, and the
underscore (_) character. The Consistency Group name can be 1 - 15 characters. No
blanks are allowed.


Figure 11-220 Enter a Consistency Group name

4. In the next window, select where the auxiliary volumes are located, as shown in
Figure 11-221:
– On this system, which means that the volumes are local
– On another system, which means that you select the remote system in the drop-down
list
After you make a selection, click Next.

Figure 11-221 Location of auxiliary volumes

5. Select whether you want to add relationships to this group, as shown in Figure 11-222.
The following options are available:
– If you select Yes, click Next to continue the wizard and go to step 6.
– If you select No, click Finish to create an empty Consistency Group that can be used
later.


Figure 11-222 Add relationships to this group

6. Select one of the following types of relationships to create, as shown in Figure 11-223 on
page 825:
– Metro Mirror
This type of remote copy creates a synchronous copy of data from a primary volume to
a secondary volume. A secondary volume can be on the same system or on another
system.
– Global Mirror
This type provides a consistent copy of a source volume on a target volume. Data is
written to the target volume asynchronously so that the copy is continuously updated,
but the copy might not contain the last few updates if a disaster recovery operation is
performed.
– Global Mirror with Change Volumes
This type provides a consistent copy of a source volume on a target volume. Data is
written to the target volume asynchronously so that the copy is continuously updated.
Change Volumes are used to record changes to the remote copy volume.
Changes can then be copied to the remote system asynchronously. The FlashCopy
relationship exists between the remote copy volume and the Change Volume.
FlashCopy mapping with Change Volumes is for internal use. The user cannot
manipulate this type of mapping like a normal FlashCopy mapping.
Most svctask *fcmap commands fail.
Click Next.


Figure 11-223 Select the type of relationship that you want to create

7. As shown in Figure 11-224 on page 825, you can optionally select existing relationships to
add to the group. Click Next.

Note: To select multiple relationships, hold down Ctrl and click the entries that you want
to include.

Figure 11-224 Select existing relationships to add to the group

8. In the window that is shown in Figure 11-225, you can create relationships. Select a
volume in the Master drop-down list. Then, select a volume in the Auxiliary drop-down list
for this master. Click Add, as shown in Figure 11-225. Repeat this step to create other
relationships, if needed.

Important: The master and auxiliary volumes must be of equal size. Therefore, only
the targets with the appropriate size are displayed for a specific source volume.

To remove a relationship that was created, click the remove icon (Figure 11-225). After all of
the relationships that you want to create are displayed, click Next.


Figure 11-225 Create relationships between the master and auxiliary volumes

9. Specify whether the volumes are already synchronized. Then, click Next, as shown in
Figure 11-226.

Figure 11-226 Volumes are already synchronized

10.In the last window, select whether you want to start to copy the data. Then, click Finish, as
shown in Figure 11-227.

Figure 11-227 Synchronize now

11.The relationships are visible in the Remote Copy panel. If you selected to copy the data,
you can see that the status of the relationships is Inconsistent copying. You can check the


copying progress in the Running Tasks status area, as shown in Figure 11-228 on
page 827.

Figure 11-228 Consistency Group created with relationship in copying and synchronized status

After the copies are completed, the relationships and the Consistency Group change to the
Consistent Synchronized status.
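
CLI alternative: An empty remote copy Consistency Group can also be created with the mkrcconsistgrp command, for example (the names are examples only; -cluster identifies the remote system that owns the auxiliary volumes):

svctask mkrcconsistgrp -name CG_W2K3 -cluster ITSO_SVC_5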

11.10.6 Renaming Consistency Group


To rename a Consistency Group, complete the following steps:
1. From the SVC System panel, select Copy Services → Remote Copy.
2. In the panel, select the Consistency Group that you want to rename. Then, select
Actions → Rename, as shown in Figure 11-229.

Figure 11-229 Renaming a Consistency Group

3. Enter the new name that you want to assign to the Consistency Group and click Rename,
as shown in Figure 11-230 on page 828.


Figure 11-230 Changing the name for a Consistency Group

Consistency Group name: The Consistency Group name can consist of the letters
A - Z and a - z, the numbers 0 - 9, the dash (-), and the underscore (_) character. The
name can be 1 - 15 characters. However, the name cannot start with a number, dash,
or an underscore character. No blanks are allowed.

The new Consistency Group name is displayed on the Remote Copy panel.

11.10.7 Renaming remote copy relationship


Complete the following steps to rename a remote copy relationship:
1. From the SVC System panel, select Copy Services → Remote Copy.
2. In the table, select the remote copy relationship mapping that you want to rename. Click
Actions → Rename, as shown in Figure 11-231.

Tip: You can also right-click a remote copy relationship and select Rename.

Figure 11-231 Rename remote copy relationship action

3. In the Rename Relationship window, enter the new name that you want to assign to the
FlashCopy mapping and click Rename, as shown in Figure 11-232 on page 829.


Figure 11-232 Renaming a remote copy relationship

Remote copy relationship name: You can use the letters A - Z and a - z, the numbers
0 - 9, and the underscore (_) character. The remote copy name can be 1 - 15
characters. No blanks are allowed.
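
CLI alternative: The same renames can be done with the chrcrelationship and chrcconsistgrp commands, for example (the names are examples only):

svctask chrcrelationship -name MM_Rel_DB2 MM_Rel_1
svctask chrcconsistgrp -name CG_DB2 CG_W2K3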

11.10.8 Moving stand-alone remote copy relationship to Consistency Group


Complete the following steps to move a remote copy relationship to a Consistency Group:
1. From the SVC System panel, click Copy Services → Remote Copy.
2. Expand the Not in a Group column.
3. Select the relationship that you want to move to the Consistency Group.
4. Click Actions → Add to Consistency Group, as shown in Figure 11-233.

Tip: You can also right-click a remote copy relationship and select Add to Consistency
Group.

Figure 11-233 Add to Consistency Group action

5. In the Add Relationship to Consistency Group window, select the Consistency Group for
this remote copy relationship by using the drop-down list, as shown in Figure 11-234 on
page 830. Click Add to Consistency Group to confirm your changes.


Figure 11-234 Adding a relationship to a Consistency Group
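
CLI alternative: A stand-alone relationship can also be added to a Consistency Group with the chrcrelationship command (the names are examples only):

svctask chrcrelationship -consistgrp CG_W2K3 MM_Rel_1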

11.10.9 Removing remote copy relationship from Consistency Group


Complete the following steps to remove a remote copy relationship from a Consistency
Group:
1. From the SVC System panel, select Copy Services → Remote Copy.
2. Select a Consistency Group.
3. Select the remote copy relationship that you want to remove from the Consistency Group.
4. Click Actions → Remove from Consistency Group, as shown in Figure 11-235.

Tip: You can also right-click a remote copy relationship and select Remove from
Consistency Group.

Figure 11-235 Remove from Consistency Group action

5. In the Remove Relationship From Consistency Group window, click Remove, as shown in
Figure 11-236 on page 831.


Figure 11-236 Remove a relationship from a Consistency Group
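
CLI alternative: The equivalent CLI action is chrcrelationship with the -noconsistgrp flag (the relationship name is an example only; verify the flag for your release):

svctask chrcrelationship -noconsistgrp MM_Rel_1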

11.10.10 Starting remote copy relationship


When a remote copy relationship is created, the remote copy process can be started. Only
relationships that are not members of a Consistency Group, or the only relationship in a
Consistency Group, can be started individually.

Complete the following steps to start a remote copy relationship:


1. From the SVC System panel, select Copy Services → Remote Copy.
2. Expand the Not in a Group column.
3. In the table, select the remote copy relationship that you want to start.
4. Click Actions → Start to start the remote copy process, as shown in Figure 11-237.

Tip: You can also right-click a relationship and select Start from the list.

Figure 11-237 Starting the remote copy process

5. After the task is complete, the remote copy relationship status has a Consistent
Synchronized state, as shown in Figure 11-238.


Figure 11-238 Consistent Synchronized remote copy relationship
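
CLI alternative: A stand-alone relationship can also be started with the startrcrelationship command. Sketch with example names only (the optional -primary flag selects which copy acts as the source):

svctask startrcrelationship -primary master MM_Rel_1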

11.10.11 Starting remote copy Consistency Group


All of the mappings in a Consistency Group are brought to the same state. To start the remote
copy Consistency Group, complete the following steps:
1. From the SVC System panel, select Copy Services → Remote Copy.
2. Select the Consistency Group that you want to start, as shown in Figure 11-239 on
page 832.

Figure 11-239 Remote Copy Consistency Groups view

3. Click Actions → Start (Figure 11-240) to start the remote copy Consistency Group.

Figure 11-240 Start action

4. You can check the remote copy Consistency Group progress, as shown in Figure 11-241.


Figure 11-241 Checking the remote copy Consistency Group progress

5. After the task completes, the Consistency Group and all of its relationships are in a
Consistent Synchronized state, as shown in Figure 11-242 on page 833.

Figure 11-242 Consistent Synchronized Consistency Group
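
CLI alternative: The group equivalent is the startrcconsistgrp command (CG_W2K3 is an example name):

svctask startrcconsistgrp CG_W2K3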

11.10.12 Switching copy direction


When a remote copy relationship is in the Consistent Synchronized state, the copy direction
for the relationship can be changed. Only relationships that are not a member of a
Consistency Group (or the only relationship in a Consistency Group) can be switched
individually. These relationships can be switched from master to auxiliary or from auxiliary to
master, as required.

Important: When the copy direction is switched, no outstanding I/O can exist to the
volume that changes from primary to secondary because all I/O is inhibited to that volume
when it becomes the secondary. Therefore, careful planning is required before you switch
the copy direction for a remote copy relationship.

Complete the following steps to switch a remote copy relationship:


1. From the SVC System panel, select Copy Services → Remote Copy.
2. Expand the Not in a Group column.
3. In the table, select the remote copy relationship that you want to switch.
4. Click Actions → Switch (Figure 11-243 on page 834) to start the remote copy process.

Tip: You can also right-click a relationship and select Switch.


Figure 11-243 Switch copy direction action

5. The Warning window that is shown in Figure 11-244 opens. A confirmation is needed to
switch the remote copy relationship direction. The remote copy is switched from the
master volume to the auxiliary volume. Click Yes.

Figure 11-244 Warning window

Figure 11-245 on page 834 shows the command-line output about this task.

Figure 11-245 Command-line output for switch relationship action

The copy direction is now switched, as shown in Figure 11-246. The auxiliary volume is
now accessible and shown as the primary volume. Also, the auxiliary volume is now
synchronized to the master volume.


Figure 11-246 Checking remote copy synchronization direction
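
CLI alternative: The direction of a relationship can also be switched with the switchrcrelationship command. Sketch with an example name only; -primary names the copy that becomes the new primary:

svctask switchrcrelationship -primary aux MM_Rel_1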

11.10.13 Switching the copy direction for a Consistency Group


When a Consistency Group is in the Consistent Synchronized state, the copy direction for this
Consistency Group can be changed.

Important: When the copy direction is switched, it is crucial that no outstanding I/O exists
to the volume that changes from primary to secondary because all of the I/O is inhibited to
that volume when it becomes the secondary. Therefore, careful planning is required before
you switch the copy direction for a Consistency Group.

Complete the following steps to switch a Consistency Group:


1. From the SVC System panel, select Copy Services → Remote Copy.
2. Select the Consistency Group that you want to switch.
3. Click Actions → Switch (as shown in Figure 11-247) to start the remote copy process.

Tip: You can also right-click a relationship and select Switch.

Figure 11-247 Switch action

4. The warning window that is shown in Figure 11-248 opens. A confirmation is needed to
switch the Consistency Group direction. In the example that is shown in Figure 11-248, the
Consistency Group is switched from the master group to the auxiliary group. Click Yes.


Figure 11-248 Warning window for ITSO SVC 3

The remote copy direction is now switched, as shown in Figure 11-249 on page 836. The
auxiliary volume is now accessible and shown as a primary volume.

Figure 11-249 Checking Consistency Group synchronization direction
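
CLI alternative: For a Consistency Group, the equivalent command is switchrcconsistgrp (the group name is an example only):

svctask switchrcconsistgrp -primary aux CG_W2K3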

11.10.14 Stopping remote copy relationship


After it is started, the remote copy process can be stopped, if needed. Only relationships that
are not a member of a Consistency Group (or the only relationship in a Consistency Group)
can be stopped individually. You can also use this action to enable write access to a
consistent secondary volume.

Complete the following steps to stop a remote copy relationship:


1. From the SVC System panel, select Copy Services → Remote Copy.
2. Expand the Not in a Group column.
3. In the table, select the remote copy relationship that you want to stop.
4. Click Actions → Stop (as shown in Figure 11-250) to stop the remote copy process.

Tip: You can also right-click a relationship and select Stop from the list.


Figure 11-250 Stop action

5. The Stop Remote Copy Relationship window opens, as shown in Figure 11-251 on
page 837. To allow secondary read/write access, select Allow secondary read/write
access. Then, click Stop Relationship.

Figure 11-251 Stop Remote Copy Relationship window

6. Figure 11-252 shows the command-line text for the stop remote copy relationship.

Figure 11-252 Stop remote copy relationship command-line output

The new relationship status can be checked, as shown in Figure 11-253 on page 838. The
relationship is now Consistent Stopped.


Figure 11-253 Checking remote copy synchronization status
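
CLI alternative: A relationship can also be stopped with the stoprcrelationship command. Sketch with an example name only; add -access to enable write access to the secondary volume:

svctask stoprcrelationship -access MM_Rel_1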

11.10.15 Stopping Consistency Group


After it is started, the Consistency Group can be stopped, if necessary. You can also use this
task to enable write access to consistent secondary volumes.

Perform the following steps to stop a Consistency Group:


1. From the SVC System panel, select Copy Services → Remote Copy.
2. In the table, select the Consistency Group that you want to stop.
3. Click Actions → Stop (as shown in Figure 11-254) to stop the remote copy Consistency
Group.

Tip: You can also right-click a relationship and select Stop from the list.

Figure 11-254 Selecting the Stop option

4. The Stop Remote Copy Consistency Group window opens, as shown in Figure 11-255 on
page 839. To allow secondary read/write access, select Allow secondary read/write
access. Then, click Stop Consistency Group.


Figure 11-255 Stop Remote Copy Consistency Group window

The new relationship status can be checked, as shown in Figure 11-256. The relationship
is now Consistent Stopped.

Figure 11-256 Checking remote copy synchronization status
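
CLI alternative: The group equivalent is the stoprcconsistgrp command (example name only; -access again enables secondary write access):

svctask stoprcconsistgrp -access CG_W2K3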

11.10.16 Deleting stand-alone remote copy relationships


Complete the following steps to delete a stand-alone remote copy mapping:
1. From the SVC System panel, select Copy Services → Remote Copy.
2. In the table, select the remote copy relationship that you want to delete.

Multiple remote copy mappings: To select multiple remote copy mappings, hold down
Ctrl and click the entries that you want.

3. Click Actions → Delete, as shown in Figure 11-257 on page 840.

Tip: You can also right-click a remote copy mapping and select Delete.


Figure 11-257 Selecting the Delete Relationship option

4. The Delete Relationship window opens (Figure 11-258). In the “Verify the number of
relationships that you are deleting” field, enter the number of relationships that you want to
delete. This verification helps to avoid deleting the wrong relationships.
Click Delete, as shown in Figure 11-258.

Figure 11-258 Delete remote copy relationship
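
CLI alternative: A relationship can also be deleted with the rmrcrelationship command (the name is an example only):

svctask rmrcrelationship MM_Rel_1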

11.10.17 Deleting Consistency Group

Important: Deleting a Consistency Group does not delete its remote copy mappings.

Complete the following steps to delete a Consistency Group:


1. From the SVC System panel, select Copy Services → Remote Copy.
2. In the left column, select the Consistency Group that you want to delete.
3. Click Actions → Delete, as shown in Figure 11-259 on page 841.


Figure 11-259 Selecting the Delete Consistency Group option

4. The warning window that is shown in Figure 11-260 opens. Click Yes.

Figure 11-260 Confirmation message
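
CLI alternative: The group equivalent is the rmrcconsistgrp command (example name only; add -force if the group still contains relationships):

svctask rmrcconsistgrp CG_W2K3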

11.11 Managing SVC clustered system from GUI


This section describes the various configuration and administrative tasks that you can
perform on the SVC clustered system.

11.11.1 System status information


From the System panel, complete the following steps to display the system and node
information:
1. On the SVC System panel, move the mouse pointer over the Monitoring selection and
click System.
The System Status panel opens, as shown in Figure 11-261 on page 842.


Figure 11-261 System status panel

On the System status panel (beneath the SVC nodes), you can view the global storage
usage, as shown in Figure 11-262. By using this method, you can monitor the physical
capacity and the allocated capacity of your SVC system. You can change between the
Allocation view and the Compression view to see the capacity usage and space savings of
the Real-time Compression feature, as shown in Figure 11-263.

Figure 11-262 Physical capacity information: Allocation view

Figure 11-263 Physical capacity information: Compression view


11.11.2 View I/O Groups and their associated nodes


The System status panel shows an overview of the SVC system with its I/O Groups and their
associated nodes. As shown in Figure 11-264, the node status can be checked by using a
color code that represents its status.

Figure 11-264 System view with node status

You can click an individual node, or right-click the node, as shown in Figure 11-265, to open
the list of actions.

Figure 11-265 System view with node properties

If you click Properties, you see the following view, as shown in Figure 11-266 on page 844.


Figure 11-266 Properties for a node

Under View in the list of actions, you can see information about the Fibre Channel Ports, as
shown in Figure 11-267.

Figure 11-267 View Fibre Channel ports

Figure 11-268 shows the Fibre Channel ports of an active node.

Figure 11-268 Fibre Channel ports of an active node


11.11.3 View SVC clustered system properties


Complete the following steps to view the SVC clustered system properties:
1. To see more information about the system, select one node and right-click. The options
are shown in Figure 11-269.

Figure 11-269 Options for more information about that system

2. The following actions are available:
• Rename
• Modify Site
• Power Off
• Remove
• View
• Show Dependent Volumes
• Properties

11.11.4 Renaming SVC clustered system


All objects in the SVC system have names that are user defined or system generated.
Choose a meaningful name when you create an object. If you do not choose a name for the
object, the system generates a name for you. A well-chosen name serves not only as a label
for an object, but also as a tool for tracking and managing the object. Choosing a meaningful
name is important if you decide to use configuration backup and restore.

Naming rules
When you choose a name for an object, the following rules apply:
• Names must begin with a letter.

Important: Do not start names by using an underscore (_) character even though it is
possible. The use of the underscore as the first character of a name is a reserved
naming convention that is used by the system configuration restore process.

• The first character cannot be numeric.


• The name can be a maximum of 63 characters with the following exceptions: The name
can be a maximum of 15 characters for Remote Copy relationships and groups. The
lsfabric command displays long object names that are truncated to 15 characters for
nodes and systems. Version 5.1.0 systems display truncated volume names when they
are partnered with a version 6.1.0 or later system that has volumes with long object names
(lsrcrelationshipcandidate or lsrcrelationship commands).


• Valid characters are uppercase letters (A - Z), lowercase letters (a - z), digits (0 - 9), the
underscore (_) character, a period (.), a hyphen (-), and a space.
• Names must not begin or end with a space.
• Object names must be unique within the object type. For example, you can have a volume
named ABC and an MDisk called ABC, but you cannot have two volumes called ABC.
• The default object name is valid (object prefix with an integer).
• Objects can be renamed to their current names.

To rename the system from the System panel, complete the following steps:
1. Click Actions in the upper-left corner of the SVC System panel, as shown in
Figure 11-270.

Figure 11-270 Actions on the System panel

2. From the panel, select Rename System, as shown in Figure 11-271 on page 846.

Figure 11-271 Select Rename System

3. The panel opens, as shown in Figure 11-272. Specify a new name for the system and click
Rename.


Figure 11-272 Rename the system

System name: You can use the letters A - Z and a - z, the numbers 0 - 9, and the
underscore (_) character. The clustered system name can be 1 - 63 characters.

4. The Warning window opens, as shown in Figure 11-273 on page 847. If you are using the
iSCSI protocol, changing the system name or the iSCSI Qualified Name (IQN) also
changes the IQN of all of the nodes in the system. Changing the system name or the IQN
might require the reconfiguration of all iSCSI-attached hosts. This reconfiguration might be
required because the IQN for each node is generated by using the system and node
names.

Figure 11-273 System rename warning

5. Click Yes.
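
CLI alternative: The system can also be renamed with the chsystem command (the new name is an example only; the same IQN considerations apply):

svctask chsystem -name ITSO_SVC_NEW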

11.11.5 Renaming site information of the nodes


The SVC supports configuring site settings that describe the location of the nodes and
storage systems that are deployed in a stretched system configuration. This site information
configuration is only part of the configuration process for stretched systems. The site
information makes it possible for the SVC to manage and reduce the amount of data that is
transferred between the two sides of the system, which reduces the costs of maintaining the
system.

Three site objects are automatically defined by the SVC and numbered 1, 2, and 3. The SVC
creates the corresponding default names, site1, site2, and site3, for each of the site
objects. Site1 and site2 are the two sites that make up the two halves of the stretched
system, and site3 is the location of the quorum disk. You can rename the sites to describe
your data center locations.

Chapter 11. Operations using the GUI 847


7933 10 GUI Operations Libor.fm Draft Document for Review February 4, 2016 8:01 am

To rename the sites, follow these steps:


1. On the System panel, select Actions in the upper-left corner.
2. The Actions menu opens. Click Rename Sites, as shown in Figure 11-274.

Figure 11-274 Rename sites action

3. The Rename Sites panel with the site information opens, as shown in Figure 11-275.

Figure 11-275 Rename Sites default panel

4. Enter the appropriate site information. Figure 11-276 shows the updated Rename Sites
panel. Click Rename.


Figure 11-276 Rename the sites
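
CLI alternative: On software levels that support site objects, the sites can also be renamed with the chsite command. This is a sketch only (the site ID and name are examples; verify the command for your release):

svctask chsite -name ITSO_DC1 1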

11.11.6 Rename a node


To rename a node, follow these steps:
1. Right-click one of the nodes. The Properties panel for this node opens, as shown in
Figure 11-277 on page 849.
2. Click Rename.

Figure 11-277 Information panel for a node

3. Enter the new name of the node and click Rename (Figure 11-278).

Figure 11-278 Enter the new name of the node
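
CLI alternative: A node can also be renamed with the chnode command (the node ID and name are examples only):

svctask chnode -name ITSO_SVC_NODE1 1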


11.11.7 Shutting down SVC clustered system


If all input power to the SVC clustered system is removed for more than a few minutes (for
example, if the machine room power is shut down for maintenance), it is important that you
shut down the system before you remove the power. Shutting down the system while it is still
connected to the main power ensures that the uninterruptible power supply unit’s batteries or
the batteries of the DH8 nodes are still fully charged when the power is restored.

Important: Starting with the 2145-DH8 nodes, separate uninterruptible power supply (UPS)
units are no longer used. The batteries are now included in the nodes.

If you remove the main power while the system is still running, the uninterruptible power
supply units or internal batteries detect the loss of power and instruct the nodes to shut down.
This shutdown can take several minutes to complete. Although the uninterruptible power
supply units or internal batteries have sufficient power to perform the shutdown, you
unnecessarily drain a unit’s batteries.

When power is restored, the SVC nodes start. However, one of the first checks that is
performed by the SVC node is to ensure that the batteries have sufficient power to survive
another power failure, which enables the node to perform a clean shutdown.

(You do not want the batteries to run out of power before the node’s shutdown activities
complete.) If the batteries are not charged sufficiently, the node does not start.

It can take up to 3 hours to charge the batteries sufficiently for a node to start.

Important: When a node shuts down because of a power loss, the node dumps the cache
to an internal hard disk drive so that the cached data can be retrieved when the system
starts. With 2145-8F2/8G4 nodes, the cache is 8 GiB. With 2145-CF8/CG8 nodes, the
cache is 24 GiB. With 2145-DH8 nodes, the cache is up to 64 GiB. Therefore, this process
can take several minutes to dump to the internal drive.

The SVC uninterruptible power supply units or internal batteries are designed to survive at
least two power failures in a short time. After that time, the nodes do not start until the
batteries have sufficient power to survive another immediate power failure.

During maintenance activities, if the uninterruptible power supply units or batteries detect
power and then detect a loss of power multiple times (the nodes start and shut down more
than once in a short time), you might discover that you unknowingly drained the batteries. You
must wait until they are charged sufficiently before the nodes start.

Important: Before a system is shut down, quiesce all I/O operations that are directed to
this system because you lose access to all of the volumes that are serviced by this
clustered system. Failure to quiesce all I/O operations might result in failed I/O operations
that are reported to your host operating systems.

You do not need to quiesce all I/O operations if you are shutting down only one SVC node.

Begin the process of quiescing all I/O activity to the system by stopping the applications on
your hosts that are using the volumes that are provided by the system.

If you are unsure which hosts are using the volumes that are provided by the SVC system,
follow the procedure that is described in 10.6.23, “Showing the host to which the volume is
mapped” on page 617, and repeat this procedure for all volumes.


From the System status panel, complete the following steps to shut down your system:
1. Click Actions, as shown in Figure 11-279 on page 851. Select Power Off System.

Figure 11-279 Action panel to power off the system

2. A confirmation window opens, as shown in Figure 11-280.

Figure 11-280 Confirmation window to confirm the shutdown of the system

3. Ensure that you stopped all FlashCopy mappings, remote copy relationships, data
migration operations, and forced deletions before you continue.

Important: Pay special attention when encryption is enabled on any storage pool. A
USB drive that contains the stored encryption keys must be inserted when the system is
restarted; otherwise, the data is not readable after the restart.

4. Click Power Off to begin the shutdown process.

You completed the required tasks to shut down the system. You can now shut down the
uninterruptible power supply units by pressing the power buttons on their front panels. The
internal batteries of the 2145-DH8 nodes will shut down automatically with the nodes.

Note: Starting with IBM Spectrum Virtualize V7.6, you can no longer power off a single node
from the management GUI. Even if you select a single node and choose Power Off System
from the context menu, the whole clustered system is powered off.
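
CLI alternative: The whole clustered system can also be shut down from the CLI. As a sketch (verify the command for your software level before you use it, because it takes the system offline):

svctask stopsystem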


11.12 Upgrading software


The following sections describe software updates.

11.12.1 Updating system software


From the System status panel, complete the following steps to upgrade the software of your
SVC clustered system.
1. Click Actions.
2. Click Update → System, as shown in Figure 11-281 on page 852.

Figure 11-281 Action tab to update system software

3. Follow the process that is described in 11.17.8, “Upgrading IBM Spectrum Virtualize
software” on page 896.

11.12.2 Update drive software


You can update drives by downloading and applying firmware updates. By using the
management GUI, you can update drives that are on the system. Drives that are on external
storage systems cannot be updated by using this panel. The management GUI can update
individual drives or update all drives that have available updates. A drive is not updated if it
meets any of the following conditions:
• The drive is offline.
• The drive is failed.
• The RAID array to which the drive belongs is not redundant.
• The drive firmware is the same as the update firmware.
• The drive has dependent volumes that will go offline during the update.
• The drive is used as a boot drive for the system.

To update drives, complete the following steps:


1. Go to http://www.ibm.com/storage/support/2145 to locate and download the firmware
update file to the system.
2. From the Actions menu, select Update → Drives to update all drives, as shown in
Figure 11-282 on page 853.


Figure 11-282 Update drive software

3. The Upgrade All Drives panel opens as shown in Figure 11-283.


4. Select the folder icon. An Explorer window opens, where you can select the upgrade
package, which is stored on your local disk.
5. Click Upgrade, as shown in Figure 11-283.

Figure 11-283 Enter the location of the upgrade package

To update the internal drives, select Pools → Internal Storage in the management GUI.
To update specific drives, select the drive or drives and select Actions → Update. Click
Browse and select the directory where you downloaded the firmware update file. Click
Upgrade. Depending on the number of drives and the size of the system, drive updates
can take up to 10 hours to complete.
6. To monitor the progress of the update, click the Running Tasks icon on the bottom center
of the management GUI window and then click Drive Update Operations. You can also
use the Monitoring → Events panel to view any completion or error messages that relate
to the update.
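
CLI alternative: Individual drives can also be updated with the applydrivesoftware command after the firmware package is copied to the system. A minimal sketch (the file name and drive ID are examples only; verify the syntax for your release):

svctask applydrivesoftware -file IBM2145_DRIVE_FIRMWARE -type firmware -drive 3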


11.13 Managing I/O Groups


In SVC terminology, an I/O Group is a pair of SVC nodes combined in a clustered system.
Nodes in each I/O Group must consist of similar hardware; however, different I/O Groups of a
system can be built from different hardware models, as illustrated in Figure 11-284.

Figure 11-284 Combination of hardware in different I/O Groups

In our lab environment, io_grp0 is built from the 2145-DH8 nodes and io_grp1 consists of a
previous model 2145-CF8. This configuration is typical when you are upgrading your data
center storage virtualization infrastructure to a newer SVC platform.

To see the I/O Group details, move the mouse pointer over Actions and click Properties. The
Properties are shown in Figure 11-285. Alternatively, hover the mouse pointer over the I/O
Group name and right-click to open a menu and navigate to Properties.

Figure 11-285 I/O Group information

The following information is shown in the panel:


• Name
• Version
• ID
• I/O Groups
• Topology


• Control enclosures
• Expansion enclosures
• Internal capacity

11.14 Managing nodes


In this section, we describe how to manage SVC nodes.

11.14.1 Viewing node properties


From the Monitoring → System panel, complete the following steps to review SVC node
properties:
1. Move the mouse over one of the nodes. Right-click this node and select Properties.
2. The Properties panel opens, as shown in Figure 11-286.

Figure 11-286 Properties of one node

3. The following tasks are available for this node (Figure 11-287).

Figure 11-287 Node tasks


The following tasks are shown:


– Rename
– Modify Site
– Identify
– Power Off System (powers off the whole system, not just this node)
– Remove
– View → Fibre Channel Ports
– Show Dependent Volumes
– Properties
4. To view node hardware properties, move the mouse over the hardware parts of the node
(Figure 11-289). You must “turn” or rotate the machine in the GUI by clicking the Rotate
arrow with the mouse, as shown in Figure 11-288.

Figure 11-288 Rotate arrow

5. The System window (Figure 11-289) shows how to obtain additional information about
certain hardware parts.

Figure 11-289 Node hardware information

6. Right-click the FC adapter to open the Properties view (Figure 11-290).


Figure 11-290 Properties view of the FC adapter

7. Figure 11-291 shows the properties for an FC adapter.

Figure 11-291 Properties of FC adapter

11.14.2 Renaming a node


For information, see 11.11.6, “Rename a node” on page 849.

11.14.3 Adding node to SVC system


Before you add a node to a system, ensure that you configure the switch zoning so that the
node that you add is in the same zone as all other nodes in the system. If you are replacing a
node and the switch is zoned by worldwide port name (WWPN) rather than by switch port,
you must follow the service instructions carefully to continue to use the same WWPNs.
Complete the following steps to add a node to the SVC clustered system:
1. If the switch setting is correct, you see the additional I/O Group as a gray empty frame on
the System panel. Figure 11-292 on page 858 shows this empty frame.


Figure 11-292 Available I/O Group or nodes on the System panel

2. Hover the mouse cursor over the empty gray frame and click it. The Action panel for the
system opens, as shown in Figure 11-293. Click Add Nodes.

Figure 11-293 Panel to add nodes

3. In the Add Nodes window (Figure 11-294), you see the available nodes, which are in
candidate mode and able to join the cluster.


Figure 11-294 Available nodes

Important: You must have at least two nodes in an I/O Group.

4. Select the available nodes to be added and click Next. You are prompted to enable
encryption on the selected nodes; encryption licenses must be installed on the system. See
Figure 11-295.

Figure 11-295 The process to add one node

5. Click Next and the summary of action is displayed as shown in Figure 11-296 on
page 860.


Figure 11-296 Summary of Add action

Click Finish and the SVC system will add the node to the cluster.

Important: When a node is added to a system, it displays a state of “Adding” and a yellow
warning triangle with an exclamation point. The process to add a node to the system can
take up to 30 minutes, particularly if the software version of the node changes. The added
nodes are updated to the code version of the running cluster.
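
A node can also be added from the CLI, for example when you script the expansion of a cluster.
The following lines are a sketch only; addnode is the command that is listed for the Service
role in Table 11-1, the parameter names are assumptions to verify against the command-line
reference for your code level, and <panel_name> is a placeholder for the value reported by
lsnodecandidate:

   svcinfo lsnodecandidate
   svctask addnode -panelname <panel_name> -iogrp io_grp1 -name SVC1N3

The lsnodecandidate output lists the nodes that are in candidate state; the addnode command
joins the selected candidate to I/O Group io_grp1 under the example name SVC1N3.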

11.14.4 Removing node from SVC clustered system


From the System panel, complete the following steps to remove a node:
1. Select a node and right-click it, as shown in Figure 11-297. Select Remove.

Figure 11-297 Remove a node from the SVC clustered system action


2. The Warning window that is shown in Figure 11-298 opens.

Figure 11-298 Warning window when you remove a node

By default, the cache is flushed before the node is deleted to prevent data loss if a failure
occurs on the other node in the I/O Group.
In certain circumstances, such as when the system is degraded, you can take the
specified node offline immediately without flushing the cache or ensuring that data loss
does not occur. Select Bypass check for volumes that will go offline, and remove the
node immediately without flushing its cache.
3. Click Yes to confirm the removal of the node. See the System Details panel to verify a
node removal, as shown in Figure 11-299.

Figure 11-299 System Details panel with one SVC node removed

If this node is the last node in the system, the warning message differs, as shown in
Figure 11-300 on page 862. Before you delete the last node in the system, ensure that you
want to destroy the system. The user interface and any open CLI sessions are lost.


Figure 11-300 Warning window for the removal of the last node in the cluster

After you click OK, the node is a candidate to be added back into this system or into
another system.
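
The equivalent CLI operation is sketched in the following lines; rmnode is the command that is
listed for the Service role in Table 11-1, and the node name is an example only:

   svcinfo lsnode
   svctask rmnode SVC1N3

Use lsnode to confirm the name or ID of the node, and check its dependent volumes first (for
example, with the Show Dependent Volumes action) so that removing the node does not take
volumes offline.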

11.15 Troubleshooting
The events that are detected by the system are saved in a system event log. When an entry is
made in this event log, the condition is analyzed and classified to help you diagnose
problems.

11.15.1 Events panel


In the Monitoring actions selection menu, the Events panel (Figure 11-301) displays the event
conditions that require action and the procedures to diagnose and fix them.

To access this panel from the SVC System panel, move a mouse pointer over Monitoring in
the dynamic menu and select Events.

Figure 11-301 Monitoring: Events selection


The list of system events opens with the highest-priority event indicated and information
about how long ago the event occurred. Click Close to return to the Recommended Actions
panel.

Note: If an event is reported, you must select the event and run a fix procedure.

Running the fix procedure


To run a procedure to fix an event, complete the following steps:
1. In the table, select an event.
2. Click Actions → Run Fix Procedure, as shown in Figure 11-302.

Tip: You can also click Run Fix at the top of the panel (Figure 11-302) to solve the most
critical event.

Figure 11-302 Run Fix Procedure action

3. The Directed Maintenance Procedure window opens, as shown in Figure 11-303. Follow
the steps in the wizard to fix the event.

Sequence of steps: We do not describe all of the possible steps here because the
steps that are involved depend on the specific event. The process is always interactive
and you are guided through the entire process.

Figure 11-303 Directed Maintenance Procedure wizard

4. Click Cancel to return to the Recommended Actions panel.


11.15.2 Event log


In the Events panel (Figure 11-304), you can choose to display the SVC event log by
Recommended Actions, Unfixed Messages and Alerts, or Show All events.

To access this panel from the SVC System panel that is shown in Figure 11-1 on page 716,
move the mouse pointer over the Monitoring selection in the dynamic menu and click Events.
Then, in the upper-left corner of the panel, select Recommended actions, Unfixed messages
and alerts, or Show all.

Figure 11-304 SVC event log

Certain alerts have a four-digit error code and a fix procedure that helps you fix the problem.
Other alerts also require an action, but they do not have a fix procedure. Messages are fixed
when you acknowledge reading them.
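
The same information can be inspected from the CLI. The following commands are a sketch only:
lseventlog and the exact behavior of cherrstate are assumptions to verify against the
command-line reference for your code level (cherrstate is listed for the Service role in
Table 11-1 on page 874), and the sequence number is an example:

   svcinfo lseventlog
   svctask cherrstate -sequencenumber 120

The first command lists the event log entries with their sequence numbers; the second marks
the entry with sequence number 120 as fixed.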

Filtering events
You can filter events in various ways. Filtering can be based on event status, as described in
“Basic filtering”, or over a period, as described in “Time filtering” on page 865. You can also
search the event log for a specific text string by using table filtering, as described in “Overview
window” on page 722.

Certain events require a specific number of occurrences in 25 hours before they are displayed
as unfixed. If they do not reach this threshold in 25 hours, they are flagged as expired.
Monitoring events are beneath the coalesce threshold and are transient.

You can also sort events by time or error code. When you sort by error code, the most serious
events (those events with the lowest numbers) are displayed first.

Basic filtering
You can filter the Event log display in one of the following ways by using the drop-down menu
in the upper-left corner of the panel (Figure 11-305 on page 865):
• Display all unfixed alerts and messages: Select Recommended Actions to show all
events that require your attention.
• Show all alerts and messages: Select Unfixed Messages and Alerts.
• Display all event alerts, messages, monitoring, and expired events: Select Show All,
which includes the events that are under the threshold.


Figure 11-305 Filter Event Log display

Time filtering
You can use the following methods to perform time filtering:
• Select a start date and time and an end date and time as a time frame filter. Complete the
following steps to use this method:
a. Click Actions → Filter by Date, as shown in Figure 11-306.

Figure 11-306 Filter by Date action

Tip: You can also access the Filter by Date action by right-clicking an event.

b. The Date/Time Filter window opens, as shown in Figure 11-307. From this window,
select a start date and time and an end date and time.

Figure 11-307 Date/Time Filter window

c. Click Filter and Close. Your panel is now filtered based on the time frame.
To disable this time frame filter, click Actions → Reset Date Filter, as shown in
Figure 11-308 on page 866.


Figure 11-308 Reset Date Filter action

• Select an event and show the entries within a certain period of this event.
To use this time frame filter, complete the following steps:
a. In the table, select an event.
b. Click Actions → Show entries within. Select minutes, hours, or days, and select a
value, as shown in Figure 11-309.

Figure 11-309 Show entries within a certain amount of time after this event

Tip: You can also access the Show entries within action by right-clicking an event.

c. Now, your window is filtered based on the time frame, as shown in Figure 11-310.

Figure 11-310 Time frame filtering

To disable this time frame filter, click Actions → Reset Date Filter, as shown in
Figure 11-311 on page 867.


Figure 11-311 Reset Date Filter action

Marking an event as fixed


To mark one or more events as fixed, complete the following steps:
1. In the table, select one or more entries.

Tip: To select multiple events, hold down Ctrl and click the entries that you want to
select.

2. Click Actions → Mark as Fixed, as shown in Figure 11-312.

Figure 11-312 Mark as Fixed action

Tip: You can also access the Mark as Fixed action by right-clicking an event.

3. The Warning window that is shown in Figure 11-313 opens. Click Yes.

Figure 11-313 Warning window


Exporting event log entries


You can export event log entries to a comma-separated values (CSV) file for further
processing and enhanced filtering with external applications. You can export a full event log or
a filtered result that is based on your requirements. To export an event log entry, complete the
following steps:
1. From the Events panel, show and sort or filter the table to provide the results that you want
to export into a CSV file.
2. Click the diskette icon and save the file to your workstation, as shown in
Figure 11-314.

Figure 11-314 Export event log to a CSV file

3. You can view the file by using Notepad or another program, as shown in Figure 11-315.

Figure 11-315 Viewing the CSV file in Notepad

Clearing the log


To clear the logs, complete the following steps:
1. Click Actions → Clear Log, as shown in Figure 11-316 on page 869.


Figure 11-316 Clear log

2. The Warning window that is shown in Figure 11-317 opens. From this window, you must
confirm that you want to clear all entries from the error log.

Figure 11-317 Warning window

3. Click Yes.

11.15.3 Support panel


From the support panel that is shown in Figure 11-318 on page 870, you can download a
support package that contains log files and information that can be sent to support personnel
to help troubleshoot the system. You can download individual log files or download statesaves,
which are dumps or livedumps of system data.


Figure 11-318 Support panel

Downloading the support package


To download the support package, complete the following steps:
1. Click Download Support Package, as shown in Figure 11-319.

Figure 11-319 Download Support Package

The Download Support Package window opens, as shown in Figure 11-320 on page 870.

Figure 11-320 Download Support Package window

The duration varies: Depending on your choice, this action can take several minutes
to complete.


From this window, select the following types of logs that you want to download:
– Standard logs
These logs contain the most recent logs that were collected for the system. These logs
are the most commonly used by support to diagnose and solve problems.
– Standard logs plus one existing statesave
These logs contain the standard logs for the system and the most recent statesaves
from any of the nodes in the system. Statesaves are also known as dumps or
livedumps.
– Standard logs plus most recent statesave from each node
These logs contain the standard logs for the system and the most recent statesaves
from each node in the system. Statesaves are also known as dumps or livedumps.
– Standard logs plus new statesaves
These logs generate new statesaves (livedumps) for all the nodes in the system and
package the statesaves with the most recent logs.
2. Click Download, as shown in Figure 11-320.
3. Select where you want to save the logs, as shown in Figure 11-321.

Figure 11-321 Save the log file on your personal workstation

Download individual packages


To download packages manually, complete the following tasks:
1. Activate the individual log files view by clicking Show full log listing, as shown in
Figure 11-322.

Figure 11-322 Show full log listing link

2. In the detailed view, select the node from which you want to download the logs by using
the drop-down menu that is in the upper-left corner of the panel, as shown in
Figure 11-323.


Figure 11-323 Node selection

3. Select the package or packages that you want to download, as shown in Figure 11-324 on
page 872.

Figure 11-324 Selecting individual packages

Tip: To select multiple packages, hold down Ctrl and click the entries that you want to
include.

4. Click Actions → Download, as shown in Figure 11-325.

Figure 11-325 Download packages

5. Select where you want to save these logs on your workstation.

Tip: You can also delete packages by clicking Delete in the Actions menu.

CIMOM logging level


Select this option to include the Common Information Model (CIM) object manager (CIMOM)
tracing components and logging details.


Maximum logging level: The maximum logging level can have a significant effect on the
performance of the CIMOM interface.

To change the CIMOM logging level to high, medium, or low, use the drop-down menu in the
upper-right corner of the panel, as shown in Figure 11-326.

Figure 11-326 Change the CIMOM logging level

11.16 User management


Users are managed from within the Access selection section of the dynamic menu in the SVC
GUI, as shown in Figure 11-327.

Figure 11-327 Users panel

Each user account has a name, role, and password assigned to it, which differs from the
Secure Shell (SSH) key-based role approach that is used by the CLI. Starting with version
6.3, you can access the CLI with a password and no SSH key.

Note: Use the default superuser account only for initial configuration and emergency
access. Change its default password (passw0rd) immediately. Always define individual
accounts for the users.

The role-based security feature organizes the SVC administrative functions into groups,
which are known as roles, so that permissions to run the various functions can be granted
differently to the separate administrative users. Table 11-1 on page 874 lists the five major
roles and one special role.


Table 11-1 Authority roles

Role: Security Administrator
Allowed commands: All commands
User: Superusers

Role: Administrator
Allowed commands: All commands except the following svctask commands: chauthservice,
mkuser, rmuser, chuser, mkusergrp, rmusergrp, chusergrp, and setpwdreset
User: Administrators that control the SVC

Role: Copy Operator
Allowed commands: All svcinfo commands and the following svctask commands:
prestartfcconsistgrp, startfcconsistgrp, stopfcconsistgrp, chfcconsistgrp, prestartfcmap,
startfcmap, stopfcmap, chfcmap, startrcconsistgrp, stoprcconsistgrp, switchrcconsistgrp,
chrcconsistgrp, startrcrelationship, stoprcrelationship, switchrcrelationship,
chrcrelationship, and chpartnership
User: For users that control all copy functionality of the cluster

Role: Service
Allowed commands: All svcinfo commands and the following svctask commands: applysoftware,
setlocale, addnode, rmnode, cherrstate, writesernum, detectmdisk, includemdisk, clearerrlog,
cleardumps, settimezone, stopcluster, startstats, stopstats, and settime
User: For users that perform service maintenance and other hardware tasks on the cluster

Role: Monitor
Allowed commands: All svcinfo commands and the following svctask commands: finderr,
dumperrlog, dumpinternallog, chcurrentuser, and the svcconfig command: backup
User: For users that need view access only

Role: VASA Provider
Allowed commands: All commands related to virtual volumes or VVOLs used by VMware vSphere
User: For users and system accounts needed to manage virtual volumes and VVOLs used by
VMware vSphere and managed by IBM Spectrum Virtualize

The superuser user is a built-in account that has the Security Admin user role permissions.
You cannot change permissions or delete this superuser account; you can only change the
password. You can also change this password manually on the front panels of the clustered
system nodes.

An audit log tracks actions that are issued through the management GUI or CLI. For more
information, see 11.16.9, “Audit log information” on page 884.


11.16.1 Creating a user


Complete the following steps to create a user:
1. From the SVC System panel, move the mouse pointer over the Access selection in the
dynamic menu and click Users.
2. On the Users panel, click Create User, as shown in Figure 11-328.

Figure 11-328 Create User

3. The Create User window opens, as shown in Figure 11-329.

Figure 11-329 Create User window

4. Enter a new user name in the Name field.


User name: You can use the letters A - Z and a - z, the numbers 0 - 9, and the
underscore (_) character. The user name can be 1 - 256 characters.

The following types of authentication are available in the Authentication Mode section:
– Local
The authentication method is on the system. Users must be part of a user group that
authorizes them to specific sets of operations.
If you select this type of authentication, use the drop-down list to select the user group
(Table 11-1 on page 874) to which you want this user to belong.
– Remote
Remote authentication allows users of the SVC clustered system to authenticate to the
system by using the external authentication service. The external authentication
service can be IBM Tivoli Integrated Portal or a supported Lightweight Directory
Access Protocol (LDAP) service. Ensure that the remote authentication service is
supported by the SVC clustered system. For more information about remote user
authentication, see 2.12, “User authentication” on page 59.
The following types of local credentials can be configured in the Local Credentials section,
depending on your needs:
– Password authentication
The password authenticates users to the management GUI. Enter the password in the
Password field. Verify the password.

Password: The password can be 6 - 64 characters and it cannot begin or end with a
space.

– SSH public/private key authentication


The SSH public key authenticates users to the CLI. Use Browse to locate and upload
the SSH public key. If you did not create an SSH key pair, you can still access the SVC
system by using your user name and password.
5. To create the user, click Create, as shown in Figure 11-329 on page 875.
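
Users can also be created from the CLI with the mkuser command that is named in Table 11-1.
The following line is a minimal sketch, assuming the -name, -usergrp, and -password
parameters of your code level and using example values only:

   svctask mkuser -name jane -usergrp Administrator -password Sup3rS3cret

The user jane is created as a member of the preset Administrator user group and can log in
to the management GUI with the given password; an SSH public key for CLI access can be added
later from the user properties.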

11.16.2 Modifying the user properties


Complete the following steps to change the user properties:
1. From the SVC System panel, move the pointer over the Access selection in the dynamic
menu and click Users.
2. In the left column, select a User Group.
3. Select a user.
4. Click Actions → Properties, as shown in Figure 11-330 on page 877.

Tip: You can also change user properties by right-clicking a user and selecting
Properties from the list.


Figure 11-330 User Properties action

The User Properties window opens, as shown in Figure 11-331.

Figure 11-331 User Properties window

5. From the User Properties window, you can change the authentication mode and the local
credentials. For the authentication mode, choose the following type of authentication:
– Local
The authentication method is on the system. Users must be part of a user group that
authorizes them to specific sets of operations.
If you select this type of authentication, use the drop-down list to select the user group
(Table 11-1 on page 874) of which you want the user to be part.
– Remote
Remote authentication allows users of the SVC clustered system to authenticate to the
system by using the external authentication service. The external authentication
service can be IBM Tivoli Integrated Portal or a supported LDAP service. Ensure that
the remote authentication service is supported by the SVC clustered system.


For the local credentials, the following types of local credentials can be configured in this
section, depending on your needs:
– Password authentication: The password authenticates users to the management GUI.
You must enter the password in the Password field. Verify the password.

Password: The password can be 6 - 64 characters and it cannot begin or end with a
space.

– SSH public/private key authentication: The SSH key authenticates users to the CLI.
Use Browse to locate and upload the SSH public key.
6. To confirm the changes, click OK (Figure 11-331 on page 877).

11.16.3 Removing a user password

Important: To remove the password for a specific user, the SSH public key must be
defined. Otherwise, this action is not available.

Complete the following steps to remove a user password:


1. From the SVC System panel, move the pointer over the Access selection in the dynamic
menu and click Users.
2. Select the user.
3. Click Actions → Remove Password, as shown in Figure 11-332.

Tip: You can also remove the password by right-clicking a user and selecting Remove
Password.

Figure 11-332 Remove Password action

4. The Warning window that is shown in Figure 11-333 on page 878 opens. Click Yes.

Figure 11-333 Warning window


11.16.4 Removing a user SSH public key

Important: To remove the SSH public key for a specific user, the password must be
defined. Otherwise, this action is not available.

Complete the following steps to remove an SSH public key:


1. From the SVC System panel, move your mouse pointer over the Access selection in the
dynamic menu, and then click Users.
2. Select the user.
3. Click Actions → Remove SSH Key, as shown in Figure 11-334.

Tip: You can also remove the SSH public key by right-clicking a user and selecting
Remove SSH Key.

Figure 11-334 Remove SSH Key action

4. The Warning window that is shown in Figure 11-335 opens. Click Yes.

Figure 11-335 Warning window

11.16.5 Deleting a user


Complete the following steps to delete a user:
1. From the SVC System panel, move the mouse pointer over the Access selection, and then
click Users.
2. Select the user.

Important: To select multiple users to delete, hold down Ctrl and click the entries that
you want to delete.


3. Click Actions → Delete, as shown in Figure 11-336.

Tip: You can also delete a user by right-clicking the user and selecting Delete.

Figure 11-336 Delete a user action

11.16.6 Creating a user group


Six user groups, one for each role that is listed in Table 11-1 on page 874, are created by
default on the SVC. If necessary, you can create other user groups.

Complete the following steps to create a user group:


1. From the SVC System panel, move the pointer over the Access selection on the dynamic
menu, and then click Users. Click Create User Group, as shown in Figure 11-337.

Figure 11-337 Create User Group

The Create User Group window opens, as shown in Figure 11-338.


Figure 11-338 Create User Group window

2. Enter a name for the group in the Group Name field.

Group name: You can use the letters A - Z and a - z, the numbers 0 - 9, and the
underscore (_) character. The group name can be 1 - 63 characters.

Select a role: Monitor, Copy Operator, Service, Administrator, or Security Administrator.
For more information about these roles, see Table 11-1 on page 874.

3. To create the user group name, click Create.


4. You can verify the user group creation in the User Groups panel, as shown in
Figure 11-339 on page 881.

Figure 11-339 Verify user group creation
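
The same task can be performed from the CLI with the mkusergrp command that is named in
Table 11-1. This is a minimal sketch, assuming the -name and -role parameters of your code
level; the group name is an example:

   svctask mkusergrp -name StorageOps -role CopyOperator

The new StorageOps group grants its members the Copy Operator role; users are then assigned
to it with mkuser or chuser.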

11.16.7 Modifying the user group properties

Important: For preset user groups (SecurityAdmin, Administrator, CopyOperator, Service,
and Monitor), you cannot change the related roles.


Complete the following steps to change user group properties:


1. From the SVC System panel, move the mouse pointer over the Access selection on the
dynamic menu and click Users.
2. In the left column, select the User Group.
3. Click Actions → Properties, as shown in Figure 11-340.

Figure 11-340 Modify user group properties

4. The User Group Properties window opens (Figure 11-341 on page 882).

Figure 11-341 User Group Properties window

From this window, you can change the role. You must select a role from Monitor, Copy
Operator, Service, Administrator, Security Administrator, or VASA Provider. For more
information about these roles, see Table 11-1 on page 874.
5. To confirm the changes, click OK, as shown in Figure 11-341.

11.16.8 Deleting a user group


Complete the following steps to delete a user group:
1. From the SVC System panel, move the mouse pointer over the Access selection on the
dynamic menu, and then click Users.


2. In the left column, select the User Group.


3. Click Actions → Delete, as shown in Figure 11-342 on page 883.

Important: You cannot delete the following preset user groups:
• SecurityAdmin
• Administrator
• CopyOperator
• Service
• Monitor
• VASA Provider

Figure 11-342 Delete User Group action

4. The following options are available:


– If you do not have any users in this group, the Delete User Group window opens, as
shown in Figure 11-343. Click Delete to complete the operation.

Figure 11-343 Delete User Group window

– If you have users in this group, the Delete User Group window opens, as shown in
Figure 11-344. The users of this group are moved to the Monitor user group.


Figure 11-344 Delete User Group window

11.16.9 Audit log information


An audit log tracks actions that are issued through the management GUI or the CLI. You can
use the audit log to monitor the user activity on your SVC clustered system.

To view the audit log, from the SVC System panel, move the pointer over the Access selection
on the dynamic menu and click Audit Log, as shown in Figure 11-345.

Figure 11-345 Audit Log entries

The audit log entries provide the following types of information:


• Time and date when the action or command was issued on the system
• Name of the user who performed the action or command
• IP address of the system where the action or command was issued
• Parameters that were issued with the command
• Results of the command or action
• Sequence number
• Object identifier that is associated with the command or action
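
The audit log can also be listed from the CLI. This is a sketch only; catauditlog and its
-first parameter are assumptions to verify against the command-line reference for your code
level:

   svcinfo catauditlog -first 20

The command returns the 20 most recent audit log entries, including the time stamp, the user
name, and the command string that was run.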


Time filtering
The following methods are available to perform time filtering on the audit log:
• Select a start date and time and an end date and time.
To use this time frame filter, complete the following steps:
a. Click Actions → Filter by Date, as shown in Figure 11-346.

Figure 11-346 Audit log time filter

Tip: You can also access the Filter by Date action by right-clicking an entry.

b. The Date/Time Filter window opens (Figure 11-347). From this window, select a start
date and time and an end date and time.

Figure 11-347 Date/Time Filter window

c. Click Filter and Close. Your audit log panel is now filtered based on its time frame.
To disable this time frame filter, click Actions → Reset Date Filter, as shown in
Figure 11-348.

Figure 11-348 Reset Date Filter action

• Select an entry and show the entries within a certain period of this event.
To use this time frame filter, complete the following steps:
a. In the table, select an entry.
b. Click Actions → Show entries within. Select minutes, hours, or days. Then, select a
value, as shown in Figure 11-349.


Figure 11-349 Show entries within action

Tip: You can also access the Show entries within action by right-clicking an entry.

Your panel is now filtered based on the time frame.


To disable this time frame filter, click Actions → Reset Date Filter.

11.17 Configuration
In this section, we describe how to configure various properties of the SVC system.

11.17.1 Configuring the network


The procedure to set up and configure SVC network interfaces is described in Chapter 4,
“Initial configuration” on page 133.

11.17.2 iSCSI configuration


From the iSCSI panel in the Settings menu, you can configure parameters for the system to
connect to iSCSI-attached hosts, as shown in Figure 11-350.

Figure 11-350 iSCSI Configuration


The following parameters can be updated:


• System Name
It is important to set the system name correctly because it is part of the iSCSI qualified
name (IQN) for the node.

Important: If you change the name of the system after iSCSI is configured, you might
need to reconfigure the iSCSI hosts.

To change the system name, click the system name and specify the new name.

System name: You can use the letters A - Z and a - z, the numbers 0 - 9, and the
underscore (_) character. The name can be 1 - 63 characters.

• iSCSI Aliases (Optional)


An iSCSI alias is a user-defined name that identifies the node to the host. Complete the
following steps to change an iSCSI alias:
a. Click an iSCSI alias.
b. Specify a name for it.
Each node has a unique iSCSI name that is associated with two IP addresses. After the
host starts the iSCSI connection to a target node, this IQN from the target node is visible in
the iSCSI configuration tool on the host.
• iSNS and CHAP
You can specify the IP address for the iSCSI Storage Name Service (iSNS). Host systems
use the iSNS server to manage iSCSI targets and for iSCSI discovery.
You can also enable Challenge Handshake Authentication Protocol (CHAP) to
authenticate the system and iSCSI-attached hosts with the specified shared secret.
The CHAP secret is the authentication method that is used to restrict access for other
iSCSI hosts that attempt to use the same connection. You can set the CHAP secret for the
whole system under the system properties or for each host definition. The CHAP secret must
be identical on the server and on the system or host definition. You can create an iSCSI
host definition without the use of CHAP.

11.17.3 Fibre Channel information


As shown in Figure 11-351, you can use the Fibre Channel Connectivity panel to display the
FC connectivity between nodes and other storage systems and hosts that attach through the
FC network. You can filter by selecting one of the following fields:
• All nodes, storage systems, and hosts
• Systems
• Nodes
• Storage systems
• Hosts


Figure 11-351 Fibre Channel

11.17.4 Event notifications


The SVC can use Simple Network Management Protocol (SNMP) traps, syslog messages,
and Call Home email to notify you and the IBM Support Center when significant events are
detected. Any combination of these notification methods can be used simultaneously.

Notifications are normally sent immediately after an event is raised. However, events can
occur because of service actions that are performed. If a recommended service action is
active, notifications about these events are sent only if the events are still unfixed when the
service action completes.

11.17.5 Email notifications


The Call Home feature transmits operational and event-related data to you and IBM through a
Simple Mail Transfer Protocol (SMTP) server connection in the form of an event notification
email. When configured, this function alerts IBM service personnel about hardware failures
and potentially serious configuration or environmental issues.

Complete the following steps to configure email event notifications:


1. From the SVC System panel, move the mouse pointer over the Settings selection and
click Notifications.
2. In the left column, select Email.
3. Click Enable Notifications, as shown in Figure 11-352.

Figure 11-352 Email event notification


4. A wizard opens, as shown in Figure 11-353. In the Email Event Notifications System
Location window, you must first define the system location information (Company name,
Street address, City, State or province, Postal code, and Country or region). Click Next
after you provide this information.

Figure 11-353 Define the system location

5. In the Contact Details window, you must enter contact information to enable IBM Support
personnel to contact the person in your organization to assist with problem resolution
(Contact name, Email address, Telephone (primary), Telephone (alternate), and Machine
location). Ensure that all contact information is valid and click Next, as shown in
Figure 11-354 on page 889.

Figure 11-354 Define the company contact information

6. In the Email Event Notifications Email Servers window (Figure 11-355), configure at least
one email server that is used by your site. Enter a valid IP address and a server port for
each server that is added. Ensure that the email servers are valid. Use Ping to verify the


accessibility of your email server. If the destination is not reachable, the system does not
let you finish the configuration; you must enter a correct and accessible server address.

Figure 11-355 Configure email servers and inventory reporting window

7. The last window displays a summary of the Email Event Notifications wizard. Click
Finish to complete the setup. The wizard closes, and more information is added to the
panel, as shown in Figure 11-356. You can edit or disable email notification from this
window.

Figure 11-356 Email Event Notifications window configured

8. After the initial configuration is done, you can edit all of these settings, and you can
also define exactly which messages are reported through Call Home. Click Edit in the Email
window and commit the changes (Figure 11-357 on page 891).


Figure 11-357 Edit and modify initial settings
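
Email notification can also be set up from the CLI. The following commands are only a sketch;
mkemailserver and chemail, and the parameters shown, are assumptions to verify against the
command-line reference for your code level, and all addresses and values are examples:

   svctask mkemailserver -ip 192.0.2.25 -port 25
   svctask chemail -reply [email protected] -contact "Jane Doe" -primary 5550100 -location "Lab rack 12"

The first command defines the SMTP server that the system uses to send Call Home and
notification email; the second sets the reply address and contact details that are included
in each notification.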

11.17.6 SNMP notifications


Simple Network Management Protocol (SNMP) is a standard protocol for managing networks
and exchanging messages. The system can send SNMP messages that notify personnel
about an event. You can use an SNMP manager to view the SNMP messages that are sent by
the SVC.

You can configure an SNMP server to receive various informational, error, or warning
notifications by entering the following information (Figure 11-358 on page 892):
• IP Address
The address for the SNMP server.
• Server Port
The remote port number for the SNMP server. The remote port number must be a value of
1 - 65535.
• Community
The SNMP community is the name of the group to which devices and management
stations that run SNMP belong.
• Event Notifications:
Consider the following points about event notifications:
– Select Error if you want the user to receive messages about problems, such as
hardware failures, that must be resolved immediately.

Important: Browse to Recommended Actions to run the fix procedures on these
notifications.

– Select Warning if you want the user to receive messages about problems and
unexpected conditions. Investigate the cause immediately to determine any corrective
action.

Important: Browse to Recommended Actions to run the fix procedures on these
notifications.


– Select Info if you want the user to receive messages about expected events. No action
is required for these events.

Figure 11-358 SNMP configuration

To remove an SNMP server, click the minus sign (-).


To add another SNMP server, click the plus sign (+).
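
An SNMP target can also be defined from the CLI. This is a sketch only; mksnmpserver and the
parameters shown are assumptions to verify against the command-line reference for your code
level, and the address and community string are examples:

   svctask mksnmpserver -ip 192.0.2.30 -community public -error on -warning on -info off

The command adds an SNMP server that receives error and warning traps but not informational
events.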

Syslog notifications
The syslog protocol is a standard protocol for forwarding log messages from a sender to a
receiver on an IP network. The IP network can be IPv4 or IPv6. The system can send syslog
messages that notify personnel about an event.

You can configure a syslog server to receive log messages from various systems and store
them in a central repository by entering the following information (Figure 11-359 on
page 893):
• IP Address
The IP address for the syslog server.
• Facility
The facility determines the format for the syslog messages. The facility can be used to
determine the source of the message.
• Message Format
The message format depends on the facility. The system can transmit syslog messages in
the following formats:
– The concise message format provides standard detail about the event.
– The expanded format provides more details about the event.
• Event Notifications:
Consider the following points about event notifications:
– Select Error if you want the user to receive messages about problems, such as
hardware failures, that must be resolved immediately.

Important: Browse to Recommended Actions to run the fix procedures on these
notifications.

– Select Warning if you want the user to receive messages about problems and
unexpected conditions. Investigate the cause immediately to determine whether any
corrective action is necessary.

Important: Browse to Recommended Actions to run the fix procedures on these
notifications.


– Select Info if you want the user to receive messages about expected events. No action
is required for these events.

Figure 11-359 Syslog configuration

To remove a syslog server, click the minus sign (-).


To add another syslog server, click the plus sign (+).

The syslog messages can be sent in concise message format or expanded message format.

Example 11-2 shows a compact format syslog message.

Example 11-2 Compact syslog message example


IBM2145 #NotificationType=Error #ErrorID=077001 #ErrorCode=1070 #Description=Node
CPU fan failed #ClusterName=SVCCluster1 #Timestamp=Wed Jul 02 08:00:00 2014 BST
#ObjectType=Node #ObjectName=Node1 #CopyID=0 #ErrorSequenceNumber=100

Example 11-3 shows an expanded format syslog message.

Example 11-3 Full format syslog message example


IBM2145 #NotificationType=Error #ErrorID=077001 #ErrorCode=1070 #Description=Node
CPU fan failed #ClusterName=SVCCluster1 #Timestamp=Wed Jul 02 08:00:00 2014 BST
#ObjectType=Node #ObjectName=Node1 #CopyID=0 #ErrorSequenceNumber=100 #ObjectID=2
#NodeID=2 #MachineType=21454F2#SerialNumber=1234567 #SoftwareVersion=5.1.0.0
(build 8.14.0805280000)#FRU=fan 24P1118, system board 24P1234
#AdditionalData(0->63)=00000000210000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000#Additional
Data(64-127)=000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000

11.17.7 System options


Use the System panel to change time and date settings, work with licensing options,
download configuration settings, work with VMware VVOLs, or download software upgrade
packages.

Date and time


Complete the following steps to configure the date and time settings:
1. From the SVC System panel, move the pointer over Settings and click System.
2. In the left column, select Date and Time, as shown in Figure 11-360.


Figure 11-360 Date and Time window

3. From this panel, you can modify the following information:


– Time zone
Select a time zone for your system by using the drop-down list.
– Date and time
The following options are available:
• If you are not using a Network Time Protocol (NTP) server, select Set Date and
Time, and then manually enter the date and time for your system, as shown in
Figure 11-361 on page 894. You can also click Use Browser Settings to
automatically adjust the date and time of your SVC system with your local
workstation date and time.

Figure 11-361 Set Date and Time window

• If you are using a Network Time Protocol (NTP) server, select Set NTP Server IP
Address and then enter the IP address of the NTP server, as shown in
Figure 11-362.

Figure 11-362 Set NTP Server IP Address window


4. Click Save.
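
The date, time zone, and NTP settings can also be changed from the CLI. This is a sketch
only; settimezone is the command that is listed for the Service role in Table 11-1, while
lstimezones and the -ntpip parameter of chsystem are assumptions to verify against the
command-line reference for your code level:

   svcinfo lstimezones
   svctask settimezone -timezone 520
   svctask chsystem -ntpip 192.0.2.40

The first command lists the available time zone IDs, the second selects one of them, and the
third points the system at an NTP server so that the time is kept synchronized automatically.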

Licensing
Complete the following steps to configure the licensing settings:
1. From the SVC Settings panel, move the pointer over Settings and click System.
2. In the left column, select License Functions, as shown in Figure 11-363.

Figure 11-363 Licensing window

3. In the Select Your License section, you can choose between the following licensing
options for your SVC system:
– Standard Edition: Select the number of terabytes that are available for your license for
virtualization and for Copy Services functions for this license option.
– Entry Edition: This type of licensing is based on the number of the physical disks that
you are virtualizing and whether you selected to license the FlashCopy function, the
Metro Mirror and Global Mirror function, or both.
4. Set the licensing options for the SVC for the following elements:
– Virtualization Limit
Enter the capacity of the storage that will be virtualized by this system.
– FlashCopy Limit
Enter the capacity that is available for FlashCopy mappings.

Important: The Used capacity for FlashCopy mapping is the sum of all of the
volumes that are the source volumes of a FlashCopy mapping.

– Remote Mirroring Limit


Enter the capacity that is available for Metro Mirror and Global Mirror relationships.


Important: The Used capacity for Global Mirror and Metro Mirror is the sum of the
capacities of all of the volumes that are in a Metro Mirror or Global Mirror
relationship; both master volumes and auxiliary volumes are included.

– Real-time Compression Limit


Enter the total number of terabytes of virtual capacity that are licensed for
compression.
– Virtualization Limit (Entry Edition only)
Enter the total number of physical drives that you are authorized for virtualization.
– Encryption Licenses
Add the license keys for each node that manages or needs to manage encrypted pools
and their volumes.

11.17.8 Upgrading IBM Spectrum Virtualize software


In this section, we describe the operations to upgrade your IBM Spectrum Virtualize software
from version 7.4 to version 7.6.

The format for the software upgrade package name ends in four positive integers that are
separated by dots. For example, a software upgrade package might have the name that is
shown in the following example:
IBM_2145_INSTALL_7.6.0.0

Precautions before the upgrade


In this section, we describe the precautions that you need to take before you attempt an
upgrade.

Important: Before you attempt any SVC code update, read and understand the SVC
concurrent compatibility and code cross-reference matrix. For more information, see the
following website and click Latest IBM SAN Volume Controller code:
http://www.ibm.com/support/docview.wss?uid=ssg1S1001707

During the upgrade, each node in your SVC clustered system is automatically shut down and
restarted by the upgrade process. Because each node in an I/O Group provides an
alternative path to volumes, use the Subsystem Device Driver (SDD) to ensure that all I/O
paths between all hosts and SANs work.

If you do not perform this check, certain hosts might lose connectivity to their volumes and
experience I/O errors when the SVC node that provides that access is shut down during the
upgrade process. You can check the I/O paths by using SDD datapath query commands.
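
As a sketch of that check, run the following SDD commands on each host before you start the
upgrade (the exact output columns depend on the SDD version):

   datapath query adapter
   datapath query device

Confirm that every volume shows at least one open path through each SVC node of its I/O
Group, so that the paths through the partner node remain available while a node is restarted.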

SVC upgrade test utility


It is important to verify that all system requirements are met before the upgrade itself. The
software upgrade test utility is an SVC tool that checks for known issues that can cause
problems during a software upgrade. The SVC software upgrade test utility is available at this
website:
http://www.ibm.com/support/docview.wss?rs=591&uid=ssg1S4000585


You can use the svcupgradetest utility to check for known issues that might cause problems
during an IBM Spectrum Virtualize software upgrade.

The software upgrade test utility can be downloaded in advance of the upgrade process, or it
can be downloaded and run directly during the software upgrade, as guided by the upgrade
wizard.

You can run the utility multiple times on the same SVC system to perform a readiness
check in preparation for a software upgrade. We strongly advise that you run this utility a
final time immediately before you apply the upgrade to ensure that no new releases of the
utility became available since you originally downloaded it.

The installation and use of this utility is non-disruptive and the utility does not require the
restart of any SVC node; therefore, host I/O is not interrupted. The utility is only installed on
the current configuration node.

System administrators must continue to check whether the version of code that they plan to
install is the latest version. You can obtain the latest information at this website:
https://ibm.biz/BdE8Pe

This utility is intended to supplement rather than duplicate the existing tests that are
performed by the SVC upgrade procedure (for example, checking for unfixed errors in the
error log).
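
If you prefer the CLI, the test utility can be installed and run there as well. This is a
sketch only; the package name is a placeholder, and the -v parameter (which names the target
code level) is an assumption to verify in the utility's readme:

   svctask applysoftware -file <svcupgradetest_package>
   svcupgradetest -v 7.6.0.0

The first command installs the utility on the configuration node; the second checks the
system for known issues that could affect an upgrade to version 7.6.0.0.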

Upgrade procedure
To upgrade the IBM Spectrum Virtualize software from version 7.4 to version 7.6, complete
the following steps:
1. Log in with your administrator user ID and password. The SVC management home page
opens. Click Settings → System → Update System.
2. The window that is shown in Figure 11-364 opens.

Figure 11-364 Upgrade Software

From the window that is shown in Figure 11-364, you can select the following options:
– Check for updates: Use this option to check, on the IBM website, whether an SVC
software version is available that is newer than the version that you installed on your
SVC. You need an Internet connection to perform this check.
– Update: Use this option to start the software upgrade process.
3. Click Update to start the upgrade process. The window that is shown in Figure 11-365
opens.


Figure 11-365 Upgrade Package test utility

From the Upgrade Package window, download the upgrade test utility from the IBM
website. Click the folder icons and choose both packages, as outlined in Figure 11-366.
The new code level that is detected from the package appears.

Figure 11-366 Upload upgrade test utility completed

4. When you click Update, the selection window opens. Choose either to update the system
automatically or manually. The differences are explained:
– Updating the system automatically
During the automatic update process, each node in a system is updated one at a time,
and the new code is staged on the nodes. While each node restarts, degradation in the
maximum I/O rate that can be sustained by the system can occur. After all the nodes in
the system are successfully restarted with the new code level, the new level is
automatically committed.
During an automatic code update, each node of a working pair is updated sequentially.
The node that is being updated is temporarily unavailable and all I/O operations to that
node fail. As a result, the I/O error counts increase and the failed I/O operations are
directed to the partner node of the working pair. Applications do not see any I/O
failures. When new nodes are added to the system, the update package is
automatically downloaded to the new nodes from the SVC system.
The update can normally be done concurrently with typical user I/O operations.
However, performance might be affected. If any restrictions apply to the operations that
can be done during the update, these restrictions are documented on the product


website that you use to download the update packages. During the update procedure,
most configuration commands are not available.
– Updating the system manually
During an automatic update procedure, the system updates each of the nodes
systematically. The automatic method is the preferred procedure for updating the code
on nodes; however, to provide more flexibility in the update process, you can also
update each node manually.
During this manual procedure, you prepare the update, remove a node from the
system, update the code on the node, and return the node to the system. You repeat
this process for the remaining nodes until the last node is removed from the system.
Every node must be updated to the same code level. You cannot interrupt the update
and switch to installing a different level.
After all of the nodes are updated, you must confirm the update to complete the
process. The confirmation restarts each node in order and takes about 30 minutes to
complete.
We selected an Automatic update (Figure 11-367).

Figure 11-367 Select the type of update

5. When you click Finish, the IBM Spectrum Virtualize software upgrade starts. The window
that is shown in Figure 11-368 opens. The system starts with the upload of the test utility
and the SVC system firmware.

Figure 11-368 Uploading the test utility and the code

6. After a while, the system automatically starts to run the update test utility.


Figure 11-369 Running the update test utility

7. If the system detects an issue or an error, the GUI guides you through it. Click Read
more, as shown in Figure 11-370. These issues are typically caused by a configuration
that is not recommended, and we advise that you fix them before you proceed with the
upgrade.

Figure 11-370 Issues that are detected by the update test utility

If you decide that issues or warnings are marginal and do not affect the upgrade process,
confirm the resumption of the upgrade as shown in Figure 11-371.

Figure 11-371 Resuming upgrade

8. The Update Test Utility Results panel opens and describes the results, as shown in
Figure 11-372.


Figure 11-372 Description of the warning from the test utility

9. In our case, we received a warning because we did not enable email notification. So, we
can click Close and proceed with the update. As shown in Figure 11-373, we click
Resume.

Figure 11-373 Resume the installation of the SVC firmware

10.The update process starts, as shown in Figure 11-374.

Figure 11-374 Update process starts


11.When the update for the first node completes, the system is paused for approximately 30
minutes to ensure that all paths are reestablished to the newly updated node
(Figure 11-375 on page 902).

Figure 11-375 System paused to reestablish the paths

12.The SVC updates each node in sequence, with the active configuration node last. When
the configuration node is updated, a failover occurs and you temporarily lose access to the
web console (it goes offline). Click Yes to reestablish the web session, as shown in Figure 11-376.

Figure 11-376 Node failover

13.After a refresh, you can see that the system is updated.

Figure 11-377 System is updated
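The same update can also be driven from the CLI instead of the GUI. The following is a
minimal sketch only; the package file names, the test utility version, and the
/home/admin/upgrade target directory are illustrative assumptions, so verify the exact file
names and procedure for your code level in the IBM Knowledge Center:

C:\Program Files\PuTTY>pscp -load ITSO_SVC3 IBM2145_INSTALL_upgradetest_20.1 superuser@<cluster_ip>:/home/admin/upgrade
C:\Program Files\PuTTY>pscp -load ITSO_SVC3 IBM2145_INSTALL_7.6.0.0 superuser@<cluster_ip>:/home/admin/upgrade
IBM_2145:ITSO_SVC3:superuser>applysoftware -file IBM2145_INSTALL_upgradetest_20.1
IBM_2145:ITSO_SVC3:superuser>svcupgradetest -v 7.6.0.0
IBM_2145:ITSO_SVC3:superuser>applysoftware -file IBM2145_INSTALL_7.6.0.0

The applysoftware command that references the main installation package starts the same
automatic, node-by-node update that the GUI drives.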

11.18 VMware Virtual Volumes


IBM Spectrum Virtualize V7.6 and later can manage VMware vSphere Virtual Volumes
(VVols) directly in cooperation with VMware. VVols allow VMware virtual machines to be
assigned disk capacity directly from the SVC instead of from an ESXi datastore. This
approach allows storage administrators to control the appropriate usage of storage capacity
and to apply enhanced storage virtualization features (such as replication, thin provisioning,
compression, and encryption) directly to the virtual machine.

VVols management is enabled in the SVC System section, as shown in Figure 11-378 on
page 903.

An NTP server must be configured before you enable VVols management. It is highly
recommended to use the same NTP server for ESXi and for the SVC.
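As a minimal CLI sketch, the NTP server can be set as follows (the IP address is only an
example; we assume the chsystem -ntpip parameter that is available at this code level). You
can verify the setting afterward in the lssystem output:

IBM_2145:ITSO_SVC3:superuser>chsystem -ntpip 192.168.1.100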

Figure 11-378 Enabling VVOLs management

A quick-start guide to VVols, Quick-start Guide to Configuring VMware Virtual Volumes for
Systems Powered by IBM Spectrum Virtualize, REDP-5321, is available at this website:

https://www.redbooks.ibm.com/Redbooks.nsf/RedpieceAbstracts/redp5321.html?Open

There is an IBM Redbook in development that will go into more depth about VVols and that
will be released on the IBM Redbooks website shortly.

11.18.1 Resources
IBM Spectrum Virtualize V7.6 introduces advanced management and allocation of system
resources. You can assign a specific amount of system memory to various SVC functions, as
shown in Figure 11-379.

Figure 11-379 Allocating system resources to SVC tasks
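The same allocations can also be adjusted per I/O group from the CLI. The following is a
hedged sketch; we assume the chiogrp -feature parameter with the flash (FlashCopy),
remote (remote copy), and mirror (volume mirroring) keywords, and the 40 MB value and I/O
group name are only examples:

IBM_2145:ITSO_SVC3:superuser>chiogrp -feature flash -size 40 io_grp0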


11.18.2 Setting GUI preferences


The menu GUI Preferences consists of three options:
򐂰 Navigation
򐂰 Login Message
򐂰 General

Navigation
This option enables or disables the animated, dynamic menu of the GUI. You can either have
static icons of a fixed size on the left side, or icons that dynamically change their size when
you hover the mouse cursor over them. To enable the animated icons, select the check box
as shown in Figure 11-380 on page 904.

Figure 11-380 Enabling animated dock

Login Message
The login message is displayed to anyone who logs in to a GUI or CLI session. It can be
defined and enabled either from the GUI or from the CLI after the text file with the message
content is loaded to the system (Figure 11-381).

Figure 11-381 Enabling login message

The details about how to define the login message from the CLI and how it looks after it is
enabled are provided in “Welcome banner” on page 718.

General settings
Complete the following steps to configure general GUI preferences:
1. From the SVC Settings window, move the pointer over Settings and click GUI Preferences
(Figure 11-382 on page 905).


Figure 11-382 General GUI Preferences window

2. You can configure the following elements:


– Refresh GUI cache
This option causes the GUI to refresh all of its views and clears the GUI cache. The
GUI looks up every object again.

– Clear Customization
This option deletes all GUI preferences that are stored in the browser and restores the
default preferences.
– Knowledge Center
You can change the URL of the IBM Spectrum Virtualize Knowledge Center.
– Accessibility
This option enables low graphics mode when the system is connected through a
slower network.
– Advanced pool settings
This option allows you to select the extent size during storage pool creation.
– Default logout time
This option sets the time in minutes after which an inactive session is logged out.


Appendix A. Performance data and statistics gathering

In this appendix, we provide a brief overview of the performance analysis capabilities of the
IBM System Storage SAN Volume Controller and IBM Spectrum Virtualize V7.6. We also
describe a method that you can use to collect and process IBM Spectrum Virtualize
performance statistics.

It is beyond the intended scope of this book to provide an in-depth understanding of
performance statistics or explain how to interpret them. For more information about the
performance of the SVC, see SAN Volume Controller Best Practices and Performance
Guidelines, SG24-7521, which is available at this website:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open


SAN Volume Controller performance overview


Although storage virtualization with the IBM Spectrum Virtualize provides many
administrative benefits, it can also provide a substantial increase in performance for various
workloads. The caching capability of the IBM Spectrum Virtualize and its ability to stripe
volumes across multiple disk arrays can provide a significant performance improvement over
what can otherwise be achieved when midrange disk subsystems are used.

To ensure that your system maintains the performance levels that you want, monitor
performance periodically to provide visibility into potential problems that exist or are
developing so that they can be addressed in a timely manner.

Performance considerations
When you are designing the IBM Spectrum Virtualize infrastructure or maintaining an existing
infrastructure, you must consider many factors in terms of their potential effect on
performance. These factors include, but are not limited to dissimilar workloads competing for
the same resources, overloaded resources, insufficient available resources, poor performing
resources, and similar performance constraints.

Remember the following high-level rules when you are designing your storage area network
(SAN) and IBM Spectrum Virtualize layout:
򐂰 Host-to-SVC inter-switch link (ISL) oversubscription
This area is the most significant I/O load across ISLs. The recommendation is to maintain
a maximum of 7-to-1 oversubscription (see the worked example after this list). A higher
ratio is possible, but it tends to lead to I/O bottlenecks. This suggestion also assumes a
core-edge design, where the hosts are on the edges and the SVC is at the core.
򐂰 Storage-to-SVC ISL oversubscription
This area is the second most significant I/O load across ISLs. The maximum
oversubscription is 7-to-1. A higher ratio is not supported. Again, this suggestion assumes
a multiple-switch SAN fabric design.
򐂰 Node-to-node ISL oversubscription
This area is the least significant load of the three possible oversubscription bottlenecks. In
standard setups, this load can be ignored. Although this area is not entirely negligible, it
does not contribute significantly to the ISL load. However, node-to-node ISL
oversubscription is mentioned here in relation to the split-cluster capability that was made
available with version 6.3. When the system is running in this manner, the number of ISL
links becomes more important. As with the storage-to-SVC ISL oversubscription, this load
also requires a maximum of 7-to-1 oversubscription. Exercise caution and careful planning
when you determine the number of ISLs to implement. If you need assistance, we
recommend that you contact your IBM representative and request technical assistance.
򐂰 ISL trunking/port channeling
For the best performance and availability, we highly recommend that you use ISL trunking
or port channeling. Independent ISL links can easily become overloaded and turn into
performance bottlenecks. Bonded or trunked ISLs automatically share load and provide
better redundancy in a failure.


򐂰 Number of paths per host multipath device


The maximum supported number of paths per multipath device that is visible on the host is
eight. Although the Subsystem Device Driver Path Control Module (SDDPCM), related
products, and most vendor multipathing software can support more paths, the SVC
expects a maximum of eight paths. In general, using more than eight paths only degrades
performance. Although the IBM Spectrum Virtualize can work with more than eight paths,
this design is technically unsupported.
򐂰 Do not intermix dissimilar array types or sizes
Although the IBM Spectrum Virtualize supports an intermix of differing storage within
storage pools, it is best to always use the same array model, RAID mode, RAID size
(RAID 5 6+P+S does not mix well with RAID 6 14+2), and drive speeds.
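As a worked example of the 7-to-1 oversubscription guideline (illustrative numbers only): if 56
host ports, each running at 8 Gbps (448 Gbps of potential host bandwidth), attach to edge
switches that connect to the SVC core through eight 8 Gbps ISLs (64 Gbps), the
oversubscription ratio is 448:64, which is 7-to-1 and therefore at the recommended limit.
Adding more hosts without adding ISL or trunk bandwidth pushes the ratio beyond the
guideline.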

Rules and guidelines are no substitute for monitoring performance. Monitoring performance
can provide validation that design expectations are met and can identify opportunities for
improvement.

IBM Spectrum Virtualize performance perspectives


The software was developed by the IBM Research Group and was designed to run on
commodity hardware (mass-produced Intel-based CPUs with mass-produced expansion
cards) and to provide distributed cache and a scalable cluster architecture. One of the main
goals of this design was to be able to take advantage of hardware refreshes. Currently, the
SVC cluster is scalable up to eight nodes, and these nodes can be swapped for newer
hardware while online. This capability provides great investment value because the nodes
are relatively inexpensive and a node swap can be done online, which provides an instant
performance boost with no license changes. Newer nodes, such as the 2145-CG8 or
2145-DH8 models, with a dramatically increased cache of 8 GB - 64 GB per node, provide an
extra benefit on top of the typical refresh cycle. For more information about node replacement
and swap and instructions about adding nodes, see this website:
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD104437

The performance is near linear when nodes are added to the cluster until performance
eventually becomes limited by the attached components. Also, although virtualization with
IBM Spectrum Virtualize provides significant flexibility in terms of the components that are
used, it does not diminish the necessity of designing the system around the components so
that it can deliver the level of performance that you want.

The key item for planning is your SAN layout. Switch vendors have slightly different planning
requirements, but the end goal is that you always want to maximize the bandwidth that is
available to the SVC ports. The SVC is one of the few devices that can drive ports to their
limits on average, so it is imperative that you put significant thought into planning the SAN
layout.

Essentially, performance improvements are gained by spreading the workload across a
greater number of back-end resources and by the additional caching that is provided by the
SVC cluster. However, the performance of individual resources eventually becomes the
limiting factor.


Performance monitoring
In this section, we highlight several performance monitoring techniques.

Collecting performance statistics


IBM Spectrum Virtualize is constantly collecting performance statistics. The default frequency
by which files are created is 5-minute intervals. Before version 4.3.0, the default was
15-minute intervals, with a supported range of 15 - 60 minutes. The collection interval can be
changed by using the startstats command.

The statistics files (Volume, MDisk, and Node) are saved at the end of the sampling interval
and a maximum of 16 files (each) are stored before they are overlaid in a rotating log fashion.
This design provides statistics for the most recent 80-minute period if the default 5-minute
sampling interval is used. IBM Spectrum Virtualize supports user-defined sampling intervals
of 1 - 60 minutes.

The maximum space that is required for a performance statistics file is 1,153,482 bytes. Up to
128 (16 per each of the three types across eight nodes) different files can exist across eight
SVC nodes. This design makes the total space requirement a maximum of 147,645,694 bytes
for all performance statistics from all nodes in an SVC cluster.

Note: Remember this maximum of 147,645,694 bytes for all performance statistics from all
nodes in an SVC cluster when you are in time-critical situations. The required size is not
otherwise important because SVC node hardware can map the space.

You can define the sampling interval by using the startstats -interval 2 command to
collect statistics at 2-minute intervals. For more information, see 10.8.8, “Starting statistics
collection” on page 630.
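For example, a minimal CLI sequence to change the sampling interval and confirm the new
setting might look like the following (output abbreviated; the exact lssystem field names can
vary slightly between code levels):

IBM_2145:ITSO_SVC3:superuser>startstats -interval 2
IBM_2145:ITSO_SVC3:superuser>lssystem
# non-relevant output lines removed for clarity #
statistics_status on
statistics_frequency 2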

Collection intervals: Although more frequent collection intervals provide a more detailed
view of what happens within IBM Spectrum Virtualize and SVC, they shorten the amount of
time that the historical data is available on the IBM Spectrum Virtualize. For example,
instead of an 80-minute period of data with the default five-minute interval, if you adjust to
2-minute intervals, you have a 32-minute period instead.

Since software version 5.1.0, cluster-level statistics are no longer supported. Instead, use the
per node statistics that are collected. The sampling of the internal performance counters is
coordinated across the cluster so that when a sample is taken, all nodes sample their internal
counters at the same time. It is important to collect all files from all nodes for a complete
analysis. Tools, such as Tivoli Storage Productivity Center, perform this intensive data
collection for you.

Statistics file naming


The statistics files that are generated are written to the /dumps/iostats/ directory. The file
name is in the following formats:
򐂰 Nm_stats_<node_frontpanel_id>_<date>_<time> for managed disk (MDisk) statistics
򐂰 Nv_stats_<node_frontpanel_id>_<date>_<time> for virtual disks (Volumes) statistics
򐂰 Nn_stats_<node_frontpanel_id>_<date>_<time> for node statistics
򐂰 Nd_stats_<node_frontpanel_id>_<date>_<time> for disk drive statistics, not used for the
IBM Spectrum Virtualize on SVC


The node_frontpanel_id is of the node on which the statistics were collected. The date is in
the form <yymmdd> and the time is in the form <hhmmss>. The following example shows an
MDisk statistics file name:

Nm_stats_113986_141031_214932

Example A-1 shows typical MDisk, volume, node, and disk drive statistics file names.

Example A-1 File names of per node statistics


IBM_2145:ITSO_SVC3:superuser>svcinfo lsiostatsdumps
id iostat_filename
1 Nd_stats_113986_141031_214932
2 Nv_stats_113986_141031_214932
3 Nv_stats_113986_141031_215132
4 Nd_stats_113986_141031_215132
5 Nd_stats_113986_141031_215332
6 Nv_stats_113986_141031_215332
7 Nv_stats_113986_141031_215532
8 Nd_stats_113986_141031_215532
9 Nv_stats_113986_141031_215732
10 Nd_stats_113986_141031_215732
11 Nv_stats_113986_141031_215932
12 Nd_stats_113986_141031_215932
13 Nm_stats_113986_141031_215932

Tip: The performance statistics files can be copied from the SVC nodes to a local drive on
your workstation by using the pscp.exe (included with PuTTY) from an MS-DOS command
line, as shown in this example:
C:\Program Files\PuTTY>pscp -unsafe -load ITSO_SVC3 superuser@<cluster_ip>:/dumps/iostats/* c:\statsfiles

Use the -load parameter to specify the session that is defined in PuTTY.

Specify the -unsafe parameter when you use wildcards.

You can obtain PuTTY at this website:


http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html

qperf
qperf is an unofficial (no-charge and unsupported) collection of awk scripts. qperf was made
available for download from IBM Techdocs. It was written by Christian Karpp. qperf is
designed to provide a quick performance overview by using the command-line interface (CLI)
and a UNIX Korn shell. (It can also be used with Cygwin.)

qperf is available for download from this website:


http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD105947

svcmon
svcmon is no longer available.

The performance statistics files are in .xml format. They can be manipulated by using various
tools and techniques. Figure A-1 on page 912 shows an example of the type of chart that you
can produce by using the IBM Spectrum Virtualize performance statistics.
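For example, after you copy the files to a UNIX-style workstation, a quick way to check how
many volume entries a Volume statistics file contains is a simple grep. This is a sketch only;
we assume the vdsk element name that is used in our sample files:

grep -c "<vdsk " Nv_stats_113986_141031_215932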


Figure A-1 Spreadsheet example

Real-time performance monitoring


Starting with software version 6.2.0, the SVC supports real-time performance monitoring.
Real-time performance statistics provide short-term status information for the SVC. The
statistics are shown as graphs in the management GUI or can be viewed from the CLI. With
system-level statistics, you can quickly view the CPU usage and the bandwidth of volumes,
interfaces, and MDisks. Each graph displays the current bandwidth in megabytes per second
(MBps) or I/Os per second (IOPS), and a view of bandwidth over time.

Each node collects various performance statistics, mostly at 5-second intervals, and the
statistics are available from the config node in a clustered environment. This information
can help you determine the performance effect of a specific node. As with system statistics,
node statistics help you to evaluate whether the node is operating within normal performance
metrics.

Real-time performance monitoring gathers the following system-level performance statistics:


򐂰 CPU utilization
򐂰 Port utilization and I/O rates
򐂰 Volume and MDisk I/O rates
򐂰 Bandwidth
򐂰 Latency

Real-time statistics are not a configurable option and cannot be disabled.

Real-time performance monitoring with the CLI


The lsnodestats and lssystemstats commands are available for monitoring the statistics
through the CLI. Next, we show you examples of how to use them.


The lsnodestats command provides performance statistics for the nodes that are part of a
clustered system, as shown in Example A-2 (the output is truncated and shows only part of
the available statistics). You can also specify a node name in the command to limit the output
for a specific node.

Example A-2 lsnodestats command output


IBM_2145:ITSO_SVC3:admin>lsnodestats
node_id node_name stat_name stat_current stat_peak stat_peak_time
1 Node 1 compression_cpu_pc 0 0 141031225017
1 Node 1 cpu_pc 2 2 141031225017
1 Node 1 fc_mb 0 9 141031224722
1 Node 1 fc_io 1086 1089 141031224857
1 Node 1 sas_mb 0 0 141031225017
1 Node 1 sas_io 0 0 141031225017
1 Node 1 iscsi_mb 0 0 141031225017
1 Node 1 iscsi_io 0 0 141031225017
1 Node 1 write_cache_pc 0 0 141031225017
1 Node 1 total_cache_pc 0 0 141031225017
1 Node 1 vdisk_mb 0 0 141031225017
1 Node 1 vdisk_io 0 0 141031225017
1 Node 1 vdisk_ms 0 0 141031225017
1 Node 1 mdisk_mb 0 9 141031224722
1 Node 1 mdisk_io 0 66 141031224722
1 Node 1 mdisk_ms 0 5 141031224722
1 Node 1 drive_mb 0 0 141031225017
1 Node 1 drive_io 0 0 141031225017
1 Node 1 drive_ms 0 0 141031225017
1 Node 1 vdisk_r_mb 0 0 141031225017
.....
2 Node 2 compression_cpu_pc 0 0 141031225016
2 Node 2 cpu_pc 0 1 141031225006
2 Node 2 fc_mb 0 0 141031225016
2 Node 2 fc_io 1029 1051 141031224806
2 Node 2 sas_mb 0 0 141031225016
2 Node 2 sas_io 0 0 141031225016
2 Node 2 iscsi_mb 0 0 141031225016
2 Node 2 iscsi_io 0 0 141031225016
2 Node 2 write_cache_pc 0 0 141031225016
2 Node 2 total_cache_pc 0 0 141031225016
2 Node 2 vdisk_mb 0 0 141031225016
2 Node 2 vdisk_io 0 0 141031225016
2 Node 2 vdisk_ms 0 0 141031225016
2 Node 2 mdisk_mb 0 0 141031225016
2 Node 2 mdisk_io 0 1 141031224941
2 Node 2 mdisk_ms 0 20 141031224741
2 Node 2 drive_mb 0 0 141031225016
2 Node 2 drive_io 0 0 141031225016
2 Node 2 drive_ms 0 0 141031225016
2 Node 2 vdisk_r_mb 0 0 141031225016
...


The previous example shows statistics for the two node members of cluster ITSO_SVC3. For
each node, the following columns are displayed:
򐂰 stat_name: Provides the name of the statistic field
򐂰 stat_current: The current value of the statistic field
򐂰 stat_peak: The peak value of the statistic field in the last 5 minutes
򐂰 stat_peak_time: The time that the peak occurred

In contrast, the lssystemstats command lists the same set of statistics that is listed with the
lsnodestats command, but represents all nodes in the cluster. The values for these
statistics are calculated from the node statistics values in the following way:
򐂰 Bandwidth: Sum of bandwidth of all nodes
򐂰 Latency: Average latency for the cluster, which is calculated by using data from the whole
cluster, not an average of the single node values
򐂰 IOPS: Total IOPS of all nodes
򐂰 CPU percentage: Average CPU percentage of all nodes

Example A-3 shows the resulting output of the lssystemstats command.

Example A-3 lssystemstats command output


IBM_2145:ITSO_SVC3:admin>lssystemstats
stat_name stat_current stat_peak stat_peak_time
compression_cpu_pc 0 0 141031230031
cpu_pc 0 1 141031230021
fc_mb 0 9 141031225721
fc_io 1942 2175 141031225836
sas_mb 0 0 141031230031
sas_io 0 0 141031230031
iscsi_mb 0 0 141031230031
iscsi_io 0 0 141031230031
write_cache_pc 0 0 141031230031
total_cache_pc 0 0 141031230031
vdisk_mb 0 0 141031230031
vdisk_io 0 0 141031230031
vdisk_ms 0 0 141031230031
mdisk_mb 0 9 141031225721
...
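Both commands also accept a -history parameter, which returns the recent samples of a
named statistic instead of only the current and peak values (a hedged sketch; verify that the
parameter is available at your code level):

IBM_2145:ITSO_SVC3:admin>lssystemstats -history cpu_pc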

Table A-1 has a brief description of each of the statistics that are presented by the
lssystemstats and lsnodestats commands.

Table A-1 lssystemstats and lsnodestats statistics field name descriptions


Field name Unit Description

cpu_pc Percentage Utilization of node CPUs.

fc_mb MBps Fibre Channel bandwidth.

fc_io IOPS Fibre Channel throughput.

sas_mb MBps Serial-attached SCSI (SAS) bandwidth.

sas_io IOPS SAS throughput.


iscsi_mb MBps Internet Small Computer System Interface (iSCSI) bandwidth.

iscsi_io IOPS iSCSI throughput.

write_cache_pc Percentage Write cache fullness. Updated every 10 seconds.

total_cache_pc Percentage Total cache fullness. Updated every 10 seconds.

vdisk_mb MBps Total Volume bandwidth.

vdisk_io IOPS Total Volume throughput.

vdisk_ms Milliseconds Average Volume latency.

mdisk_mb MBps MDisk (SAN and RAID) bandwidth.

mdisk_io IOPS MDisk (SAN and RAID) throughput.

mdisk_ms Milliseconds Average MDisk latency.

drive_mb MBps Drive bandwidth.

drive_io IOPS Drive throughput.

drive_ms Milliseconds Average drive latency.

vdisk_w_mb MBps Volume write bandwidth.

vdisk_w_io IOPS Volume write throughput.

vdisk_w_ms Milliseconds Average Volume write latency.

mdisk_w_mb MBps MDisk (SAN and RAID) write bandwidth.

mdisk_w_io IOPS MDisk (SAN and RAID) write throughput.

mdisk_w_ms Milliseconds Average MDisk write latency.

drive_w_mb MBps Drive write bandwidth.

drive_w_io IOPS Drive write throughput.

drive_w_ms Milliseconds Average drive write latency.

vdisk_r_mb MBps Volume read bandwidth.

vdisk_r_io IOPS Volume read throughput.

vdisk_r_ms Milliseconds Average Volume read latency.

mdisk_r_mb MBps MDisk (SAN and RAID) read bandwidth.

mdisk_r_io IOPS MDisk (SAN and RAID) read throughput.

mdisk_r_ms Milliseconds Average MDisk read latency.

drive_r_mb MBps Drive read bandwidth.

drive_r_io IOPS Drive read throughput.

drive_r_ms Milliseconds Average drive read latency.


Real-time performance statistics monitoring with the GUI


Use real-time statistics to monitor CPU utilization, volume, interface, and MDisk bandwidth of
your system and nodes. Each graph represents five minutes of collected statistics and
provides a means of assessing the overall performance of your system.

The real-time statistics are available from the IBM Spectrum Virtualize GUI. Click
Monitoring → Performance (as shown in Figure A-2) to open the Performance Monitoring
window.

Figure A-2 IBM SAN Volume Controller Monitoring menu

As shown in Figure A-3 on page 917, the Performance monitoring window is divided into the
following sections that provide utilization views for the following resources:
򐂰 CPU Utilization: The CPU utilization graph shows the current percentage of CPU usage
and peaks in utilization. It can also display compression CPU usage for systems with
compressed volumes.
򐂰 Volumes: Shows four metrics on the overall volume utilization graphics:
– Read
– Write
– Read latency
– Write latency
򐂰 Interfaces: The Interfaces graph displays data points for Fibre Channel (FC), iSCSI,
serial-attached SCSI (SAS), and IP Remote Copy interfaces. You can use this information
to help determine connectivity issues that might impact performance.
– Fibre Channel
– iSCSI
– SAS
– IP Remote Copy
򐂰 MDisks: Also shows four metrics on the overall MDisks graphics:
– Read
– Write
– Read latency
– Write latency


You can use these metrics to help determine the overall performance health of the volumes
and MDisks on your system. Consistent unexpected results can indicate errors in
configuration, system faults, or connectivity issues.

Figure A-3 Performance monitoring window

You can also select to view performance statistics for each of the available nodes of the
system, as shown in Figure A-4.

Figure A-4 Select a system node

You can also change the metric between MBps or IOPS, as shown in Figure A-5.

Figure A-5 Changing to MBps or IOPS

On any of these views, you can select any point with your cursor to know the exact value and
when it occurred. When you place your cursor over the timeline, it becomes a dotted line with
the various values gathered, as shown in Figure A-6 on page 918.


Figure A-6 Detailed resource utilization

For each of the resources, various values exist that you can view by selecting the value. For
example, as shown in Figure A-7, the four available fields are selected for the MDisks view:
Read, Write, Read latency, and Write latency. In our example, Read is not selected.

Figure A-7 Detailed resource utilization

Performance data collection and Tivoli Storage Productivity Center for Disk
Although you can obtain performance statistics in standard .xml files, the use of .xml files is a
less practical and more complicated method to analyze the IBM Spectrum Virtualize
performance statistics. Tivoli Storage Productivity Center for Disk is the supported IBM tool to
collect and analyze SVC performance statistics.

Tivoli Storage Productivity Center for Disk is installed separately on a dedicated system and it
is not part of the IBM Spectrum Virtualize bundle.

For more information about the use of Tivoli Storage Productivity Center to monitor your
storage subsystem, see SAN Storage Performance Management Using Tivoli Storage
Productivity Center, SG24-7364, which is available at this website:
http://www.redbooks.ibm.com/abstracts/sg247364.html?Open


SVC port quality statistics: Tivoli Storage Productivity Center for Disk Version 4.2.1
supports the SVC port quality statistics that are provided in SVC versions 4.3 and later.

Monitoring these metrics and the performance metrics can help you to maintain a stable
SAN environment.

Port masking


Figure A-8 shows the back panel of the SVC 2145-DH8 with two Fibre Channel host interface
adapters installed in slots 1 and 2.

Figure A-8 SAN Volume Controller 2145-DH8 with Fibre Channel host interface adapters

Benefits of Fibre Channel port masking


You can use a port mask to control the node target ports that a host can access, which
satisfies the following requirements:
򐂰 As part of a security policy to limit the set of WWPNs that can obtain access to any
volumes through an SVC port.
򐂰 As part of a scheme to limit the number of logins with mapped volumes visible to a host
multipathing driver, such as SDD, and therefore limit the number of host objects that are
configured without resorting to switch zoning.

The port mask is an optional parameter of the mkhost and chhost commands. The port mask
is four binary bits. Valid mask values range from 0000 (no ports enabled) to 1111 (all ports
enabled). For example, a mask of 0011 enables port 1 and port 2. The default value is 1111
(all ports enabled).
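For example, the following minimal sketch limits an existing host to node ports 1 and 2 by
using the mask format that is described above (the host name ESX_Host01 is illustrative):

IBM_2145:ITSO_SVC3:superuser>chhost -mask 0011 ESX_Host01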

Setting up Fibre Channel port masks is particularly useful when you have more than four
Fibre Channel ports on any node in the system, as it saves setting up a large number of SAN
zones.

Fibre Channel IO ports are logical ports, which can exist on Fibre Channel platform ports or
on FCoE platform ports.

There are two Fibre Channel port masks on a system. The local port mask controls
connectivity to other nodes in the same system, and the partner port mask controls
connectivity to nodes in remote, partnered systems. By default, all ports are enabled for both
local and partner connectivity.

The port masks apply to all nodes on a system; a different port mask cannot be set on nodes
in the same system. You do not have to have the same port mask on partnered systems.

Note: If all devices are zoned correctly during configuration, the use of port masking can
be optional.

The Fibre Channel ports correspond to the following bit numbers and WWPN formats:


HBA 1:
Fibre Channel port 1 = bit 1 = 50:07:68:01:4x:xx:xx
Fibre Channel port 2 = bit 2 = 50:07:68:01:3x:xx:xx
Fibre Channel port 3 = bit 3 = 50:07:68:01:1x:xx:xx
Fibre Channel port 4 = bit 4 = 50:07:68:01:2x:xx:xx

HBA 2:
Fibre Channel port 5 = bit 5 = 50:07:68:01:5x:xx:xx
Fibre Channel port 6 = bit 6 = 50:07:68:01:6x:xx:xx
Fibre Channel port 7 = bit 7 = 50:07:68:01:7x:xx:xx
Fibre Channel port 8 = bit 8 = 50:07:68:01:8x:xx:xx

Before you execute the port masking procedure, verify the current status of the port masks.
By default, both masks are all 1s (all ports enabled), as shown in Example A-4.

Example A-4 Default Fibre Channel port masks
IBM_2076 superuser>lssystem
# non-relevant output lines removed for clarity #
local_fc_port_mask
1111111111111111111111111111111111111111111111111111111111111111
partner_fc_port_mask
1111111111111111111111111111111111111111111111111111111111111111

Assuming that you want to isolate two ports for local node-to-node communication and two
ports for remote partnership communication, use the following procedure:
1. Identify the ports that you want to use for local node communication:
a. Port “:7x” (WWPN 50:07:68:01:7x:xx:xx), intended for local node communication
b. Port “:8x” (WWPN 50:07:68:01:8x:xx:xx), intended for local node communication
2. Identify the ports that you want to use for partnership communication:
a. Port “:5x” (WWPN 50:07:68:01:5x:xx:xx), intended for partnership communication
b. Port “:6x” (WWPN 50:07:68:01:6x:xx:xx), intended for partnership communication

Note: The intended result sets the ports as follows:

򐂰 Local (internode) communication occurs only across port “:7x” (50:07:68:01:7x:xx:xx)
and port “:8x” (50:07:68:01:8x:xx:xx)
򐂰 Remote partnership communication occurs only across port “:5x” (50:07:68:01:5x:xx:xx)
and port “:6x” (50:07:68:01:6x:xx:xx)

3. Set the port mask for local node communication by issuing the following command:

chsystem -localfcportmask 11000000


4. Set the port mask for remote (partnership) communication, such as Global Mirror and
other intercluster traffic across port “:5x” (50:07:68:01:5x:xx:xx) and port “:6x”
(50:07:68:01:6x:xx:xx), by issuing the following command:

chsystem -partnerfcportmask 00110000


5. These two commands set the port masks for local node communication and partnership
communication:

chsystem -localfcportmask 11000000

chsystem -partnerfcportmask 00110000

6. Issue the lssystem command again to see the result of the change.

The local port mask now ends with 11000000 and the partner port mask ends with 00110000,
as shown in Example A-5.

Example A-5 Fibre Channel port masks after the change
IBM_2076 superuser>lssystem
# non-relevant output lines removed for clarity #
local_fc_port_mask
0000000000000000000000000000000000000000000000000000000011000000
partner_fc_port_mask
0000000000000000000000000000000000000000000000000000000000110000

7. This concludes the steps for port masking: two ports are dedicated to local node
communication and two ports to remote partnership communication.


Appendix B. Terminology

In this appendix, we define the IBM System Storage SAN Volume Controller (SVC) terms that
are commonly used in this book.

To see the complete set of terms that relate to the SAN Volume Controller, see the Glossary
section of the IBM SAN Volume Controller Knowledge Center, which is available at this
website:
http://www.ibm.com/support/knowledgecenter/STPVGU/landing/SVC_welcome.html


Commonly encountered terms


This appendix includes the following SVC terminology.

Array
An ordered collection, or group, of physical devices (disk drive modules) that are used to
define logical volumes or devices. An array is a group of drives designated to be managed
with a Redundant Array of Independent Disks (RAID).

Asymmetric virtualization
Asymmetric virtualization is a virtualization technique in which the virtualization engine is
outside the data path and performs a metadata-style service. The metadata server contains
all the mapping and locking tables, and the storage devices contain only data. See also
“Symmetric virtualization” on page 936.

Asynchronous replication
Asynchronous replication is a type of replication in which control is given back to the
application as soon as the write operation is made to the source volume. Later, the write
operation is made to the target volume. See also “Synchronous replication” on page 936.

Automatic data placement mode


Automatic data placement mode is an Easy Tier operating mode in which the host activity on
all the volume extents in a pool is “measured”, a migration plan is created, and then
automatic extent migration is performed.

Back end
See “Front end and back end” on page 929.

Caching I/O Group


The caching I/O Group is the I/O Group in the system that performs the cache function for a
volume.

Call home
Call home is a communication link that is established between a product and a service
provider. The product can use this link to call IBM or another service provider when the
product requires service. With access to the machine, service personnel can perform service
tasks, such as viewing error and problem logs or initiating trace and dump retrievals.

Canister
A canister is a single processing unit within a storage system.

Capacity licensing
Capacity licensing is a licensing model that licenses features with a price-per-terabyte model.
Licensed features are FlashCopy, Metro Mirror, Global Mirror, and virtualization. See also
“FlashCopy” on page 928, “Metro Mirror” on page 932, and “Virtualization” on page 937.

Chain
A set of enclosures that are attached to provide redundant access to the drives inside the
enclosures. Each control enclosure can have one or more chains.


CHAP (Challenge Handshake Authentication Protocol)


An authentication protocol that protects against eavesdropping by encrypting the user name
and password.

Channel extender
A channel extender is a device that is used for long-distance communication that connects
other storage area network (SAN) fabric components. Generally, channel extenders can
involve protocol conversion to asynchronous transfer mode (ATM), Internet Protocol (IP), or
another long-distance communication protocol.

Child pool
Administrators can use child pools to control capacity allocation for volumes that are used for
specific purposes. Instead of being created directly from managed disks (MDisks), child pools
are created from existing capacity that is allocated to a parent pool. As with parent pools,
volumes can be created that specifically use the capacity that is allocated to the child pool.
Child pools are similar to parent pools with similar properties. Child pools can be used for
volume copy operation. Also, see “Parent pool” on page 932.

Clustered system (SAN Volume Controller)


A clustered system, which was known as a cluster, is a group of up to eight SVC nodes that
presents a single configuration, management, and service interface to the user.

Cold extent
A cold extent is an extent of a volume that does not get any performance benefit if it is moved
from a hard disk drive (HDD) to a Flash disk. A cold extent also refers to an extent that needs
to be migrated onto an HDD if it is on a Flash disk drive.

Compression
Compression is a function that removes repetitive characters, spaces, strings of characters,
or binary data from the data that is being processed and replaces characters with control
characters. Compression reduces the amount of storage space that is required for data.

Compression accelerator
A compression accelerator is hardware onto which the work of compression is offloaded from
the microprocessor.

Configuration node
While the cluster is operational, a single node in the cluster is appointed to provide
configuration and service functions over the network interface. This node is termed the
configuration node. This configuration node manages the data that describes the
clustered-system configuration and provides a focal point for configuration commands. If the
configuration node fails, another node in the cluster transparently assumes that role.

Consistency Group
A Consistency Group is a group of copy relationships between virtual volumes or data sets
that are maintained with the same time reference so that all copies are consistent in time. A
Consistency Group can be managed as a single entity.

Container
A container is a software object that holds or organizes other software objects or entities.


Contingency capacity
For thin-provisioned volumes that are configured to automatically expand, the unused real
capacity that is maintained. For thin-provisioned volumes that are not configured to
automatically expand, the difference between the used capacity and the new real capacity.

Copied state
Copied is a FlashCopy state that indicates that a copy was triggered after the copy
relationship was created. The Copied state indicates that the copy process is complete and
the target disk has no further dependency on the source disk. The time of the last trigger
event is normally displayed with this status.

Counterpart SAN
A counterpart SAN is a non-redundant portion of a redundant SAN. A counterpart SAN
provides all of the connectivity of the redundant SAN, but without the 100% redundancy. SVC
nodes are typically connected to a “redundant SAN” that is made up of two counterpart SANs.
A counterpart SAN is often called a SAN fabric.

Cross-volume consistency
A consistency group property that guarantees consistency between volumes when an
application issues dependent write operations that span multiple volumes.

Data consistency
Data consistency is a characteristic of the data at the target site where the dependent write
order is maintained to guarantee the recoverability of applications.

Data migration
Data migration is the movement of data from one physical location to another physical
location without the disruption of application I/O operations.

Dependent write operation


A write operation that must be applied in the correct order to maintain cross-volume
consistency.

Directed Maintenance Procedures


The fix procedures, which are also known as Directed Maintenance Procedures (DMPs),
ensure that you fix any outstanding errors in the error log. To fix errors, from the Monitoring
panel, click Events. The Next Recommended Action is displayed at the top of the Events
window. Select Run This Fix Procedure and follow the instructions.

Discovery
The automatic detection of a network topology change, for example, new and deleted nodes
or links.

Disk tier
MDisks (logical unit numbers (LUNs)) that are presented to the SVC cluster likely have
different performance attributes because of the type of disk or RAID array on which they are
installed. The MDisks can be on 15 K RPM Fibre Channel (FC) or serial-attached SCSI (SAS)
disk, Nearline SAS, or Serial Advanced Technology Attachment (SATA), or even Flash Disks.
Therefore, a storage tier attribute is assigned to each MDisk and the default is generic_hdd.
SVC 6.1 introduced a new disk tier attribute for Flash Disk, which is known as generic_ssd.


Distributed RAID
An alternative RAID scheme where the number of drives that are used to store the array can
be greater than the equivalent, typical RAID scheme. The same data stripes are distributed
across a greater number of drives, which increases the opportunity for parallel I/O and can
shorten drive rebuild times.

Easy Tier
Easy Tier is a volume performance function within the SVC that provides automatic data
placement of a volume’s extents in a multitiered storage pool. The pool normally contains a
mix of Flash Disks and HDDs. Easy Tier measures host I/O activity on the volume’s extents
and migrates hot extents onto the Flash Disks to ensure the maximum performance.

Encryption deadlock
The inability to access encryption keys to decrypt data. See also encryption recovery key.

Encryption key server / encryption key manager


An internal or external system that receives and then serves existing encryption keys or
certificates to a storage system.

Encryption of data at rest


Encryption of data at rest is the encryption of the data that is on the storage system.

Encryption recovery key


An encryption key that allows a method to recover from an encryption deadlock situation
where the normal encryption key servers are not available. See also encryption deadlock.

Enhanced Stretched Systems


A stretched system is an extended high availability (HA) method that is supported by the SVC
to enable I/O operations to continue after the loss of half of the system. Enhanced Stretched
Systems provide the following primary benefits. In addition to the automatic failover that
occurs when a site fails in a standard stretched system configuration, an Enhanced Stretched
System provides a manual override that can be used to choose which one of two sites
continues operation. Enhanced Stretched Systems intelligently route I/O traffic between
nodes and controllers to reduce the amount of I/O traffic between sites, and to minimize the
impact to host application I/O latency. Enhanced Stretched Systems include an
implementation of additional policing rules to ensure that the correct configuration of a
standard stretched system is used.

Evaluation mode
The evaluation mode is an Easy Tier operating mode in which the host activity on all the
volume extents in a pool are “measured” only. No automatic extent migration is performed.

Event (error)
An event is an occurrence of significance to a task or system. Events can include the
completion or failure of an operation, user action, or a change in the state of a process.
Before SVC V6.1, this situation was known as an error.

Event code
An event code is a value that is used to identify an event condition to a user. This value might
map to one or more event IDs or to values that are presented on the service panel. This value
is used to report error conditions to IBM and to provide an entry point into the service guide.


Event ID
An event ID is a value that is used to identify a unique error condition that was detected by the
SVC. An event ID is used internally in the cluster to identify the error.

Excluded condition
The excluded condition is a status condition. It describes an MDisk that the SVC decided is
no longer sufficiently reliable to be managed by the cluster. The user must issue a command
to include the MDisk in the cluster-managed storage.

Extent
An extent is a fixed-size unit of data that is used to manage the mapping of data between
MDisks and volumes. The size of the extent can range 16 MB - 8 GB in size.

External storage
External storage refers to managed disks (MDisks) that are SCSI logical units that are
presented by storage systems that are attached to and managed by the clustered system.

Failback
Failback is the restoration of an appliance to its initial configuration after the detection and
repair of a failed network or component.

Failover
Failover is an automatic operation that switches to a redundant or standby system or node in
a software, hardware, or network interruption. See also “Failback”.

Feature activation code


An alphanumeric code that activates a licensed function on a product.

Fibre Channel (FC) port logins


Fibre Channel (FC) port logins refer to the number of hosts that can see any one SVC node
port. The SVC has a maximum limit per node port of FC logins that are allowed.

Field-replaceable units
Field-replaceable units (FRUs) are individual parts that are replaced entirely when any one of
the unit’s components fails. They are held as spares by the IBM service organization.

FlashCopy
FlashCopy refers to a point-in-time copy where a virtual copy of a volume is created. The
target volume maintains the contents of the volume at the point in time when the copy was
established. Any subsequent write operations to the source volume are not reflected on the
target volume.

FlashCopy mapping
A FlashCopy mapping is a continuous space on a direct-access storage volume, which is
occupied by or reserved for a particular data set, data space, or file.

FlashCopy relationship
See “FlashCopy mapping” on page 928.


FlashCopy service
FlashCopy service is a copy service that duplicates the contents of a source volume on a
target volume. In the process, the original contents of the target volume are lost. See also
“Point-in-time copy” on page 933.

Flash drive
A data storage device that uses solid-state memory to store persistent data.

Flash module
A modular hardware unit containing flash memory, one or more flash controllers, and
associated electronics.

Front end and back end


The SVC takes MDisks to create pools of capacity from which volumes are created and
presented to application servers (hosts). The MDisks are presented by the storage controllers
at the back end of the SVC, in the SVC-to-back-end controller zones. The volumes that are
presented to the hosts are at the front end of the SVC.

Global Mirror
Global Mirror is a method of asynchronous replication that maintains data consistency across
multiple volumes within or across multiple systems. Global Mirror is generally used where
distances between the source site and target site cause increased latency beyond what the
application can accept.

Global Mirror with Change Volumes


Change volumes are used to record changes to the primary and secondary volumes of a
remote copy relationship. A FlashCopy mapping exists between a primary and its change
volume and a secondary and its change volume.

Grain
A grain is the unit of data that is represented by a single bit in a FlashCopy bitmap (64 KiB or
256 KiB) in the SVC. A grain is also the unit to extend the real size of a thin-provisioned
volume (32 KiB, 64 KiB, 128 KiB, or 256 KiB).

Hop
One segment of a transmission path between adjacent nodes in a routed network.

Host bus adapter (HBA)


A host bus adapter (HBA) is an interface card that connects a server to the SAN environment
through its internal bus system, for example, PCI Express.

Host ID
A host ID is a numeric identifier that is assigned to a group of host FC ports or Internet Small
Computer System Interface (iSCSI) host names for LUN mapping. For each host ID, SCSI IDs
are mapped to volumes separately. The intent is to have a one-to-one relationship between
hosts and host IDs, although this relationship cannot be policed.

Host mapping
Host mapping refers to the process of controlling which hosts have access to specific
volumes within a cluster. (Host mapping is equivalent to LUN masking.) Before SVC V6.1, this
process was known as VDisk-to-host mapping.


Hot extent
A hot extent is a frequently accessed volume extent that gets a performance benefit if it is
moved from an HDD onto a Flash Disk.

HyperSwap
Pertaining to a function that provides continuous, transparent availability against storage
errors and site failures, and is based on synchronous replication.

Image mode
Image mode is an access mode that establishes a one-to-one mapping of extents in the
storage pool (existing LUN or (image mode) MDisk) with the extents in the volume.

Image volume
An image volume is a volume in which a direct block-for-block translation exists from the
managed disk (MDisk) to the volume.

I/O Group
Each pair of SVC cluster nodes is known as an input/output (I/O) Group. An I/O Group has a
set of volumes that are associated with it that are presented to host systems. Each SVC node
is associated with exactly one I/O Group. The nodes in an I/O Group provide a failover and
failback function for each other.

Internal storage
Internal storage refers to an array of managed disks (MDisks) and drives that are held in
enclosures and in nodes that are part of the SVC cluster.

Internet Small Computer System Interface (iSCSI) qualified name (IQN)


Internet Small Computer System Interface (iSCSI) qualified name (IQN) refers to special
names that identify both iSCSI initiators and targets. IQN is one of the three name formats
that is provided by iSCSI. The IQN format is iqn.yyyy-mm.{reversed domain name}. For
example, the default for an SVC node is:
iqn.1986-03.com.ibm:2145.<clustername>.<nodename>.

Internet storage name service (iSNS)


iSNS refers to the Internet storage name service (iSNS) protocol that is used by a host
system to manage iSCSI targets and the automated iSCSI discovery, management, and
configuration of iSCSI and FC devices. It was defined in Request for Comments (RFC) 4171.

Inter-switch link (ISL) hop


An inter-switch link (ISL) is a connection between two switches and counted as one ISL hop.
The number of hops is always counted on the shortest route between two N-ports (device
connections). In an SVC environment, the number of ISL hops is counted on the shortest
route between the pair of nodes that are farthest apart. The SVC supports a maximum of
three ISL hops.

I/O group
A collection of volumes and node relationships that present a common interface to host
systems. Each pair of nodes is known as an input/output (I/O) group.


Latency
The time interval between the initiation of a send operation by a source task and the
completion of the matching receive operation by the target task. More generally, latency is the
time between a task initiating data transfer and the time that transfer is recognized as
complete at the data destination.

Least recently used (LRU)


Pertaining to an algorithm used to identify and make available the cache space that contains
the data that was least recently used.

Licensed capacity
The amount of capacity on a storage system that a user is entitled to configure.

License key
An alphanumeric code that activates a licensed function on a product.

License key file


A file that contains one or more licensed keys.

Lightweight Directory Access Protocol (LDAP)


Lightweight Directory Access Protocol (LDAP) is an open protocol that uses TCP/IP to
provide access to directories that support an X.500 model and that does not incur the
resource requirements of the more complex X.500 Directory Access Protocol (DAP). For
example, LDAP can be used to locate people, organizations, and other resources in an
Internet or intranet directory.

Local and remote fabric interconnect


The local fabric interconnect and the remote fabric interconnect are the SAN components that
are used to connect the local and remote fabrics. Depending on the distance between the two
fabrics, they can be single-mode optical fibers that are driven by long wave (LW) gigabit
interface converters (GBICs) or small form-factor pluggables (SFPs), or more sophisticated
components, such as channel extenders or special SFP modules that are used to extend the
distance between SAN components.

Local fabric
The local fabric is composed of SAN components (switches, cables, and so on) that connect
the components (nodes, hosts, and switches) of the local cluster together.

Logical unit (LU) and logical unit number (LUN)


The SCSI standards define a logical unit (LU) as an entity that exhibits disk-like behavior, for
example, a volume or an MDisk. The logical unit number (LUN) is the number that is used to
identify and address a logical unit.

Machine signature
A string of characters that identifies a system. A machine signature might be required to
obtain a license key.

Managed disk (MDisk)


A managed disk (MDisk) is a SCSI disk that is presented by a RAID controller and managed
by the SVC. The MDisk is not visible to host systems on the SAN.


Managed disk group (storage pool)


See “Storage pool (managed disk group)” on page 936.

Metro Global Mirror


Metro Global Mirror is a cascaded solution where Metro Mirror synchronously copies data to
the target site. This Metro Mirror target is the source volume for Global Mirror that
asynchronously copies data to a third site. This solution has the potential to provide disaster
recovery with no data loss at Global Mirror distances when the intermediate site does not
participate in the disaster that occurs at the production site.

Metro Mirror
Metro Mirror is a method of synchronous replication that maintains data consistency across
multiple volumes within the system. Metro Mirror is generally used when the write latency that
is caused by the distance between the source site and target site is acceptable to application
performance.

Mirrored volume
A mirrored volume is a single virtual volume that has two physical volume copies. The primary
physical copy is known within the SVC as copy 0 and the secondary copy is known within the
SVC as copy 1.

Node
An SVC node is a hardware entity that provides virtualization, cache, and copy services for
the cluster. The SVC nodes are deployed in pairs that are called I/O Groups. One node in a
clustered system is designated as the configuration node.

Node canister
A node canister is a hardware unit that includes the node hardware, fabric and service
interfaces, and serial-attached SCSI (SAS) expansion ports.

Node rescue
The process by which a node that has no valid software installed on its hard disk drive can
copy software from another node connected to the same Fibre Channel fabric.

Oversubscription
Oversubscription refers to the ratio of the sum of the traffic on the initiator N-port connections
to the traffic on the most heavily loaded ISLs, where more than one connection is used
between these switches. Oversubscription assumes a symmetrical network, and a specific
workload that is applied equally from all initiators and sent equally to all targets. A
symmetrical network means that all the initiators are connected at the same level, and all the
controllers are connected at the same level.

Parent pool
Parent pools receive their capacity from MDisks. All MDisks in a pool are split into extents of
the same size. Volumes are created from the extents that are available in the pool. You can
add MDisks to a pool at any time either to increase the number of extents that are available
for new volume copies or to expand existing volume copies. The system automatically
balances volume extents between the MDisks to provide the best performance to the
volumes.


Partnership
In Metro Mirror or Global Mirror operations, the relationship between two clustered systems.
In a clustered-system partnership, one system is defined as the local system and the other
system as the remote system.

Point-in-time copy
A point-in-time copy is the instantaneous copy that the FlashCopy service makes of the
source volume. See also “FlashCopy service” on page 929.

Preparing phase
Before you start the FlashCopy process, you must prepare a FlashCopy mapping. The
preparing phase flushes a volume’s data from cache in preparation for the FlashCopy
operation.

Primary volume
In a stand-alone Metro Mirror or Global Mirror relationship, the target of write operations
issued by the host application.

Private fabric
Configure one SAN per fabric so that it is dedicated for node-to-node communication. This
SAN is referred to as a private SAN.

Public fabric
Configure one SAN per fabric so that it is dedicated for host attachment, storage system
attachment, and remote copy operations. This SAN is referred to as a public SAN. You can
configure the public SAN to allow SVC node-to-node communication also. You can optionally
use the -localfcportmask parameter of the chsystem command to constrain the node-to-node
communication to use only the private SAN.

Quorum disk
A disk that contains a reserved area that is used exclusively for system management. The
quorum disk is accessed when it is necessary to determine which half of the clustered system
continues to read and write data. Quorum disks can either be MDisks or drives.

Quorum index
The quorum index is the pointer that indicates the order that is used to resolve a tie. Nodes
attempt to lock the first quorum disk (index 0), followed by the next disk (index 1), and finally
the last disk (index 2). The tie is broken by the node that locks them first.

RACE engine
The RACE engine compresses data on volumes in real time with minimal impact on
performance. See “Compression” on page 925 or “Real-time Compression” on page 933.

Real capacity
Real capacity is the amount of storage that is allocated to a volume copy from a storage pool.

Real-time Compression
Real-time Compression is an IBM integrated software function for storage space efficiency.
The RACE engine compresses data on volumes in real time with minimal impact on
performance.


Redundant Array of Independent Disks (RAID)


RAID stands for a Redundant Array of Independent Disks, with two or more physical disk
drives that are combined in an array in a certain way, which incorporates a RAID level for
failure protection or better performance. The most common RAID levels are 0, 1, 5, 6, and 10.

RAID 0
RAID 0 is a data striping technique that is used across an array and no data protection is
provided.

RAID 1
RAID 1 is a mirroring technique that is used on a storage array in which two or more identical
copies of data are maintained on separate mirrored disks.

RAID 10
RAID 10 is a combination of a RAID 0 stripe that is mirrored (RAID 1). Therefore, two identical
copies of striped data exist; no parity exists.

RAID 5
RAID 5 is an array that has a data stripe, which includes a single logical parity drive. The
parity check data is distributed across all the disks of the array.

RAID 6
RAID 6 is a RAID level that has two logical parity drives per stripe, which are calculated with
different algorithms. Therefore, this level can continue to process read and write requests to
all of the array’s virtual disks in the presence of two concurrent disk failures.

Rebuild area
Reserved capacity that is distributed across all drives in a redundant array of drives. If a drive
in the array fails, the lost array data is systematically restored into the reserved capacity,
returning redundancy to the array. The duration of the restoration process is minimized
because all drive members simultaneously participate in restoring the data. See also
distributed RAID.

Redundant storage area network (SAN)


A redundant storage area network (SAN) is a SAN configuration in which there is no single
point of failure (SPoF); therefore, data traffic continues no matter what component fails.
Connectivity between the devices within the SAN is maintained (although possibly with
degraded performance) when an error occurs. A redundant SAN design is normally achieved
by splitting the SAN into two independent counterpart SANs (two SAN fabrics), so that if one
path of the counterpart SAN is destroyed, the other counterpart SAN path keeps functioning.

Relationship
In Metro Mirror or Global Mirror, a relationship is the association between a master volume
and an auxiliary volume. These volumes also have the attributes of a primary or secondary
volume.

Reliability, availability, and serviceability (RAS)


Reliability, availability, and serviceability (RAS) are a combination of design methodologies,
system policies, and intrinsic capabilities that, when taken together, balance improved
hardware availability with the costs that are required to achieve it.


Reliability is the degree to which the hardware remains free of faults. Availability is the ability
of the system to continue operating despite predicted or experienced faults. Serviceability is
how efficiently and nondisruptively broken hardware can be fixed.

Remote fabric
The remote fabric is composed of SAN components (switches, cables, and so on) that
connect the components (nodes, hosts, and switches) of the remote cluster together.
Significant distances can exist between the components in the local cluster and those
components in the remote cluster.

Secondary volume
Pertaining to remote copy, the volume in a relationship that contains a copy of the data written
by the host application to the primary volume. See also “Relationship.”

Serial-attached Small Computer System Interface (SCSI) (SAS)


Serial-attached SCSI (SAS) is a method that is used to access computer peripheral devices. It
employs a serial (one bit at a time) means of digital data transfer over thin cables. The method
is specified in the American National Standards Institute (ANSI) standard called SAS. In the
business enterprise, SAS is useful for access to mass storage devices, particularly external
hard disk drives.

Service Location Protocol


The Service Location Protocol (SLP) is an Internet service discovery protocol that allows
computers and other devices to find services in a local area network (LAN) without prior
configuration. It is defined in Request for Comments (RFC) 2608.

Small Computer Systems Interface (SCSI)


Small Computer Systems Interface (SCSI) is an ANSI-standard electronic interface with
which personal computers can communicate with peripheral hardware, such as disk drives,
tape drives, CD-ROM drives, printers, and scanners, faster and more flexibly than with
previous interfaces.

Snapshot
A snapshot is an image backup type that consists of a point-in-time view of a volume.

Solid-state disk (SSD)


A solid-state disk (SSD) or Flash Disk is a disk that is made from solid-state memory and
therefore has no moving parts. Most SSDs use NAND-based flash memory technology. It is
defined to the SVC as a disk tier generic_ssd.

Space efficient
See “Thin provisioning” on page 937.

Spare
An extra storage component, such as a drive or tape, that is predesignated for use as a
replacement for a failed component.

Spare goal
The optimal number of spares that are needed to protect the drives in the array from failures.
The system logs a warning event when the number of spares that protect the array drops
below this number.


Space-efficient virtual disk (VDisk)


See “Thin-provisioned volume” on page 936.

Stand-alone relationship
In FlashCopy, Metro Mirror, and Global Mirror, relationships that do not belong to a
consistency group and that have a null consistency-group attribute.

Statesave
Binary data collection that is used in problem determination.

Storage area network (SAN)


A storage area network (SAN) is a dedicated storage network that is tailored to a specific
environment, which combines servers, systems, storage products, networking products,
software, and services.

Storage area network (SAN) Volume Controller (SVC)


The IBM Storage System SAN Volume Controller (SVC) is an appliance that is designed for
attachment to various host computer systems. The SVC performs block-level virtualization of
disk storage.

Storage pool (managed disk group)


A storage pool is a collection of storage capacity, which is made up of managed disks
(MDisks), that provides the pool of storage capacity for a specific set of volumes. A storage
pool can contain more than one tier of disk, which is known as a multitier storage pool and a
prerequisite of Easy Tier automatic data placement. Before SVC V6.1, this storage pool was
known as a managed disk group (MDG).

Stretched system
A stretched system is an extended high availability (HA) method that is supported by SVC to
enable I/O operations to continue after the loss of half of the system. A stretched system is
also sometimes referred to as a split system. One half of the system and I/O Group is usually
in a geographically distant location from the other, often 10 kilometers (6.2 miles) or more. A
third site is required to host a storage system that provides a quorum disk.

Striped
Pertaining to a volume that is created from multiple managed disks (MDisks) that are in the
storage pool. Extents are allocated on the MDisks in the order specified.

Symmetric virtualization
Symmetric virtualization is a virtualization technique in which the physical storage, in the form
of a Redundant Array of Independent Disks (RAID), is split into smaller chunks of storage
known as extents. These extents are then concatenated, by using various policies, to make
volumes. See also “Asymmetric virtualization” on page 924.

Synchronous replication
Synchronous replication is a type of replication in which the application write operation is
made to both the source volume and target volume before control is given back to the
application. See also “Asynchronous replication” on page 924.

Thin-provisioned volume
A thin-provisioned volume is a volume that allocates storage when data is written to it.


Thin provisioning
Thin provisioning refers to the ability to define storage, usually a storage pool or volume, with
a “logical” capacity size that is larger than the actual physical capacity that is assigned to that
pool or volume. Therefore, a thin-provisioned volume is a volume with a virtual capacity that
differs from its real capacity. Before SVC V6.1, this thin-provisioned volume was known as
space efficient.

T10 DIF
T10 DIF is a “Data Integrity Field” extension to SCSI to allow for end-to-end protection of data
from host application to physical media.

Unique identifier (UID)


A unique identifier is an identifier that is assigned to storage-system logical units when they
are created. It is used to identify the logical unit regardless of the logical unit number (LUN),
the status of the logical unit, or whether alternate paths exist to the same device. Typically, a
UID is used only once.

Virtualization
In the storage industry, virtualization is a concept in which a pool of storage is created that
contains several storage systems. Storage systems from various vendors can be used. The
pool can be split into volumes that are visible to the host systems that use them. See also
“Capacity licensing” on page 924.

Virtualized storage
Virtualized storage is physical storage that has virtualization techniques applied to it by a
virtualization engine.

Virtual local area network (VLAN)


Virtual local area network (VLAN) tagging separates network traffic at the layer 2 level for
Ethernet transport. The system supports VLAN configuration on both IPv4 and IPv6
connections.

Virtual storage area network (VSAN)


A virtual storage area network (VSAN) is a fabric within the storage area network (SAN).

Vital product data (VDP or VPD)


Vital product data (VDP or VPD) is information that uniquely defines system, hardware,
software, and microcode elements of a processing system.

Volume
A volume is an SVC logical device that appears to host systems that are attached to the SAN
as a SCSI disk. Each volume is associated with exactly one I/O Group. A volume has a
preferred node within the I/O Group. Before SVC 6.1, this volume was known as a VDisk or
virtual disk.

Volume copy
A volume copy is a physical copy of the data that is stored on a volume. Mirrored volumes
have two copies. Non-mirrored volumes have one copy.


Volume protection
To prevent active volumes or host mappings from inadvertent deletion, the system supports a
global setting that prevents these objects from being deleted if the system detects that they
have recent I/O activity. When you delete a volume, the system checks to verify whether it is
part of a host mapping, FlashCopy mapping, or remote-copy relationship. In these cases, the
system fails to delete the volume, unless the -force parameter is specified. Using the -force
parameter can lead to unintentional deletions of volumes that are still active. Active means
that the system detected recent I/O activity to the volume from any host.
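The following minimal CLI sketch shows how this protection might be enabled; the parameter
names are assumed from the vdisk_protection_enabled and vdisk_protection_time fields that
lssystem displays, so verify them against the chsystem reference for your code level:

chsystem -vdiskprotectionenabled yes   # turn volume protection on (assumed parameter name)
chsystem -vdiskprotectiontime 30       # treat I/O within the last 30 minutes as recent (assumed parameter name)
rmvdisk VOL01                          # now fails for a recently active volume unless -force is added (VOL01 is illustrative)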

Write-through mode
Write-through mode is a process in which data is written to a storage device at the same time
that the data is cached.


Appendix C. Stretched Cluster


In this appendix, we briefly describe the IBM SAN Volume Controller Stretched Cluster
(formerly known as a Split I/O Group) concepts and main functions.

We also explain the term Enhanced Stretched Cluster (ESC), and the HyperSwap technology
introduced in the IBM Spectrum Virtualize V7.5, and how they differ from each other.

Technical details or implementation guidelines are not presented in this appendix as they are
described in separate publications. For more information, consult the topic “Technical
guidelines and Publications” on page 964, which contains references for implementation with
VMware environments, implementing SVC Stretched cluster with AIX virtualized or clustered
environments, and Storwize V7000 HyperSwap implementation.

Also, see IBM Storwize V7000, Spectrum Virtualize, HyperSwap, and VMware implementation,
SG24-8317.


Stretched cluster overview


Business continuity and continuous application availability are among the top requirements
for many organizations. Advances in virtualization, storage, and networking have made
enhanced business continuity possible. Information technology solutions can now manage
both planned and unplanned outages, and provide the flexibility and cost efficiencies that are
available from cloud-computing models. Within standard implementations of SAN Volume
Controller, all the I/O Group nodes are physically installed in the same location.

To meet the different high availability needs that customers have, the stretched system
configuration was introduced, in which the nodes of the same I/O Group are physically located
at different sites. When implemented with mirroring technologies, such as volume mirroring or
Copy Services, these configurations can be used to maintain access to data on the system in
the event of power failures or site-wide outages.

Stretched Clusters are considered High Availability solutions because both sites work as
instances of the production environment (there is no standby location). Combined with
application and infrastructure layers of redundancy, they can provide enough protection for
data that requires availability and resiliency.

When SAN Volume Controller was first introduced, the maximum supported distance
between nodes within an I/O Group was 100 meters. With the evolution of the code and the
introduction of new features, SVC V5.1 introduced support for the Stretched Cluster
configuration, where nodes within an I/O Group can be separated by a distance of up to 10
km by using specific configurations.

With V6.3 we began supporting Stretched Cluster configurations, where nodes can be
separated by a distance of up to 300 km, in specific configurations using FC switch Inter
Switch Links (ISLs) between different locations.

V7.2 introduced the Enhanced Stretched Cluster feature, which further improved the stretched
cluster configurations by introducing the site awareness concept for nodes and external storage,
and the Disaster Recovery (DR) feature, which allows rolling disaster scenarios to be managed
effectively.

With Spectrum Virtualize V7.5, the site awareness concept was extended to hosts, allowing
more efficient host I/O traffic through the SAN and easier host path management.

Spectrum Virtualize V7.6 introduces a new feature for stretched systems, the IP quorum
application. When an IP-based quorum application is used as the quorum device for the third
site, no Fibre Channel connectivity is required; the quorum application is a Java application
that runs on a host at the third site. However, there are strict requirements on the IP network
and some disadvantages to using IP quorum applications. Unlike quorum disks, all IP quorum
applications must be reconfigured and redeployed to hosts when certain aspects of the
system configuration change.

IP Quorum details can be found in the IBM SAN Volume Controller Knowledge Center:

https://ibm.biz/BdHnKF
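As an illustrative sketch only (host names, file paths, and the copy method are assumptions
rather than steps taken from this book), deploying the IP quorum application typically involves
generating the Java application on the cluster, copying it to the third-site host, and starting it
there:

mkquorumapp                                                    # generate ip_quorum.jar in the /dumps directory on the cluster
scp superuser@cluster_ip:/dumps/ip_quorum.jar /opt/ipquorum/   # copy the file to the third-site host (path is illustrative)
java -jar /opt/ipquorum/ip_quorum.jar                          # run the quorum application on the third-site host
lsquorum                                                       # back on the cluster, check that the IP quorum device is listed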

Table C-1 provides an overview of the features that are supported by the SVC stretched cluster
in each code version.


Table C-1 Stretched cluster features in SVC versions

Feature (values listed for code levels 5.1, 6.2, 6.3, 6.4, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6):

Non-ISL stretched cluster; separate links between SVC nodes and remote SAN switches; up to
10 km (6.2 miles); passive CWDM and passive dense wavelength division multiplexing (DWDM):
Y, Y, Y, Y, Y, Y, Y, Y, Y, Y

Dynamic quorum disk V2:
N, Y, Y, Y, Y, Y, Y, Y, Y, Y

Non-ISL stretched cluster up to 40 km (24.8 miles):
N, Per quote, Y, Y, Y, Y, Y, Y, Y, Y

ISL stretched cluster with private and public fabric: up to 300 km (186.4 miles):
N, N, Y, Y, Y, Y, Y, Y, Y, Y

Active DWDMs and CWDMs for non-ISL and ISL stretched cluster:
N, N, Y, Y, Y, Y, Y, Y, Y, Y

Metro Mirror support with stretched cluster:
N, N, Y, Y, Y, Y, Y, Y, Y, Y

ISL stretched cluster using Fibre Channel over Ethernet (FCoE and FCIP) ports for private fabrics:
N, N, N, Y, Y, Y, Y, Y, Y, Y

Support of eight FC ports per SVC node:
N, N, N, Per quote (a), Y, Y, Y, Y, Y, Y

Enhanced mode:
N, N, N, N, N, Y, Y, Y, Y, Y

Manual failover capability:
N, N, N, N, N, Y, Y, Y, Y, Y

Host site awareness:
N, N, N, N, N, N, N, N, Y, Y

IP quorum device:
N, N, N, N, N, N, N, N, N, Y

a. Available only on version 6.4.1 (6.4.0 not supported)

With a stretched cluster, the two nodes in an I/O Group are separated between two
locations, and a copy of the volume is stored at each location. This configuration means that
you can lose either the SAN or power at one location and access to the disks remains
available at the alternative location. Using this configuration requires clustering software at
the application and server layers to fail over to a server at the alternative location and resume
access to the disks. The SAN Volume Controller keeps both copies of the storage in
synchronization, and the cache is mirrored between both nodes. Therefore, the loss of one
location causes no disruption to the alternative location.

As with any clustering solution, avoiding a split-brain situation (where nodes no longer can
communicate with each other) requires a tie-break mechanism. SAN Volume Controller is no
exception. The SAN Volume Controller uses a tie-break mechanism that is facilitated through
the implementation of a quorum disk. The SAN Volume Controller uses three quorum disks
from the Managed Disks that are attached to the cluster for this purpose. Usually, the
management of the quorum disks is transparent to SAN Volume Controller users. In
an Enhanced Stretched Cluster configuration, the placement of the quorum disks is handled


automatically, and if needed you can assign them manually to ensure the active quorum disk
is in a third location. This configuration must be done to ensure the survival of one location if
a failure occurs at another location.
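As a minimal CLI sketch (the MDisk name and quorum index are illustrative), the quorum
candidates can be listed and, if needed, the active quorum disk can be moved manually to an
MDisk at the third site:

lsquorum                        # list the three quorum disk candidates and which one is active
chquorum -mdisk mdisk_site3 2   # assign an MDisk at the third site to quorum index 2
chquorum -active 2              # make quorum index 2 the active tie-break quorum disk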

Starting in V6.3, there were significant enhancements for a Stretched System in the following
configurations:
򐂰 No inter-switch link (ISL) configuration:
– Passive Wavelength Division Multiplexing (WDM) devices can be used between both
sites.
– Each SVC node should be attached in the local and remote failure domain directly
(local and remote sites).
– No ISLs can be used between the SVC nodes (similar to the version 5.1 supported
configuration).
– The distance extension is to up to 40 km (24.8 miles).
򐂰 ISL configuration:
– ISLs are allowed between the SVC nodes (not allowed with releases earlier than V6.3).
– Each SVC node should be attached only to local Fibre Channel switches and ISLs
configured between failure domains (node-to-node traffic)
– Use of two separate SANs must be considered (Private and Public SANs).
– The maximum distance is similar to Metro Mirror (MM) distances (up to 300 km).
– The physical requirements are similar to MM requirements.
– ISL distance extension is allowed with active and passive WDM devices.
– Failure domain 3 (quorum site) must be either Fibre Channel or FCIP attached but the
response time to the quorum disk cannot exceed 80 ms
򐂰 FCIP configuration:
– FCIP links are used between failure domains (FCIP support was introduced in version
6.4).
– You must have at least two FCIP tunnels between the failure domains.
– Use of two separate SANs must be considered (Private and Public SANs).
– Failure domain 3 (quorum site) must be either Fibre Channel or FCIP attached but the
response time to the quorum disk cannot exceed 80 ms.
– A guaranteed minimum bandwidth of 2 MBps is required for node-to-quorum traffic.
– No more than one ISL hop is supported for connectivity between failure domains.

Common terms used in Stretched Cluster Configurations


In an SVC Stretched cluster implementation you will find several terms used in the
configuration steps and documentation. In this section we briefly explain the most common
ones.

Topology
The topology is the parameter that defines how the SVC cluster is designed and which
functions and features are available in the system. The topology can be set as:


򐂰 Standard (default):
There are two applications of the standard topology. In the first, the SVC nodes are deployed
in the same physical location and are not stretched (usually when remote high availability is
not required). In the second, the SVC nodes are stretched across different locations (sites)
and are members of the same I/O group. When the SVC nodes are physically stretched but
the topology is still set as standard, the configuration is called a Standard Stretched Cluster,
which means that the enhanced features are not enabled. For more details about the
comparison of the standard stretched cluster and the enhanced stretched cluster, refer to
“Standard and Enhanced Stretched Cluster comparison” on page 951.
򐂰 Stretched (enhanced)
In a stretched topology configuration, all enhanced features are enabled and each site is
defined as an independent failure domain. This physical separation means that if one site
experiences a failure, the other site can continue to operate without disruption. You must
also configure a third site to host a quorum device that provides an automatic tie-break in
the event of a potential link failure between the two main sites. The main sites can be in the
same room or across rooms in the data center, in buildings on the same campus, or in
buildings in different cities. Different kinds of sites protect against different types of failures.
򐂰 HyperSwap
Introduced in Spectrum Virtualize V7.5, the HyperSwap topology places each I/O Group (node
pair) at a different site (or failure domain). This feature, combined with Remote Copy
services, can be deployed to provide redundancy at a different level, where a volume can
be active on two different I/O Groups.

Note: IBM Storwize HyperSwap requires additional license features to enable Remote
Copy Services. For more details, refer to IBM Storwize V7000, Spectrum Virtualize,
HyperSwap, and VMware implementation, SG24-8317

The topology parameter can be checked in the GUI: System → Action → Properties as
shown in Figure 11-383 on page 943.

Figure 11-383 System topology set as Stretch

You can also use the lssystem command to show the topology parameter, as shown in
Example 11-4.


Example 11-4 Example of lssystem output showing the topology parameter


IBM_2145:ITSO_SVC_DH8:superuser>lssystem
id 000002007FA02102
name ITSO_SVC_DH8
...
lines omitted for brevity
...
local_fc_port_mask
1111111111111111111111111111111111111111111111111111111111111111
partner_fc_port_mask
1111111111111111111111111111111111111111111111111111111111111111
high_temp_mode off
topology stretched
topology_status dual_site
rc_auth_method none
vdisk_protection_time 15
vdisk_protection_enabled no
product_name IBM SAN Volume Controller
odx off
max_replication_delay 0

Sites and Failure Domains


In a stretched system configuration, each site is defined as an independent failure domain,
which means that if one site experiences a failure, the other site can continue to operate
without disruption. These failure domains are also referred to as failure sites; they are limited
by boundaries so that a failure in one component of a site does not propagate to the others,
keeping the system available.

The components that comprise an ESC configuration must span three independent failure
domains. Two failure domains contain SVC nodes and the storage controllers that contain
customer data. The third failure domain contains a storage controller where the active quorum
disk is located.

V7.2 introduced a site awareness concept for nodes and controller. V7.5 introduced the site
awareness concept for hosts too.
򐂰 Site awareness can be used only when topology is set to stretched.
򐂰 The default names for the sites are site1, site2, and site3. Sites 1 and 2 are where the two
halves of the ESC are located. Site 3 is the optional third site for a quorum tie-breaker
disk. You can set a name for each site if you prefer.
򐂰 A site field is added to nodes, controllers, and hosts. The nodes and controllers must have
their sites assigned in advance, before you set the topology to stretched.
򐂰 Nodes and hosts can be assigned only to sites 1 or 2. Controllers can be assigned to any
of the 3 sites.
򐂰 The site property for a controller adjusts the I/O routing and error reporting for connectivity
between nodes and the associated MDisks. These changes are effective for any MDisk
controller that has a site defined, even if the DR feature is disabled.
򐂰 The site property for a host adjusts the I/O routing and error reporting for connectivity
between hosts and the nodes in the same site. These changes are effective only at SAN
login time, meaning any changes potentially will require a host reboot or FC HBA rescan,
depending on the operating system used.


To check the site configuration, go to System Overview → Action → Rename Sites.


Figure 11-384 shows the rename sites window.

Figure 11-384 Rename site window

You can also use the lssite command to show the site names, as shown in Example 11-5.

Example 11-5 List site command output


IBM_2145:ITSO_SVC_DH8:superuser>lssite
id site_name
1 ITSO_DC1
2 ITSO_DC2
3 ITSO_DC3
IBM_2145:ITSO_SVC_DH8:superuser>
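The default site names can also be changed from the CLI with the chsite command, as in this
short sketch (the names match the lab example above and are otherwise arbitrary):

chsite -name ITSO_DC1 1
chsite -name ITSO_DC2 2
chsite -name ITSO_DC3 3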

Disaster Recovery (DR) feature


In stretched cluster configurations, the DR feature is enabled when the topology is set to
stretched.

As discussed in the previous topics of this book, a stretched cluster is configured with half of
the nodes at each site and a quorum device at a third location. If an outage occurs at either
site, the nodes at the other site access the quorum device and continue operation without any
intervention. If connectivity between the two sites is lost, whichever nodes access the quorum
device first continue operation. For disaster recovery purposes, a user might want to enable
access to the storage at the site that lost the race to access the quorum device.

Use the satask overridequorum command to enable access to the storage at the secondary
site. This feature is only available if the system was configured by assigning sites to nodes
and storage controllers, and changing the system topology to stretched.

If the user executed disaster recovery on one site and then powered up the remaining, failed
site (which had the configuration node at the time of the power down, that is, the disaster),
the cluster would assert itself as designed. The user should:
򐂰 Remove the connectivity of the nodes from the site that experienced the outage.
򐂰 Power up or recover those nodes.
򐂰 Run the satask leavecluster -force or svctask rmnode command for all the nodes in the
cluster.


򐂰 Bring the nodes into candidate state, and then connect them to the site on which the site
disaster recovery feature was executed.
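The following sketch outlines a possible command sequence behind the steps above; node
names are illustrative, and the exact order should be verified against the recovery procedure
for your code level:

satask overridequorum          # on a node at the site that lost the tie-break, via its service IP,
                               # only after confirming that the other site is not running as a cluster
satask leavecluster -force     # run against each recovered node of the failed site so it returns to candidate state
svctask rmnode node3           # remove the stale node definitions from the surviving cluster (illustrative names)
svctask rmnode node4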

Volume Mirroring
Volume mirroring is a feature of Spectrum Virtualize software, which can be used without
additional licensing and it is independent of stretched cluster configurations.

The ESC configuration uses the benefits of the volume mirroring function, which allows the
creation of one volume with two copies of MDisk extents. If the two copies are placed in
different pools (and on different controllers), the two data copies allow volume mirroring to
eliminate the impact to volume availability if one or more MDisks (or a controller) fails. The
resynchronization between both copies is incremental and is started automatically. A mirrored
volume has the same functions and behavior as a standard volume.
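As an illustrative sketch (the pool, volume, and I/O group names are assumptions), a mirrored
volume with one copy per site can be created in a single mkvdisk call, or a second copy can
be added to an existing volume with addvdiskcopy:

mkvdisk -name HA_vol01 -iogrp 0 -mdiskgrp Pool_Site1:Pool_Site2 -size 100 -unit gb -copies 2
addvdiskcopy -mdiskgrp Pool_Site2 Existing_vol01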

All operations that can be run on non-mirrored volumes can also be run on mirrored volumes.
These operations include migration and expand or shrink operations.

As with non-mirrored volumes, each mirrored volume is owned by the preferred node within
the I/O group. Therefore, the mirrored volume goes offline if the I/O group goes offline.

Spectrum Virtualize software volume mirroring functionality implements a read algorithm with
one copy that is designated as the primary (copy 0) for all read operations. Spectrum
Virtualize software reads the data from the primary copy and does not automatically distribute
the read requests across both copies. The first copy that is created becomes the primary by
default.

Starting with V7.2, the primary copy concept is overridden: read operations are run locally,
according to the site attributes that are assigned to each SVC node, controller, and host
(hosts from V7.5).

Write operations run on both mirrored copies. The storage controller with the lowest
performance determines the response time between the SVC and the storage controller
back-end. The SVC cache can hide high back-end response times from the host up to a
certain level.

If a back-end write fails or a copy goes offline, a bitmap file is used to track out-of-sync grains.
As soon as the missing copy is back online, Spectrum Virtualize software evaluates the
changed bitmap and automatically re-synchronizes both copies.

Volume access is not affected by the re-synchronization process and is run concurrently with
host I/O.

Public and Private SAN


When using stretched cluster configurations with ISLs, the use of separate SAN fabrics is
highly recommended.

Using ISLs for node-to-node communication requires configuring two separate SANs, each of
them with two separate redundant fabrics.

Each SAN consists of at least one fabric that spans both production sites. At least one fabric
of the public SAN includes also the quorum site. You can use different approaches to
configure private and public SANs:
򐂰 Use dedicated Fibre Channel switches for each SAN.
򐂰 Use separate virtual fabrics or virtual SANs for each SAN.


Note: ISLs must not be shared between private and public virtual fabrics.

Private SAN:
򐂰 SAN fabric dedicated for SVC node-to-node communication
򐂰 It must contain at least two ports per node (one per fabric)
򐂰 Must meet the bandwidth requirements if using Ethernet layers (FCoE or FCIP)
򐂰 Maximum latency between nodes of 80 ms (round trip)

Public SAN:
򐂰 SAN fabric dedicated for host and storage controller attachment (including quorum).
򐂰 Must have physical redundancy (dedicated switches) or logical redundancy (virtual fabrics
or virtual SANs).
򐂰 Must meet controller zoning recommendations described in Chapter 3, “Planning and
configuration” on page 83 and also refer to best practices recommendations in IBM
System Storage SAN Volume Controller and Storwize V7000 Best Practices and
Performance Guidelines, SG24-7521.
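If node-to-node traffic must be constrained to the ports that are cabled to the private SAN, the
local FC port mask can be used. The following is a hedged sketch only: the mask value is
purely illustrative, and the parameter name is assumed from the local_fc_port_mask field that
lssystem displays, so check the chsystem reference for your code level:

chsystem -localfcportmask 1100   # allow node-to-node traffic only on FC ports 3 and 4 (rightmost bit = port 1)
lssystem                         # confirm the resulting value in the local_fc_port_mask field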

Stretched cluster deployment planning


Besides the business requirements of a high availability environment, there are
considerations regarding the stretched cluster implementation and some prerequisites that
determine how it can fit into the planned solution. The topics below describe important
prerequisites and planning strategies for an SVC stretched cluster implementation.
򐂰 ISL, non-ISL and FCIP implementations:
You must review the possible configurations associated with the equipment to be
implemented in your infrastructure.
In this analysis, consider the availability required when implementing the SAN
components (switches, logical or physical distinct fabrics, ISLs, FCIP tunnels, DWDM
devices, connectivity, bandwidth).
򐂰 Sites and failure domains:
The SVC stretched cluster must be implemented in three different failure domains so it can
provide the right level of resiliency and availability. Usually, the three failure domains are in
different locations (sites), so you must consider where the SVC nodes, storage backend
controllers and quorum devices will be placed to meet the stretched configuration
requirements.
򐂰 Quorum:
In SVC stretched cluster implementations the active quorum disk must be implemented at
the third site.
򐂰 Bandwidth sizing and latency considerations:
The connection between SVC nodes must ensure a minimum bandwidth of 4 Gbps or 2
times the peak host write I/O workload, whichever is higher.
To avoid performance bottlenecks, each SVC node requires the bandwidth of two ports that
operate at full speed (for example, 2 x 8 Gbps) to the other nodes within the same
cluster.
ISL congestion can affect cluster communication. One non-shared ISL per I/O group per
fabric is needed to prevent congestion.


Quorum disk storage controllers must be Fibre Channel or FCIP attached and provide less
than 80 ms response times with a guaranteed bandwidth of greater than 2 MBps.
򐂰 Backend storage and quorum controllers:
Consider the implementation of storage backend controllers by placing them in different
sites or failure domains to provide volume mirroring and contingency for data.
Do not connect a storage system in one site directly to a switch fabric in the other site.
The storage system at the third site must support extended quorum disks. More
information is available in the interoperability matrixes.

Note: Stretched system configurations with active/passive controllers such as IBM


DS5000, IBM DS4000, and IBM DS3000 systems must be configured with sufficient
connections such that all sites have direct access to both external storage systems.
Quorum access for stretched system is possible only through the current owner of the
MDisk that is being used as the active quorum disk.

򐂰 Volume Mirroring:
Volume mirroring requires no additional licensing and must be used to increase data
availability. When creating mirrored volumes, consider the amount of space required to
maintain both copies available and synchronized.
򐂰 Internal storage for SVC nodes (expansion enclosures):
Be aware that use of SSDs inside SVC nodes or expansion enclosures attached to SVC
nodes of a stretched cluster deployment is not supported.
򐂰 Hosts placement planning:
With the introduction of site awareness on hosts in the Spectrum Virtualize V7.5, you must
plan where hosts will be deployed so they can access the local storage controller placed in
the same site or failure domain.
򐂰 Infrastructure and application layer planning:
When planning the application layer and design, you must consider the requirements
needed to achieve business availability metrics such as: Recovery Point Objective (RPO)
and Recovery Time Objective (RTO). Those parameters, combined with the application
server software, infrastructure layers, and SVC features, help you to build a reliable high
availability solution.

You should also consider the functionalities and features shown in Table 11-2 when planning
your high availability solution, as they can be deployed together to build a more robust
solution.

Table 11-2 Data mirroring functionalities comparison


Functionality (values listed as: Volume Mirroring (Stretched Cluster) | Remote Copy (MM/GM/GMCV)):

Synchronous data copy in two different storage systems: Yes | Yes
Identical volume identifier: Yes | No
Automatic switch to other data copy: Yes | No
Consistency group availability: No | Yes
Data copies rely on: One single SVC cluster | Two SVC clusters (or Storwize systems)
Removal of both copies: One-step operation | Two-step operation
Long distance link utilization: Mirrored data, write cache mirroring | Mirrored data only
Avoidance of long distance delay: No | Metro Mirror: No; Global Mirror: Yes

Enhanced Stretched Cluster


Introduced in V7.2, the Enhanced Stretched Cluster topology can be set in an SVC stretched
configuration. It provides additional functions that can be used regardless of whether the
stretched system is configured with or without ISLs between nodes.

Note: Extra configuration steps are required for an enhanced stretched system. If you
currently have a standard stretched system configuration, you can update this
configuration to an enhanced stretched system after you update the system software to the
minimum level of V7.2. These additional configuration steps can be done by using the
command-line interface (CLI) or the management GUI.

The topics below describe the main features and benefits of implementing ESC in your
environment:
򐂰 Site awareness:
– Each node, controller, and host must have a site definition (site 1, 2, or 3), which allows
the SVC to identify where the components are physically located.
– Connections between controller MDisks to nodes are only allowed within the same site.
– Connections between controller MDisks to nodes in the other site are ignored (path set
offline).
򐂰 Manual failover capability:
– Ability to override quorum and recover from a rolling disaster.
– Issue satask overridequorum only when the administrator has ensured the other site is
not running as a cluster.
– Access to all recovered site volume copies. This includes the mirror-half of stretched
volumes plus any single-copy volumes from the local site.
– Missing nodes must not be allowed to access shared storage and must be removed.
– When the last failed site offline node is removed, any non-local site volume-copies will
come online.
– Administrator can now begin the process of reconstructing the system objects,
including:
• Defining quorum disks in the correct sites.
• Recreating volumes which were not automatically recovered.
• Re-creating copy services relationships.


򐂰 Route I/O traffic between the nodes and controllers, optimizing traffic across the link and
reducing traffic between local nodes and remote controllers.
򐂰 Volume Mirroring:
– Each volume mirror copy is configured from storage on different sites 1 or 2 (3 is
reserved for quorum only).
– Mirror write priority can be set to latency for fast failover operation (fast failover within
40s).
– Read I/Os are sent to the local site copy (volume mirroring always reads from the local
site copy if it is in sync).
򐂰 Automatic SVC quorum based failover:
– Automatic quorum disk selection will choose one MDisk in each of sites 1, 2, 3.
Quorum MDisk on site 3 is automatically set as the active.
򐂰 Restrict I/O traffic between sites in failure conditions to ensure data consistency.
򐂰 I/O traffic is routed to minimize I/O data flows:
– Data payloads are only transferred the minimum number of times.
– There are I/O protocol control messages that flow across the link but these are very
small in comparison to the data payload.
򐂰 For write I/Os:
– Write I/O data is typically copied across the link once.
– Upper cache creates write data mirror on I/O group partner node.
– Lower cache makes use of same write data mirror, during a destage.
– For each volume copy the local site node sends the write to the local site backend.
– I/O protocol control message sent but no data sent over the site link.
򐂰 For read I/Os:
– Volume mirroring will issue reads to the local site copy if in sync.

Summary of the main steps to configure an Enhanced Stretched Cluster:


1. Assign each SVC node to a specific site.
2. Assign each backend storage system to a specific site.
3. Assign each host object to a site (valid only for version 7.5 or later)
4. Change system topology to stretched.
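The following minimal CLI sketch illustrates these four steps; the node, controller, host, and
site names are illustrative, and on Storwize systems the chnodecanister command is used
instead of chnode:

chnode -site ITSO_DC1 node1               # step 1: assign each node to site 1 or 2
chnode -site ITSO_DC2 node2
chcontroller -site ITSO_DC1 controller0   # step 2: assign each backend controller to its site
chcontroller -site ITSO_DC2 controller1
chcontroller -site ITSO_DC3 controller2   # the quorum controller at the third site
chhost -site ITSO_DC1 host_esx01          # step 3: assign each host object (V7.5 or later)
chsystem -topology stretched              # step 4: change the system topology to stretched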

Note: For best results, configure an enhanced stretched system to include at least two I/O
groups (four nodes). A system with just one I/O group cannot guarantee to maintain
mirroring of data or uninterrupted host access in the presence of node failures or system
updates.

In the unlikely event of the failure of two failure domains, you can enable a manual override
for this situation if the enhanced stretched system function is configured. You can also use
Metro Mirror or Global Mirror with either an enhanced stretched system or a standard
stretched system on a second SVC cluster for extended disaster recovery.

You configure and manage Metro Mirror or Global Mirror partnerships that include a stretched
system in the same way as other remote copy relationships. SVC supports SAN routing
technology (including FCIP links) for intersystem connections that use Metro Mirror or Global
Mirror.


Standard and Enhanced Stretched Cluster comparison


In this topic we will briefly discuss the main differences between standard stretched cluster
(pre 7.2 versions) and enhanced stretched cluster.

SVC stretched cluster hardware installation guidelines and infrastructure requirements are
the same for either standard or enhanced stretched clusters.

It is possible to convert an existing standard stretched cluster to an enhanced stretched
cluster nondisruptively at any time after the upgrade (to V7.2 or later) has completed. The main
steps on how to configure enhanced stretched clusters are shown on page 950 (“Summary of
the main steps to configure an Enhanced Stretched Cluster”).

Prior to V7.2 SVC, stretched clusters were configured without the topology parameter as it
was not available. In addition to that, all prerequisites and requirements for a stretched
solution would have to be met to provide redundancy in some of the components of the
solution.

Note: The implementation of the enhanced stretched cluster is strongly recommended
because there are many improvements in terms of performance, availability, and redundancy,
as described in the topic “Enhanced Stretched Cluster” on page 949.

In standard stretched cluster configurations, the topology parameter remains as standard


(instead of stretched) and the nodes are physically stretched among sites.

Keep in mind these important considerations in standard stretched cluster configurations:


򐂰 No automatic quorum selection:
– Quorum disks are assigned in the standard mode and do not consider the physical
location where the backend controllers are (the site parameter was implemented in V7.2).
– If a site has no suitable MDisks then less than 3 quorum disks will be configured.
– Manual interaction is needed to ensure the active quorum disk used for tie breaks is
placed in site 3.
򐂰 No manual failover capability:
– In a disaster situation, there is no capability of manual intervention to failover the
access to remaining site (automatic SVC quorum based failover).
򐂰 No policy topology based configuration rules:
– There is no protection to ensure each I/O Group has one node added to sites 1 and 2.
– There is no protection to ensure user quorum selections follow the 1 disk per site rule.
򐂰 No traffic optimization across the link:
– No remote and local I/O traffic considerations.
– No traffic I/O restriction during failure scenarios.
򐂰 Volume Mirroring:
– All volumes must be mirrored between the sites; otherwise, local hosts cannot access
the volumes in a failure scenario because they are not mirrored across both sites.
– Use of mirror_write_priority parameter is improved with enhanced stretched cluster.
– Volume mirroring always reads from the local site copy if it is in sync in an enhanced
stretched cluster, which is not enforced in standard stretched cluster.


Note: For a standard stretched cluster there are advantages and disadvantages to
latency and redundancy mode. If you want to be sure that the copy at your second
site is updated then the preferred setting is redundancy. If, however, you cannot
tolerate the longer I/O delays if there is a problem with one site or inter-site link, then
the latency should be chosen, but be aware that if a disaster occurs you might be
left with an out-of-sync copy of the data.

For enhanced stretched cluster the recommendation is to use latency (default)


mode. An enhanced stretched cluster can distinguish between problems with the
backend storage and problems with the inter-site link and adjusts the latency mode
time out slightly, depending on the circumstances, to allow the cluster to react
quickly to storage failures without generating false positives that would cause a
mirror to temporarily go out-of-sync.

Enhanced Stretched Cluster comparison with HyperSwap


The HyperSwap function is available with IBM Spectrum Virtualize V7.5, and most of the
concepts and practices that are described in this book apply to devices that are capable of
running it.

Many HyperSwap concepts, such as site awareness and the DR feature, are inherited from
the ESC function. Nevertheless, important differences between the two solutions exist, as
summarized in Table C-2.

Table C-2 Enhanced Stretched Cluster and HyperSwap comparison


Item: Enhanced Stretched Cluster | HyperSwap

Products that function is available on: SVC only | SVC with two or more I/O groups; Storwize
V7000, Storwize V5000, FlashSystem V9000

Complexity of configuration: CLI or GUI on single system; simple object creation | Multiple
step CLI-based configuration on single system; simple object creation through GUI and CLI

Sites data stored on: 2 | 2

Distance between sites: Up to 300 km | Up to 300 km

Independent copies of data maintained: 2 | 2 (4 if additionally Volume Mirroring to two pools
in each site)

Technology for host to access multiple copies and automatically fail over: Standard host
multipathing driver | Standard host multipathing driver

Cache retained if only one site online?: No | Yes

Host-to-storage-system path optimization: Manual configuration of preferred node per volume
prior to 7.5, automatic based on host site as HyperSwap from 7.5 | Automatic configuration
based on host site (requires ALUA/TPGS support from multipathing driver)

Synchronization and resynchronization of copies: Automatic | Automatic

Stale consistent data retained during resynchronization for disaster recovery?: No | Yes

Scope of failure and resynchronization: Single volume | One or more volumes, user configurable

Ability to use FlashCopy together with High Availability solution: Yes (though no awareness of
site locality of data) | Limited: can use FlashCopy maps with HyperSwap volume as source,
avoids sending data across link between sites

Ability to use Metro Mirror, Global Mirror, or Global Mirror Change Volume together with High
Availability solution: One remote copy, can maintain current copies on up to four sites | No

Maximum highly available volume count: 4096 | 1024

Minimum paths number required per LUN per host port: 2 | 4

Minimum I/O groups count: One I/O group is supported but not recommended | 2

Licensing: Included in base product | Requires Remote Mirroring license for volumes. Exact
license requirements may vary by product.

The Enhanced Stretched Cluster function uses a stretched system topology, and the
HyperSwap function uses a hyperswap topology. These both spread the nodes of the system
across two sites, with storage at a third site acting as a tiebreaking quorum device.

The topologies differ in how the nodes are distributed across the sites:
򐂰 For each I/O group in the system, the stretched topology has one node on one site, and
one node on the other site. The topology works with any number of I/O groups from 1 to 4,
but as the I/O group is split into two locations, this is only available with SVC as the nodes
from each I/O group can be physically separated.
򐂰 The HyperSwap topology locates both nodes of an I/O group in the same site, making it
possible to use with either Storwize or SVC products. Therefore, to get a volume stored at
both sites, at least two I/O groups (or control enclosures) are required.

The stretched topology uses fewer system resources, allowing a greater number of highly
available volumes to be configured. However, during a disaster that makes one site
unavailable, the system cache on the nodes of the surviving site will be disabled.

The HyperSwap topology uses additional system resources to support a fully independent
cache on each site, allowing full performance even if one site is lost. In some environments a
HyperSwap topology will provide better performance than a stretched topology.


Both topologies allow full configuration of the highly available volumes through a single point
of configuration. The Enhanced Stretched Cluster function may be fully configured through
either the GUI or the CLI.
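As an illustration of the CLI path, the following sketch shows how a stretched topology is typically assigned. The site, node, and controller names are hypothetical examples, and the exact commands and parameters can vary by code level, so verify them against the product documentation before use:

   chsite -name siteA 1
   chsite -name siteB 2
   chsite -name siteQ 3
   chnode -site siteA node1
   chnode -site siteB node2
   chcontroller -site siteA controller0
   chcontroller -site siteB controller1
   chcontroller -site siteQ controller2
   chsystem -topology stretched

After the topology is set, volumes that are created with two copies (one pool per site) become highly available across the two sites.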

Note: For more information about IBM Storwize HyperSwap implementation, refer to IBM
Storwize V7000, Spectrum Virtualize, HyperSwap, and VMware implementation,
SG24-8317.

Recommendations and best practices


Because it is not possible to mix HyperSwap I/O groups and stretched I/O groups, you must decide between the two types of cluster. The following lists help you determine which solution fits your environment better.

Stretched Cluster:
򐂰 Requirement for Metro or Global Mirror to an additional cluster (possibly also stretched), to protect against software failures or to maintain a copy at a larger distance.
򐂰 Concurrent I/O to the same volume from hosts in different sites is a significant portion of
the workload. Examples:
– Oracle RAC
– Large Windows Cluster-Shared Volumes (Hyper-V) that are shared cross-site
– Large VMware data stores that are shared cross-site
– OpenVMS clusters
򐂰 Requirement for the maximum number of HA volumes, host objects, host ports, and host paths to the preferred node.
򐂰 Requirement for comprehensive FlashCopy support (with broad range of backup/restore
options, minimized RTO for FlashCopy restore, usage of FlashCopy Manager).
򐂰 Requirement for iSCSI access to HA volumes.
򐂰 Requirement for automatic provisioning via VSC.

HyperSwap:
򐂰 Storwize Family: active/active solution for Storwize V7000 or V5000 without requiring
SVC.
򐂰 SVC: requirement for additional protection against node failures or back-end storage
failures (four copies by combining Metro Mirror and Volume mirror).
򐂰 I/O per volume is mostly from hosts in one site. Concurrent I/O to the same volume from
hosts in different sites is only a minor portion of the workload. Examples:
– Active/passive host clusters without concurrent volume access, such as traditional Windows MSCS and traditional UNIX clusters such as AIX HACMP™
– Small Windows Cluster-Shared Volumes (Hyper-V) that are shared single-site
– Small VMware data stores that are shared single-site
򐂰 Requirement for cache-enhanced performance even during disasters.
򐂰 Requirement for guaranteed application-wide failover characteristics (consistency groups).
򐂰 Requirement for additional “golden copy” during re-synchronization.


򐂰 Volume mirroring needed to convert between fully allocated, thin-provisioned, and compressed volume types.
򐂰 Requirement for internal SSDs within SVC nodes (CF8, CG8) or expansion units (DH8) for tiering instead of external flash storage virtualization.

Non-ISL stretched cluster configuration


In a non-ISL configuration, each IBM SVC I/O Group consists of two independent SVC nodes.
This configuration is similar to a standard SAN Volume Controller environment with the main
difference being that nodes are distributed across two failure domains. If a node fails, the
other node in the same I/O Group takes over the workload, which is standard in an SVC
environment.

Volume mirroring provides a consistent data copy in both sites. If one storage subsystem fails,
the remaining subsystem processes the I/O requests. The combination of SVC node
distribution in two independent data centers and a copy of data in two independent storage
subsystems creates a new level of availability, the stretched cluster.

If all SVC nodes or the storage system in a single site fail, the other SVC nodes take over the server load by using the remaining storage systems. The volume ID and its assignment to the server remain the same. No server reboot, no failover scripts, and therefore no script maintenance are required. This capability can be combined with clustered application configurations to provide failover at the server level as well.

However, you must consider that a stretched cluster typically requires a specific setup and
might exhibit substantially reduced performance.

Figure C-1 on page 956 shows an example of a non-ISL stretched cluster configuration, which has been supported since V5.1.


Figure C-1 Standard SVC stretched environment with non-ISL connections

The stretched cluster uses the SVC volume mirroring function. Volume mirroring allows the creation of one volume with two copies of MDisk extents; these are not two volumes with the same data, but two copies of the same volume, each in a different storage pool. Therefore, volume mirroring can minimize the effect on volume availability if one set of MDisks goes offline. The resynchronization between both copies after recovery from a failure is incremental; the SVC starts the resynchronization process automatically.

As with a standard volume, each mirrored volume is owned by one I/O Group with a preferred node. Therefore, the mirrored volume goes offline if the whole I/O Group goes offline. The preferred node performs all I/O operations, that is, both reads and writes. The preferred node can be set manually, and it should always be the node at the same site as the server that accesses the volume.
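As a minimal sketch, a mirrored volume with one copy in each site and a preferred node local to the server could be created with a command along the following lines. The pool names, size, volume name, and node name are hypothetical examples, and the exact options can vary by code level:

   mkvdisk -iogrp 0 -mdiskgrp POOL_SITE1:POOL_SITE2 -size 500 -unit gb -copies 2 -node node1 -name app_vol01

A second copy can also be added to an existing single-copy volume later, for example with addvdiskcopy -mdiskgrp POOL_SITE2 app_vol01.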

The quorum disk keeps the status of the mirrored volume. The last status (in sync or not in
sync) and the definitions of the primary and secondary volume copy are saved there.
Therefore, an active quorum disk is required for volume mirroring. To ensure data
consistency, the SVC disables all mirrored volumes if no access exists to any quorum disk
candidate (mainly in disaster scenarios).
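As a sketch, you can list the quorum disk candidates and move the active quorum to an MDisk at the third site with commands similar to the following; the MDisk ID and quorum index shown are hypothetical, and the exact syntax may differ by code level:

   lsquorum
   chquorum -mdisk 8 2
   chquorum -active 2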

Consider the following preferred practices for stretched cluster in a non-ISL configuration:
򐂰 Drive read I/O to the local storage system.
򐂰 For distances less than 10 km (6.2 miles), drive the read I/O to the faster of the two disk
subsystems if they are not identical.
򐂰 Consider long-distance links using Long Wave SFPs.
򐂰 The preferred node must stay at the same site as the server that is accessing the volume.
򐂰 The volume mirroring primary copy must stay at the same site as the server that is
accessing the volume to avoid any potential latency effect where the longer distance
solution is implemented.


In many cases, no independent third site is available. It is possible to use an existing building
or computer room from the two main sites to create a third, independent failure domain.

Consider the following information when planning the solution:


򐂰 The third failure domain needs an independent power supply or uninterruptible power
supply. If the hosting site fails, the third failure domain must continue to operate.
򐂰 A separate storage controller for the active SVC quorum disk is required. Otherwise, the
SVC loses multiple quorum disk candidates at the same time if a single storage
subsystem fails.

As shown in Figure C-1 on page 956, the setup is similar to a standard SVC environment, but
the nodes are distributed to two sites. The GUI representation of a stretched cluster is
illustrated in Figure C-2.

Figure C-2 Stretched cluster overview in GUI

The SVC nodes and data are equally distributed across two separate sites with independent
power sources, which are named as separate failure domains (Failure Domain 1 and Failure
Domain 2). The quorum disk is in a third site with a separate power source (Failure Domain
3).

If the non-ISL configuration is implemented over a 10 km (6.2 mile) distance, passive WDM devices (without power) can be used to pool multiple fiber-optic links with different wavelengths into one or two connections between both sites. Small form-factor pluggable transceivers (SFPs) with different wavelengths, or “colored SFPs” (that is, SFPs that are used in Coarse Wavelength Division Multiplexing (CWDM) devices), are required in this configuration.

The maximum distance between both major sites is limited to 40 km (24.8 miles).

To prevent the risk of burst traffic caused by a lack of buffer-to-buffer credits, the link speed must be limited. The maximum link speed depends on the cable length between the nodes in the same I/O Group, as shown in Table C-3.

Table C-3 SVC code level lengths and speed

SVC code level | Minimum length | Maximum length | Maximum link speed
>= SVC 5.1 | 0 km | 10 km (6.2 miles) | 8 Gbps
>= SVC 6.3 | 10 km (6.2 miles) | 20 km (12.4 miles) | 4 Gbps
>= SVC 6.3 | 20 km (12.4 miles) | 40 km (24.8 miles) | 2 Gbps

The configuration covers the following failover cases:


򐂰 Power off FC switch 1: FC switch 2 takes over the load and routes I/O to SVC node 1 and
SVC node 2.
򐂰 Power off SVC node 1: SVC node 2 takes over the load and serves the volumes to the server. SVC node 2 changes the cache mode to write-through to avoid data loss in case SVC node 2 also fails.
򐂰 Power off storage 1: The SVC waits a short time (15 - 30 seconds), pauses volume copies
on storage 1, and continues I/O operations by using the remaining volume copies on
storage 2.
򐂰 Power off site 1: Local servers no longer have access to the local switches, which causes a loss of access. You can optionally avoid this loss of access by using more fiber-optic links between site 1 and site 2 for server access.

The same scenarios are valid for site 2 and similar scenarios apply in a mixed failure
environment, for example, the failure of switch 1, SVC node 2, and storage 2. No manual
failover or failback activities are required because the SVC performs the failover or failback
operation.

The use of AIX Live Partition Mobility or VMware vMotion can increase the number of use cases significantly. Online system migrations are possible, including migrations of running virtual machines and applications, which makes it much easier to handle maintenance operations in an appropriate way.

Advantages
A non-ISL configuration includes the following advantages:
򐂰 The business continuity solution is distributed across two independent data centers.
򐂰 The configuration is similar to a standard SVC clustered system.
򐂰 Limited hardware effort: Passive WDM devices can be used, but are not required.

Requirements
A non-ISL configuration includes the following requirements:
򐂰 Four independent fiber-optic links for each I/O Group between both data centers.
򐂰 Long-wave SFPs with support over long distance for direct connection to remote SAN.
򐂰 Optional usage of passive WDM devices.
򐂰 Passive WDM device: No power is required for operation.
򐂰 “Colored SFPs” to make different wavelength available.
򐂰 “Colored SFPs” must be supported by the switch vendor.
򐂰 Two independent fiber-optic links between site 1 and site 2 are recommended.
򐂰 Third site for quorum disk placement.
򐂰 Quorum disk storage system must use FC for attachment with similar requirements, such
as Metro Mirror storage (80 ms round-trip delay time, which is 40 ms in each direction).

When possible, use two independent fiber-optic links between site 1 and 2.

Note: For more details about non-ISL configurations, consult the IBM Redbook: IBM SAN
Volume Controller Stretched Cluster with PowerVM and PowerHA, SG24-8142.


Inter-switch link configuration


Where a distance beyond 40 km (24.8 miles) between site 1 and site 2 is required, a different configuration must be applied. The setup is similar to a standard SVC environment, but the nodes communicate over long distance by using ISLs between both sites over active or passive WDM and a different SAN configuration. In this type of configuration, the use of Private and Public SANs is required to provide full redundancy for the node-to-node communication and different layers of traffic control.
Figure C-3 shows a configuration with active/passive WDM within Private and Public SAN
fabrics.

Figure C-3 Connection with active/passive WDM and ISL

The stretched cluster configuration that is shown in Figure C-3 on page 959 supports
distances of up to 300 km (186.4 miles), which is the same as the recommended distance for
Metro Mirror.

Technically, the SVC tolerates a round-trip delay of up to 80 ms between nodes. Cache mirroring traffic (rather than Metro Mirror traffic) is sent across the inter-site link, and data is mirrored to back-end storage by using volume mirroring.

Data is written by the preferred node to the local storage, and the inter-node communication is responsible for sending the data to the remote storage. The SCSI write protocol results in two round trips. This latency is hidden from the application by the write cache, which is held by different components of the solution.

The stretched cluster is often used to move the workload between servers at separate sites.
VMotion or equivalent products can be used to move applications between servers; therefore,
applications no longer necessarily issue I/O requests to the local SVC nodes.


ISL configurations are highly recommended to be used in conjunction with Enhanced Stretched Cluster implementations. The Enhanced Stretched Cluster function has several I/O traffic control improvements that can increase the performance of your environment.

With the addition of host site awareness and I/O traffic control, SCSI write commands from local hosts to remote SVC nodes are blocked to avoid large latency in traffic round trips. All remote writes (local to remote) are done through node-to-node traffic on the private fabrics. For stretched cluster configurations in a long-distance environment, we advise that you direct host I/O to the local site (and to the local storage subsystems).
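Host site awareness relies on each host object being assigned to a site. A minimal sketch of that assignment follows; the host and site names are hypothetical examples, and the site parameter is available only on code levels that support site awareness (7.5 and later):

   chhost -site siteA host_esx_a
   chhost -site siteB host_esx_b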

Advantages
This configuration includes the following advantages:
򐂰 ISLs enable distances greater than 40 km (24.85 miles) between failure domains.
򐂰 Active and passive WDM devices can be used between failure domains.
򐂰 The supported distance is up to 300 km (186.41 miles) with WDM.

Requirements
A stretched cluster with ISL configuration must meet the following requirements:
򐂰 Four independent, extended SAN fabrics, as shown in Figure C-3 on page 959. Those fabrics are named Pub SAN 1, Pub SAN 2, Priv SAN 1, and Priv SAN 2. Each Public or Private SAN can be created with a dedicated FC switch or director, or it can be a virtual SAN in a Cisco or Brocade virtual FC switch or director.
򐂰 Minimum of two ports per SVC node attached to the private SANs.
򐂰 Minimum of two ports per SVC node attached to the public SANs.
򐂰 SVC volume mirroring exists between site 1 and site 2.
򐂰 Hosts and storage controllers attached to the public SANs.
򐂰 The third site quorum disk attached to the public SANs.
Figure C-4 on page 961 shows the possible configurations with a virtual SAN.


Figure C-4 ISL configuration with a virtual SAN

Figure C-5 shows the possible configurations with physical SAN switches in Private and
Public fabrics.

Figure C-5 ISL configuration with physical SAN switches as Private and Public fabrics


򐂰 Use a third site to house a quorum disk. Connections to the third site can be through FCIP
because of the distance (no FCIP or FC switches were shown in the previous layouts for
simplicity). In many cases, no independent third site is available.
It is possible to use an existing building from the two main sites to create a third,
independent failure domain, but you have the following considerations:
– The third failure domain needs an independent power supply or uninterruptible power
supply. If the host site fails, the third failure domain needs to continue to operate.
– Each site (failure domain) must be placed in a separate fire compartment.
– FC cabling must not go through another site (failure domain). Otherwise, a fire in one
failure domain could destroy the links (and therefore the access) to the SVC quorum
disk.
Applying these considerations, the SVC clustered system can be protected, even though
two failure domains are in the same building. Consider an IBM Advanced Technical
Support (ATS) review or processing a request for price quotation (RRQ)/Solution for
Compliance in a Regulated Environment (SCORE) to review the proposed configuration.
򐂰 Four active/passive WDMs, two for each site, are needed to extend the public and private
SAN over a distance.
򐂰 Place independent storage systems at the primary and secondary sites. Use volume
mirroring to mirror the host data between storage systems at the two sites.
򐂰 The SVC nodes that are in the same I/O Group must be split between the two remote sites.

Note: For more details about ISL configurations, consult the IBM Redbook: IBM SAN Volume Controller Stretched Cluster with PowerVM and PowerHA, SG24-8142.

FCIP configuration
In this configuration, FCIP links are used between failure domains. SAN Volume Controller
support for FCIP was introduced in V6.4.

This configuration is a variation of the ISL configuration that was described previously in this
chapter, and therefore many of the same requirements apply.

Figure 11-385 shows the connection diagram using FCIP connections between failure
domains.


Figure 11-385 FCIP connections in SVC stretched cluster configurations

Bandwidth reduction and planning


Buffer credits, which are also called buffer-to-buffer credits, are used as a flow control method
by FC technology and represent the number of frames that a port can store.

Therefore, buffer-to-buffer credits are necessary to have multiple FC frames in parallel flight.
An appropriate number of buffer-to-buffer credits are required for optimal performance. The
number of buffer credits to achieve the maximum performance over a specific distance
depends on the speed of the link, as shown in the following examples:
򐂰 1 buffer credit = 2 km (1.2 miles) at 1 Gbps
򐂰 1 buffer credit = 1 km (.62 miles) at 2 Gbps
򐂰 1 buffer credit = 0.5 km (.3 miles) at 4 Gbps
򐂰 1 buffer credit = 0.25 km (.15 miles) at 8 Gbps
򐂰 1 buffer credit = 0.125 km (0.08 miles) at 16 Gbps

These guidelines give the minimum numbers. The performance drops if insufficient buffer
credits exist, according to the link distance and link speed, as shown in Table C-4.
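As an illustrative calculation based on these guidelines, a 20 km (12.4 mile) link running at 8 Gbps consumes one credit per 0.25 km of fiber, so it needs roughly 20 / 0.25 = 80 buffer-to-buffer credits to stay fully utilized; the same 20 km link at 4 Gbps needs only about 40 credits.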

Table C-4 FC link speed buffer-to-buffer credits and distance

FC link speed | Buffer-to-buffer credits for 10 km (6.2 miles) | Distance with eight credits
1 Gbps | 5 | 16 km (9.9 miles)
2 Gbps | 10 | 8 km (4.9 miles)
4 Gbps | 20 | 4 km (2.4 miles)
8 Gbps | 40 | 2 km (1.2 miles)


The number of buffer-to-buffer credits that is provided by an SVC FC host bus adapter (HBA) is limited. These numbers are determined by the hardware of the HBA and cannot be changed. We suggest that you use 2145-DH8 nodes for distances longer than 4 km (2.4 miles) to provide enough buffer-to-buffer credits at a reasonable FC speed. Table 11-3 shows the different types of HBA cards and a comparison of the buffer credits for each one.

Table 11-3 Buffer credits per port available in SVC nodes

HBA adapter | Buffer credits per port | Maximum supported distance with LW SFP
8 Gb (4-port cards) | 41 | 10 km (6.2 miles)
16 Gb (2-port cards) | 80 | 10 km (6.2 miles)
16 Gb (4-port cards) (a) | 40 | 5 km (3.1 miles)

a. Available only with IBM Spectrum Virtualize Software version 7.6

Technical guidelines and publications


For more information about the SVC Stretched Cluster, Enhanced Stretched Cluster, and
HyperSwap including planning, implementation, configuration steps, and troubleshooting,
consult the following resources:
򐂰 IBM SAN Volume Controller Enhanced Stretched Cluster with VMware, SG24-8211
򐂰 IBM SAN Volume Controller Stretched Cluster with PowerVM and PowerHA, SG24-8142
򐂰 IBM Spectrum Virtualize and IBM Spectrum Scale in an Enhanced Stretched Cluster
Implementation, REDP-5224
򐂰 IBM Storwize V7000, Spectrum Virtualize, HyperSwap, and VMware implementation,
SG24-8317
򐂰 IBM System Storage SAN Volume Controller and Storwize V7000 Best Practices and
Performance Guidelines, SG24-7521
򐂰 IBM SAN Volume Controller Knowledge Center:
http://www-01.ibm.com/support/knowledgecenter/STPVGU/welcome


Related publications

The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this book.

IBM Redbooks
The following IBM Redbooks publications provide additional information about the topic in this
document. Note that some publications referenced in this list might be available in softcopy
only.
򐂰 Implementing the IBM System Storage SAN Volume Controller V7.2, SG24-7933
򐂰 Implementing the IBM Storwize V7000 V7.2, SG24-7938
򐂰 IBM b-type Gen 5 16 Gbps Switches and Network Advisor, SG24-8186
򐂰 Introduction to Storage Area Networks and System Networking, SG24-5470
򐂰 IBM SAN Volume Controller and IBM FlashSystem 820: Best Practices and Performance
Capabilities, REDP-5027
򐂰 Implementing the IBM SAN Volume Controller and FlashSystem 820, SG24-8172
򐂰 Implementing IBM FlashSystem 840, SG24-8189
򐂰 IBM FlashSystem in IBM PureFlex System Environments, TIPS1042
򐂰 IBM FlashSystem 840 Product Guide, TIPS1079
򐂰 IBM FlashSystem 820 Running in an IBM Storwize V7000 Environment, TIPS1101
򐂰 Implementing FlashSystem 840 with SAN Volume Controller, TIPS1137
򐂰 IBM FlashSystem V840 Enterprise Performance Solution, TIPS1158
򐂰 IBM Midrange System Storage Implementation and Best Practices Guide, SG24-6363
򐂰 IBM System Storage b-type Multiprotocol Routing: An Introduction and Implementation,
SG24-7544
򐂰 IBM Tivoli Storage Area Network Manager: A Practical Introduction, SG24-6848
򐂰 Tivoli Storage Productivity Center for Replication for Open Systems, SG24-8149
򐂰 Tivoli Storage Productivity Center V5.2 Release Guide, SG24-8204
򐂰 Implementing an IBM b-type SAN with 8 Gbps Directors and Switches, SG24-6116

You can search for, view, download, or order these documents and other Redbooks,
Redpapers, Web Docs, draft and additional materials, at the following website:
ibm.com/redbooks


Other resources
These publications are also relevant as further information sources:
򐂰 IBM System Storage Master Console: Installation and User’s Guide, GC30-4090
򐂰 IBM System Storage Open Software Family SAN Volume Controller: CIM Agent
Developers Reference, SC26-7545
򐂰 IBM System Storage Open Software Family SAN Volume Controller: Command-Line
Interface User's Guide, SC26-7544
򐂰 IBM System Storage Open Software Family SAN Volume Controller: Configuration Guide,
SC26-7543
򐂰 IBM System Storage Open Software Family SAN Volume Controller: Host Attachment
Guide, SC26-7563
򐂰 IBM System Storage Open Software Family SAN Volume Controller: Installation Guide,
SC26-7541
򐂰 IBM System Storage Open Software Family SAN Volume Controller: Planning Guide,
GA22-1052
򐂰 IBM System Storage Open Software Family SAN Volume Controller: Service Guide,
SC26-7542
򐂰 IBM System Storage SAN Volume Controller - Software Installation and Configuration
Guide, SC23-6628
򐂰 IBM System Storage SAN Volume Controller V6.2.0 - Software Installation and
Configuration Guide, GC27-2286
򐂰 IBM System Storage SAN Volume Controller 6.2.0 Configuration Limits and Restrictions,
S1003799
򐂰 IBM TotalStorage Multipath Subsystem Device Driver User’s Guide, SC30-4096
򐂰 IBM XIV and SVC Best Practices Implementation Guide
http://ibm.co/1bk64gW
򐂰 Considerations and Comparisons between IBM SDD for Linux and DM-MPIO
http://ibm.co/1CD1gxG

Referenced websites
These websites are also relevant as further information sources:
򐂰 IBM Storage home page
http://www.storage.ibm.com
򐂰 SAN Volume Controller supported platform
http://ibm.co/1FNjddm
򐂰 SAN Volume Controller IBM Knowledge Center
http://www-01.ibm.com/support/knowledgecenter/STPVGU/welcome
򐂰 Cygwin Linux-like environment for Windows
http://www.cygwin.com


򐂰 Open source site for SSH for Windows and Mac


http://www.openssh.com/windows.html
򐂰 Sysinternals home page
http://www.sysinternals.com
򐂰 Download site for Windows SSH freeware
http://www.chiark.greenend.org.uk/~sgtatham/putty

Help from IBM


IBM Support and downloads
ibm.com/support

IBM Global Services


ibm.com/services
