Implementing the IBM System Storage SAN Volume Controller with IBM Spectrum Virtualize V8.1
Jon Tate
Erwan Auffret
Pawel Brodacki
Libor Miklas
Glen Routley
James Whitaker
Redbooks
International Technical Support Organization
February 2018
SG24-7933-06
Note: Before using this information and the product it supports, read the information in “Notices” on
page xiii.
This edition applies to IBM Spectrum Virtualize V8.1 and the associated hardware and software detailed
within. Note that the screen captures included within this book might differ from the generally available (GA)
version, because parts of this book were written with pre-GA code.
© Copyright International Business Machines Corporation 2011, 2018. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule
Contract with IBM Corp.
Contents
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiv
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xviii
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xviii
Chapter 3. Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.1 General planning rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.1.1 Basic planning flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.2 Planning for availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.3 Connectivity planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.4 Physical planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
3.4.1 Planning for power outages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
3.4.2 Cabling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
3.5 Planning IP connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.5.1 Firewall planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
3.6 SAN configuration planning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
3.6.1 Physical topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
3.6.2 Zoning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
3.6.3 SVC cluster system zone . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
3.6.4 Back-end storage zones . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
3.6.5 Host zones . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
3.6.6 Zoning considerations for Metro Mirror and Global Mirror . . . . . . . . . . . . . . . . . . 70
3.6.7 Port designation recommendations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
3.6.8 Port masking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
3.7 iSCSI configuration planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
3.7.1 iSCSI protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
3.7.2 Topology and IP addressing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
3.7.3 General preferences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
3.7.4 iSCSI back-end storage attachment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
3.8 Back-end storage subsystem configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
3.9 Storage pool configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
3.9.1 The storage pool and SAN Volume Controller cache relationship . . . . . . . . . . . . 79
3.10 Volume configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
3.10.1 Planning for image mode volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
3.10.2 Planning for thin-provisioned volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
3.11 Host attachment planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
3.11.1 Queue depth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
3.11.2 Offloaded data transfer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
3.12 Host mapping and LUN masking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
3.12.1 Planning for large deployments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
3.13 NPIV planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
3.14 Advanced Copy Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
3.14.1 FlashCopy guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
3.14.2 Combining FlashCopy and Metro Mirror or Global Mirror . . . . . . . . . . . . . . . . . . 87
3.14.3 Planning for Metro Mirror and Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
3.15 SAN boot support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
3.16 Data migration from a non-virtualized storage subsystem . . . . . . . . . . . . . . . . . . . . . 91
3.17 SAN Volume Controller configuration backup procedure . . . . . . . . . . . . . . . . . . . . . . 92
3.18 IBM Spectrum Virtualize Port Configurator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
3.19 Performance considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
3.19.1 SAN. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
3.19.2 Back-end storage subsystems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
3.19.3 SAN Volume Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
3.19.4 IBM Real-time Compression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
3.19.5 Performance monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
5.7 Hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
5.8 Copy Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
5.9 Access. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
5.9.1 Users. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
5.9.2 Audit log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
5.10 Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
5.10.1 Notifications menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
5.10.2 Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
5.10.3 System menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
5.10.4 Support menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
5.10.5 GUI preferences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
5.11 Additional frequent tasks in GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
5.11.1 Renaming components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
5.11.2 Changing system topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
7.7 HyperSwap and the mkvolume command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
7.7.1 Volume manipulation commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
7.8 Mapping volumes to host after creation of volume . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
7.8.1 Mapping newly created volumes to the host using the wizard . . . . . . . . . . . . . . 295
7.9 Migrating a volume to another storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
7.10 Migrating volumes using the volume copy feature . . . . . . . . . . . . . . . . . . . . . . . . . . 303
7.11 Volume operations using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
7.11.1 Creating a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
7.11.2 Volume information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
7.11.3 Creating a thin-provisioned volume. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
7.11.4 Creating a volume in image mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
7.11.5 Adding a mirrored volume copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
7.11.6 Adding a compressed volume copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
7.11.7 Splitting a mirrored volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
7.11.8 Modifying a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
7.11.9 Deleting a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
7.11.10 Using volume protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
7.11.11 Expanding a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
7.11.12 Assigning a volume to a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
7.11.13 Showing volumes to host mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
7.11.14 Deleting a volume to host mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
7.11.15 Migrating a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
7.11.16 Migrating a fully managed volume to an image mode volume . . . . . . . . . . . . 325
7.11.17 Shrinking a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
7.11.18 Showing a volume on an MDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
7.11.19 Showing which volumes are using a storage pool . . . . . . . . . . . . . . . . . . . . . 327
7.11.20 Showing which MDisks are used by a specific volume . . . . . . . . . . . . . . . . . . 328
7.11.21 Showing from which storage pool a volume has its extents . . . . . . . . . . . . . . 328
7.11.22 Showing the host to which the volume is mapped . . . . . . . . . . . . . . . . . . . . . 330
7.11.23 Showing the volume to which the host is mapped . . . . . . . . . . . . . . . . . . . . . 330
7.11.24 Tracing a volume from a host back to its physical disk . . . . . . . . . . . . . . . . . . 331
7.12 I/O throttling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
7.12.1 Define a volume throttle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
7.12.2 View existing volume throttles. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
7.12.3 Remove a volume throttle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
8.5.3 Adding and deleting a host port by using the CLI . . . . . . . . . . . . . . . . . . . . . . . . 385
8.5.4 Host cluster operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
11.1.12 Reverse FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484
11.1.13 FlashCopy and image mode Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486
11.1.14 FlashCopy mapping events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487
11.1.15 Thin provisioned FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489
11.1.16 Serialization of I/O by FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490
11.1.17 Event handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490
11.1.18 Asynchronous notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 491
11.1.19 Interoperation with Metro Mirror and Global Mirror . . . . . . . . . . . . . . . . . . . . . 491
11.1.20 FlashCopy attributes and limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
11.2 Managing FlashCopy by using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
11.2.1 FlashCopy presets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
11.2.2 FlashCopy window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 496
11.2.3 Creating a FlashCopy mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497
11.2.4 Single-click snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507
11.2.5 Single-click clone . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 509
11.2.6 Single-click backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 510
11.2.7 Creating a FlashCopy Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
11.2.8 Creating FlashCopy mappings in a Consistency Group . . . . . . . . . . . . . . . . . . 512
11.2.9 Showing related Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 515
11.2.10 Moving FlashCopy mappings across Consistency Groups. . . . . . . . . . . . . . . 516
11.2.11 Removing FlashCopy mappings from Consistency Groups . . . . . . . . . . . . . . 517
11.2.12 Modifying a FlashCopy mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 518
11.2.13 Renaming FlashCopy mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
11.2.14 Deleting FlashCopy mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
11.2.15 Deleting a FlashCopy Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
11.2.16 Starting FlashCopy mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 524
11.2.17 Stopping FlashCopy mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525
11.2.18 Memory allocation for FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 526
11.3 Transparent Cloud Tiering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 528
11.3.1 Considerations for using Transparent Cloud Tiering. . . . . . . . . . . . . . . . . . . . . 529
11.3.2 Transparent Cloud Tiering as backup solution and data migration. . . . . . . . . . 529
11.3.3 Restore using Transparent Cloud Tiering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530
11.3.4 Transparent Cloud Tiering restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530
11.4 Implementing Transparent Cloud Tiering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531
11.4.1 DNS Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531
11.4.2 Enabling Transparent Cloud Tiering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532
11.4.3 Creating cloud snapshots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
11.4.4 Managing cloud snapshots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
11.4.5 Restoring cloud snapshots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 539
11.5 Volume mirroring and migration options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 542
11.6 Remote Copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 544
11.6.1 IBM SAN Volume Controller and Storwize system layers. . . . . . . . . . . . . . . . . 544
11.6.2 Multiple IBM Spectrum Virtualize systems replication. . . . . . . . . . . . . . . . . . . . 546
11.6.3 Importance of write ordering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 548
11.6.4 Remote copy intercluster communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550
11.6.5 Metro Mirror overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 551
11.6.6 Synchronous remote copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 552
11.6.7 Metro Mirror features. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 553
11.6.8 Metro Mirror attributes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 553
11.6.9 Practical use of Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 554
11.6.10 Global Mirror overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 555
11.6.11 Asynchronous remote copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 555
11.6.12 Global Mirror features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 557
11.6.13 Using Change Volumes with Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . 559
11.6.14 Distribution of work among nodes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 561
11.6.15 Background copy performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 561
11.6.16 Thin-provisioned background copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 562
11.6.17 Methods of synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 562
11.6.18 Practical use of Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563
11.6.19 IBM Spectrum Virtualize HyperSwap topology . . . . . . . . . . . . . . . . . . . . . . . . 563
11.6.20 Consistency Protection for GM/MM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563
11.6.21 Valid combinations of FlashCopy, Metro Mirror, and Global Mirror . . . . . . . . 564
11.6.22 Remote Copy configuration limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 564
11.6.23 Remote Copy states and events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 565
11.7 Remote Copy commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 572
11.7.1 Remote Copy process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 572
11.7.2 Listing available system partners . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 573
11.7.3 Changing the system parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 573
11.7.4 System partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 575
11.7.5 Creating a Metro Mirror/Global Mirror consistency group . . . . . . . . . . . . . . . . . 576
11.7.6 Creating a Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . 576
11.7.7 Changing Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . 576
11.7.8 Changing Metro Mirror/Global Mirror consistency group . . . . . . . . . . . . . . . . . 577
11.7.9 Starting Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . 577
11.7.10 Stopping Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . 577
11.7.11 Starting Metro Mirror/Global Mirror consistency group . . . . . . . . . . . . . . . . . . 578
11.7.12 Stopping Metro Mirror/Global Mirror consistency group . . . . . . . . . . . . . . . . . 578
11.7.13 Deleting Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . 578
11.7.14 Deleting Metro Mirror/Global Mirror consistency group. . . . . . . . . . . . . . . . . . 578
11.7.15 Reversing Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . 579
11.7.16 Reversing Metro Mirror/Global Mirror consistency group . . . . . . . . . . . . . . . . 579
11.8 Native IP replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 579
11.8.1 Native IP replication technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 579
11.8.2 IP partnership limitations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 581
11.8.3 IP Partnership and data compression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
11.8.4 VLAN support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
11.8.5 IP partnership and terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 584
11.8.6 States of IP partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 585
11.8.7 Remote copy groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 586
11.8.8 Supported configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 588
11.9 Managing Remote Copy by using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 600
11.9.1 Creating Fibre Channel partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 603
11.9.2 Creating remote copy relationships. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 604
11.9.3 Creating Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 608
11.9.4 Renaming remote copy relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 614
11.9.5 Renaming a remote copy consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . 615
11.9.6 Moving stand-alone remote copy relationships to Consistency Group . . . . . . . 616
11.9.7 Removing remote copy relationships from Consistency Group . . . . . . . . . . . . 617
11.9.8 Starting remote copy relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 618
11.9.9 Starting a remote copy Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . 619
11.9.10 Switching a relationship copy direction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 619
11.9.11 Switching a Consistency Group direction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621
11.9.12 Stopping remote copy relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 622
11.9.13 Stopping a Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 624
11.9.14 Deleting remote copy relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 625
11.9.15 Deleting a Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 626
11.9.16 Remote Copy memory allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 627
11.10 Troubleshooting remote copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 628
11.10.1 1920 error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 628
11.10.2 1720 error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 631
13.3 Configuration backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 701
13.3.1 Backup using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 702
13.3.2 Saving the backup using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 703
13.4 Software update . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 705
13.4.1 Precautions before the update . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 705
13.4.2 IBM Spectrum Virtualize update test utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . 706
13.4.3 Update procedure to IBM Spectrum Virtualize V8.1 . . . . . . . . . . . . . . . . . . . . . 707
13.4.4 Updating IBM Spectrum Virtualize with a Hot Spare Node . . . . . . . . . . . . . . . . 713
13.4.5 Updating IBM SAN Volume Controller internal drives code . . . . . . . . . . . . . . . 714
13.4.6 Updating the IBM SAN Volume Controller system manually . . . . . . . . . . . . . . 718
13.5 Health Checker feature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 719
13.6 Troubleshooting and fix procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 720
13.6.1 Managing event log. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 722
13.6.2 Running a fix procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 723
13.6.3 Resolve alerts in a timely manner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 727
13.6.4 Event log details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 727
13.7 Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 728
13.7.1 Email notifications and the Call Home function. . . . . . . . . . . . . . . . . . . . . . . . . 729
13.7.2 Disabling and enabling notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 734
13.7.3 Remote Support Assistance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 734
13.7.4 SNMP Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 739
13.7.5 Syslog notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 741
13.8 Audit log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 742
13.9 Collecting support information using the GUI and the CLI . . . . . . . . . . . . . . . . . . . . 744
13.9.1 Collecting information using the GUI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 745
13.9.2 Collecting logs using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 747
13.9.3 Uploading files to the Support Center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 748
13.10 Service Assistant Tool. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 750
Notices
This information was developed for products and services offered in the US. This material might be available
from IBM in other languages. However, you may be required to own a copy of the product or product version in
that language in order to access it.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user’s responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, MD-NC119, Armonk, NY 10504-1785, US
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you provide in any way it believes appropriate without
incurring any obligation to you.
The performance data and client examples cited are presented for illustrative purposes only. Actual
performance results may vary depending on specific configurations and operating conditions.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
Statements regarding IBM’s future direction or intent are subject to change or withdrawal without notice, and
represent goals and objectives only.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to actual people or business enterprises is entirely
coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are
provided “AS IS”, without warranty of any kind. IBM shall not be liable for any damages arising out of your use
of the sample programs.
The following terms are trademarks or registered trademarks of International Business Machines Corporation,
and might also be trademarks or registered trademarks in other countries.
AIX®, Bluemix®, DB2®, developerWorks®, DS4000®, DS8000®, Easy Tier®, FlashCopy®, HyperSwap®, IBM®, IBM FlashSystem®, IBM Spectrum™, IBM Spectrum Accelerate™, IBM Spectrum Control™, IBM Spectrum Protect™, IBM Spectrum Scale™, IBM Spectrum Storage™, IBM Spectrum Virtualize™, PowerHA®, Real-time Compression™, Redbooks®, Redbooks (logo)®, Storwize®, System Storage®, Tivoli®, XIV®
SoftLayer, and The Planet are trademarks or registered trademarks of SoftLayer, Inc., an IBM Company.
Intel, Intel Xeon, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks
of Intel Corporation or its subsidiaries in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.
Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its
affiliates.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other company, product, or service names may be trademarks or service marks of others.
Preface
This IBM® Redbooks® publication is a detailed technical guide to the IBM System Storage®
SAN Volume Controller, which is powered by IBM Spectrum™ Virtualize V8.1.
IBM SAN Volume Controller is a virtualization appliance solution that maps virtualized
volumes that are visible to hosts and applications to physical volumes on storage devices.
Each server within the storage area network (SAN) has its own set of virtual storage
addresses that are mapped to physical addresses. If the physical addresses change, the
server continues running by using the same virtual addresses that it had before. Therefore,
volumes or storage can be added or moved while the server is still running.
The IBM virtualization technology improves the management of information at the “block”
level in a network, which enables applications and servers to share storage devices on a
network.
Authors
This book was produced by a team of specialists from around the world working at the
International Technical Support Organization, San Jose Center.
Catarina Castro, Frank Enders, Giulio Fiscella, Dharmesh Kamdar, Paulo Tomiyoshi Takeda
Thanks to the following people for their contributions to this project:
Lee Sanders
Christopher Bulmer
Paul Cashman
Carlos Fuente
IBM Hursley, UK
Catarina Castro
IBM Manchester, UK
Navin Manohar
Terry Niemeyer
IBM Systems, US
Chris Saul
IBM Systems, US
Detlef Helmbrecht
IBM ATS, Germany
Special thanks to the Brocade Communications Systems staff in San Jose, California for their
support of this residency in terms of equipment and support in many areas:
Silviano Gaona
Sangam Racherla
Brian Steffler
Marcus Thordal
Brocade Communications Systems
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
[email protected]
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Summary of changes
This section describes the technical changes made in this edition of the book and in previous
editions. This edition might also include minor corrections and editorial changes that are not
identified.
Summary of Changes
for SG24-7933-06
for Implementing the IBM System Storage SAN Volume Controller with IBM Spectrum
Virtualize V8.1
as created or updated on February 28, 2018.
New information
New-look GUI
Hot Spare node
RAS line items
Changed information
Added new GUI windows throughout
The focus of this publication is virtualization at the disk layer, which is referred to as
block-level virtualization or the block aggregation layer. A description of file system
virtualization is beyond the intended scope of this book.
The Storage Networking Industry Association’s (SNIA) block aggregation model provides a
useful overview of the storage domain and the layers, as shown in Figure 1-1. It illustrates
several layers of a storage domain:
File
Block aggregation
Block subsystem layers
The model splits the block aggregation layer into three sublayers. Block aggregation can be
realized within hosts (servers), in the storage network (storage routers and storage
controllers), or in storage devices (intelligent disk arrays).
The IBM SAN Volume Controller is implemented as a clustered appliance in the storage
network layer. The IBM Storwize family is deployed as modular storage that provides
capabilities to virtualize its own internal storage and external storage.
The key concept of virtualization is to decouple the storage from the storage functions that
are required in the storage area network (SAN) environment.
Decoupling means abstracting the physical location of data from the logical representation of
the data. The virtualization engine presents logical entities to the user and internally manages
the process of mapping these entities to the actual location of the physical storage.
The actual mapping that is performed depends on the specific implementation, such as the
granularity of the mapping, which can range from a small fraction of a physical disk up to the
full capacity of a physical disk. A single block of information in this environment is identified by
its logical unit number (LUN), which is the physical disk, and an offset within that LUN, which
is known as a logical block address (LBA).
The term physical disk is used in this context to describe a piece of storage that might be
carved out of a Redundant Array of Independent Disks (RAID) array in the underlying disk
subsystem. Specific to the IBM Spectrum Virtualize implementation, the address space that is
presented as a logical entity is referred to as a volume, and the arrays of physical disks that
back it are referred to as managed disks (MDisks).
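The following minimal Python sketch illustrates this extent-based mapping idea. It is a conceptual illustration only: the extent size, class names, and example values are assumptions for this sketch and do not represent the internal data structures or defaults of IBM Spectrum Virtualize.

Example: Conceptual sketch of extent-based volume-to-MDisk mapping (illustrative Python)

# A volume is an ordered list of extents; each extent is backed by a range of
# blocks on a managed disk (MDisk) in a storage pool.
EXTENT_SIZE_BLOCKS = 2048  # illustrative extent size, in blocks

class Extent:
    def __init__(self, mdisk_id, start_lba):
        self.mdisk_id = mdisk_id    # which back-end managed disk holds the data
        self.start_lba = start_lba  # first block of this extent on that MDisk

class Volume:
    """Logical entity presented to the host; the host never sees the MDisks."""
    def __init__(self, name, extents):
        self.name = name
        self.extents = extents      # ordered list of Extent objects

    def map_lba(self, volume_lba):
        """Translate a host-visible LBA into (MDisk, LBA on that MDisk)."""
        index = volume_lba // EXTENT_SIZE_BLOCKS
        offset = volume_lba % EXTENT_SIZE_BLOCKS
        extent = self.extents[index]
        return extent.mdisk_id, extent.start_lba + offset

# A volume whose first extent is on MDisk 0 and whose second extent is on MDisk 3:
vol = Volume("vol0", [Extent(0, 40960), Extent(3, 0)])
print(vol.map_lba(10))    # -> (0, 40970): first extent, on MDisk 0
print(vol.map_lba(2500))  # -> (3, 452):   second extent, on MDisk 3

Because the host addresses only the volume, the extents behind it can be moved to other MDisks or storage pools without any change that is visible to the host, which is what makes nondisruptive data migration possible.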
The server and application are aware only of the logical entities, and they access these
entities by using a consistent interface that is provided by the virtualization layer.
The functionality of a volume that is presented to a server, such as expanding or reducing the
size of a volume, mirroring a volume, creating an IBM FlashCopy®, and thin provisioning, is
implemented in the virtualization layer. It does not rely in any way on the functionality that is
provided by the underlying disk subsystem. Data that is stored in a virtualized environment is
stored in a location-independent way, which enables a user to move or migrate data between
physical locations, which are referred to as storage pools.
The IBM SAN Volume Controller delivers all these functions in a homogeneous way on a
scalable and highly available software platform over any attached storage and to any attached
server.
You can see the importance of addressing the complexity of managing storage networks by
applying the total cost of ownership (TCO) metric to storage networks. Industry analyses
show that storage acquisition costs are only about 20% of the TCO. Most of the remaining
costs relate to managing the storage system.
But how much of the management of multiple systems, with separate interfaces, can be
handled as a single entity? In a non-virtualized storage environment, every system is an
“island” that must be managed separately.
Because IBM Spectrum Virtualize provides many functions, such as mirroring and IBM
FlashCopy, there is no need to acquire additional subsets of applications for each attached
disk subsystem that is virtualized by IBM Spectrum Virtualize.
Today, it is typical that open systems run at less than 50% of the usable capacity that is
provided by the RAID disk subsystems. A block-level virtualization solution, such as IBM
Spectrum Virtualize, can provide significant savings, increase the effective capacity of storage
systems up to five times, and decrease your need for floor space, power, and cooling.
The intent of this book is to cover the major software features and provide a brief summary of
supported hardware.
The node model 2145-SV1 features two Intel Xeon E5 v4 eight-core processors and 64
gigabytes (GB) of memory with options to increase the total amount of memory up to 256 GB.
The IBM SAN Volume Controller model 2145-SV1 features three 10-gigabit (Gb) Ethernet
ports that can be used for iSCSI connectivity and system management. The node model
2145-SV1 supports up to four I/O port adapters that provide up to sixteen 16 Gb Fibre
Channel ports or up to four 10 Gb Ethernet ports that can be configured for iSCSI or Fibre
Channel over Ethernet (FCoE).
In a clustered solution, the IBM SAN Volume Controller node model 2145-SV1 can support up
to 20 expansion enclosures.
The 2145-SV1 model is complemented with the following expansion enclosures (attachable
also to the 2145-DH8):
2145-12F holds up to 12 LFF 3.5” SAS-attached disk drives in 2U enclosures
2145-24F accommodates up to 24 SFF 2.5” SAS-attached disks in 2U enclosures
2145-92F adds up to 92 internal 3.5” SAS flash drives in 5U enclosures
Note: Model 2147 (including expansion enclosures) has identical hardware to Model 2145,
but is delivered with enterprise-level remote IBM support that offers these features:
Technical Advisors to proactively improve problem determination and communication
On-site and remote software installation and updates
Configuration support
Enhanced response times for high severity problems
Enhanced scalability with up to three Peripheral Component Interconnect Express (PCIe)
slots, which allow users to install up to three 4-port 8 Gbps FC host bus
adapters (HBAs), for a total of 12 ports. It supports one four-port 10 gigabit Ethernet (GbE)
card (iSCSI or FCoE) and one dual-port 12 Gbps serial-attached SCSI (SAS) card for
flash drive expansion unit attachment (model 2145-24F).
Improved Random Access Compression Engine (RACE) with the processing offloaded to
the secondary dedicated processor and using 36 GB of dedicated memory cache. At a
minimum, one Compression Accelerator card must be installed to support up to 200
compressed volumes; two Compression Accelerators allow up to 512 compressed volumes.
Optional 2U expansion enclosure 2145-24F with up to 24 flash drives (200, 400, 800, or
1600 GB).
Extended functionality of IBM Easy Tier® with a storage pool balancing mode within the same
tier. It moves or exchanges extents between highly utilized and low-utilized MDisks within
a storage pool, increasing the read and write performance of the volumes. This function is
enabled automatically in IBM SAN Volume Controller, and does not need any licenses.
The SVC cache rearchitecture splits the original single cache into upper and lower caches
of different sizes. Upper cache uses up to 256 megabytes (MB), and lower cache uses up
to 64 GB of installed memory allocated to both processors (if installed). Also, 36 GB of
memory is always allocated for Real-time Compression if enabled.
Near-instant prepare for FlashCopy because of the presence of the lower cache. Multiple
snapshots of the golden image now share cache data rather than keeping N separate copies.
Software changes:
– Visual and functional enhancements in the GUI, with changed menu layout and an
integrated performance meter on the main page.
– Implementation of Distributed RAID, which differs from traditional RAID arrays by
eliminating dedicated spare drives. Spare capacity is spread across disks, making the
reconstruction of a failed disk faster.
– Introduced software encryption enabled by IBM Spectrum Virtualize and using
the AES256-XTS algorithm. Encryption is enabled at the storage pool level. All newly
created volumes in such a pool are automatically encrypted. An encryption license with
Universal Serial Bus (USB) flash drives is required.
– Developed the Comprestimator tool, which is included in IBM Spectrum Virtualize
software. It provides statistics to estimate potential storage savings. Available from the
CLI, it does not need compression licenses and does not trigger any compression
process. It uses the same estimation algorithm as an external host-based application,
so results are similar.
– Enhanced GUI wizard for initial configuration of HyperSwap topology. IBM Spectrum
Virtualize now allows IP-attached quorum disks in HyperSwap system configuration.
– Increased the maximum number of iSCSI hosts attached to the system to 2048
(512 host iSCSI qualified names (IQNs) per I/O group) with a maximum of four iSCSI
sessions per SVC node (8 per I/O group).
– Improved and optimized read I/O performance in HyperSwap system configurations by
reading in parallel from the primary and secondary local volume copies. Both copies must be in
a synchronized state.
– Extends the support of VVols. Using IBM Spectrum Virtualize, you can manage
a one-to-one mapping of VM drives to IBM SAN Volume Controller volumes, which
eliminates the I/O contention of a single, shared volume (datastore).
– Customizable login banner. Using CLI commands, you can now define and show a
welcome message or important disclaimer to users on the login window. The banner is
shown in both the GUI and CLI login windows.
1.4 Summary
The use of storage virtualization is the foundation for a flexible and reliable storage solution
that helps enterprises to better align business and IT organizations by optimizing the storage
infrastructure and storage management to meet business demands.
IBM Spectrum Virtualize running on IBM SAN Volume Controller is a mature, ninth-generation
virtualization solution that uses open standards and complies with the SNIA storage model.
IBM SAN Volume Controller is an appliance-based, in-band block virtualization process in
which intelligence (including advanced storage functions) is ported from individual storage
devices to the storage network.
IBM Spectrum Virtualize can improve the usage of your storage resources, simplify storage
management, and improve the availability of business applications.
All the concepts included in this chapter are described in greater level of details in later
chapters.
One goal of the COMPASS research project was to create a system that was almost exclusively
composed of commercial off-the-shelf (COTS) standard parts. As with any enterprise-level
storage control system, it had to deliver a level of performance and availability that was
comparable to the highly optimized storage controllers of previous generations. The idea of
building a storage control system on a scalable cluster of lower-performance servers, rather
than on a monolithic two-node architecture, remains compelling.
COMPASS also had to address a major challenge for the heterogeneous open systems
environment, namely to reduce the complexity of managing storage on block devices.
The first documentation that covered this project was released to the public in 2003 in the
form of the IBM Systems Journal, Vol. 42, No. 2, 2003, “The software architecture of a SAN
storage control system,” by J. S. Glider, C. F. Fuente, and W. J. Scales. The article is available
at the following website:
http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5386853
The results of the COMPASS project defined the fundamentals for the product architecture.
The first release of IBM System Storage SAN Volume Controller was announced in July 2003.
Each of the following releases brought new and more powerful hardware nodes, which
approximately doubled the I/O performance and throughput of its predecessors, provided new
functionality, and offered more interoperability with new elements in host environments, disk
subsystems, and the storage area network (SAN).
The following major approaches are used today for the implementation of block-level
aggregation and virtualization:
Symmetric: In-band appliance
Virtualization splits the storage that is presented by the storage systems into smaller
chunks that are known as extents. These extents are then concatenated, by using various
policies, to make virtual disks (volumes). With symmetric virtualization, host systems can
be isolated from the physical storage. Advanced functions, such as data migration, can run
without the need to reconfigure the host.
With symmetric virtualization, the virtualization engine is the central configuration point for
the SAN. The virtualization engine directly controls access to the storage, and to the data
that is written to the storage. As a result, locking functions that provide data integrity and
advanced functions, such as cache and Copy Services, can be run in the virtualization
engine itself.
Therefore, the virtualization engine is a central point of control for device and advanced
function management. Symmetric virtualization enables you to build a firewall in the
storage network. Only the virtualization engine can grant access through the firewall.
Symmetric virtualization can have disadvantages. The main disadvantage that is
associated with symmetric virtualization is scalability. Scalability can cause poor
performance because all input/output (I/O) must flow through the virtualization engine.
To solve this problem, you can use an n-way cluster of virtualization engines that has
failover capacity.
You can scale the additional processor power, cache memory, and adapter bandwidth to
achieve the level of performance that you want. Additional memory and processing power
are needed to run advanced services, such as Copy Services and caching. The SVC uses
symmetric virtualization. Single virtualization engines, which are known as nodes, are
combined to create clusters. Each cluster can contain from two to eight nodes.
Asymmetric: Out-of-band or controller-based
With asymmetric virtualization, the virtualization engine is outside the data path and
performs a metadata-style service. The metadata server contains all of the mapping and
the locking tables, and the storage devices contain only data. In asymmetric virtual
storage networks, the data flow is separated from the control flow.
A separate network or SAN link is used for control purposes. Because the control flow is
separated from the data flow, I/O operations can use the full bandwidth of the SAN.
Asymmetric virtualization can have the following disadvantages:
– Data is at risk to increased security exposures, and the control network must be
protected with a firewall.
– Metadata can become complicated when files are distributed across several devices.
– Each host that accesses the SAN must know how to access and interpret the
metadata. Specific device drivers or agent software must therefore be running on each
of these hosts.
– The metadata server cannot run advanced functions, such as caching or Copy
Services, because it only “knows” about the metadata and not about the data itself.
The controller-based approach has high functionality, but it fails in terms of scalability or
upgradeability. Because of the nature of its design, no true decoupling occurs with this
approach, which becomes an issue for the lifecycle of this solution, such as with a controller.
Data migration issues and questions are challenging, such as how to reconnect the servers to
the new controller, and how to reconnect them online without any effect on your applications.
Be aware that with this approach, you not only replace a controller but also implicitly replace
your entire virtualization solution. In addition to replacing the hardware, updating, or
repurchasing the licenses for the virtualization feature, advanced copy functions, and so on,
might be necessary.
Only the fabric-based appliance solution provides an independent and scalable virtualization
platform that can provide enterprise-class copy services and that is open for future interfaces
and protocols. By using the fabric-based appliance solution, you can choose the disk
subsystems that best fit your requirements, and you are not locked into specific SAN
hardware.
For these reasons, IBM chose the SAN-based appliance approach with inline block
aggregation for the implementation of storage virtualization with IBM Spectrum Virtualize.
The IBM SAN Volume Controller includes the following key characteristics:
It is highly scalable, which provides an easy growth path because the clustered system
grows in pairs of nodes.
It is SAN interface-independent. It supports FC and FCoE and iSCSI, but it is also open for
future enhancements.
It is host-independent for fixed block-based Open Systems environments.
It is external storage RAID controller-independent, which provides a continuous and
ongoing process to qualify more types of controllers.
It can use internal disks that are attached to the nodes (flash drives) or disks in directly
attached expansion enclosures.
On the SAN storage provided by the disk subsystems, the IBM SAN Volume Controller offers
the following services:
Creates a single pool of storage
Provides logical unit virtualization
Manages logical volumes
Mirrors logical volumes
IBM SAN Volume Controller running IBM Spectrum Virtualize V8.1 also provides these
functions:
Large scalable cache
Copy Services
IBM FlashCopy (point-in-time copy) function, including thin-provisioned FlashCopy to
make multiple targets affordable
IBM Transparent Cloud Tiering function that allows the IBM SAN Volume Controller to
interact with Cloud Service Providers
Metro Mirror (synchronous copy)
Global Mirror (asynchronous copy)
Data migration
Space management (Thin Provisioning, Compression)
IBM Easy Tier to automatically migrate data between storage types of different
performance, based on disk workload
Encryption of externally attached storage
Supporting HyperSwap
Supporting VMware vSphere Virtual Volumes (VVols) and Microsoft ODX
Direct attachment of hosts
Hot Spare nodes, which provide a standby function for single or multiple nodes
The objectives of IBM Spectrum Virtualize are to manage storage resources in your IT
infrastructure, and to ensure that they are used to the advantage of your business. These
processes take place quickly, efficiently, and in real time, while avoiding increases in
administrative costs.
IBM Spectrum Virtualize is the core software engine of the whole family of IBM Storwize products (see Figure 2-2). The content of this book intentionally focuses on the deployment considerations for IBM SAN Volume Controller.
Terminology note: In this book, the terms IBM SAN Volume Controller and SVC are used
to refer to both models of the most recent products as the text applies similarly to both.
Typically, the hosts cannot see or operate on the same physical storage (logical unit number
(LUN)) from the RAID controller that is assigned to IBM SAN Volume Controller. If the same
LUNs are not shared, storage controllers can be shared between the SVC and direct host
access. The zoning capabilities of the SAN switch must be used to create distinct zones to
ensure that this rule is enforced. SAN fabrics can include standard FC, FCoE, iSCSI over
Ethernet, or possible future types.
Figure 2-3 shows a conceptual diagram of a storage system that uses the SVC. It shows
several hosts that are connected to a SAN fabric or local area network (LAN). In practical
implementations that have high-availability requirements (most of the target clients for the
SVC), the SAN fabric cloud represents a redundant SAN. A redundant SAN consists of a
fault-tolerant arrangement of two or more counterpart SANs, which provide alternative paths
for each SAN-attached device.
Both scenarios (the use of a single network and the use of two physically separate networks)
are supported for iSCSI-based and LAN-based access networks to the SAN Volume
Controller.
Redundant paths to volumes can be provided in both scenarios. For simplicity, Figure 2-3
shows only one SAN fabric and two zones: Host and storage. In a real environment, it is a
leading practice to use two redundant SAN fabrics. IBM SAN Volume Controller can be
connected to up to four fabrics.
A clustered system of IBM SAN Volume Controller nodes that are connected to the same
fabric presents logical disks or volumes to the hosts. These volumes are created from
managed LUNs or managed disks (MDisks) that are presented by the RAID disk subsystems.
As explained in 2.1.1, “IBM SAN Volume Controller architectural overview” on page 14, hosts
are not permitted to operate on the RAID LUNs directly. All data transfer happens through the
IBM SAN Volume Controller nodes.
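For illustration, the back-end resources that the system detects can be verified from the CLI before they are added to storage pools. This is a minimal sketch; the output naturally depends on your environment:
   lscontroller
   lsmdisk -filtervalue mode=unmanaged
The first command lists the back-end storage controllers that are visible to the clustered system. The second lists the detected LUNs that are not yet members of any storage pool.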
Additional information: For the most up-to-date information about features, benefits, and
specifications of the IBM SAN Volume Controller models, go to:
https://www.ibm.com/us-en/marketplace/san-volume-controller
The information in this book is valid at the time of writing and covers IBM Spectrum
Virtualize V8.1.0.0. However, as the IBM SAN Volume Controller matures, expect to see
new features and enhanced specifications.
I/O Ports and Management | 3x 10 Gb Ethernet ports for 10 GbE iSCSI connectivity and system management | 3x 1 Gb Ethernet ports for 1 GbE iSCSI connectivity and system management
USB Ports | 4 | 4
SAS Chain | 2 | 2
The following optional features are available for IBM SAN Volume Controller model SV1:
256 GB Cache Upgrade fully unlocked with code V8.1
Four port 16 Gb FC adapter card for 16 Gb FC connectivity
Four port 10 Gb Ethernet adapter card for 10 Gb iSCSI/FCoE connectivity
Compression accelerator card
Four port 12 Gb SAS expansion enclosure attachment card
The following optional features are available for IBM SAN Volume Controller model DH8:
Additional Processor with 32 GB Cache Upgrade
Four port 16 Gb FC adapter card for 16 Gb FC connectivity
Four port 10 Gb Ethernet adapter card for 10 Gb iSCSI/FCoE connectivity
Compression accelerator card
Four port 12 Gb SAS expansion enclosure attachment card
Important: IBM SAN Volume Controller nodes model 2145-SV1 and 2145-DH8 can
contain a 16 Gb FC or a 10 Gb Ethernet adapter, but only one 10 Gbps Ethernet adapter is
supported.
A comparison of the current and older models of the SVC is shown in Table 2-2. Expansion enclosures are not included in the list.
The IBM SAN Volume Controller expansion enclosure consists of an enclosure chassis and drives. Each enclosure contains two canisters that can be replaced and maintained independently. The IBM SAN Volume Controller supports three types of expansion enclosure: models 12F, 24F, and the 5U Dense Drawer.
The expansion enclosure model 12F features two expansion canisters and holds up to twelve
3.5-inch SAS drives in a 2U, 19-inch rack mount enclosure.
The expansion enclosure model 24F supports up to twenty-four internal flash, 2.5-inch SAS
drives or a combination of them. The expansion enclosure 24F also features two expansion
canisters in a 2U, 19-inch rack mount enclosure.
The Dense Expansion Drawer supports up to 92 3.5-inch drives in a 5U, 19-inch rack
mounted enclosure.
The SAN is zoned such that the application servers cannot see the back-end physical
storage. This configuration prevents any possible conflict between the IBM SAN Volume
Controller and the application servers that are trying to manage the back-end storage.
In the next topics, the terms IBM SAN Volume Controller and SVC are used to refer to both models of the IBM SAN Volume Controller product. The IBM SAN Volume Controller is based on the components that are described next.
2.2.1 Nodes
Each IBM SAN Volume Controller hardware unit is called a node. The node provides the
virtualization for a set of volumes, cache, and copy services functions. The SVC nodes are
deployed in pairs (cluster), and one or multiple pairs make up a clustered system or system. A
system can consist of 1 - 4 SVC node pairs.
One of the nodes within the system is known as the configuration node. The configuration
node manages the configuration activity for the system. If this node fails, the system chooses
a new node to become the configuration node.
Because the active nodes are installed in pairs, each node provides a failover function to its
partner node if a node fails.
A specific volume is always presented to a host server by a single I/O Group of the system.
The I/O Group can be changed.
When a host server performs I/O to one of its volumes, all the I/Os for a specific volume are
directed to one specific I/O Group in the system. Under normal conditions, the I/Os for that
specific volume are always processed by the same node within the I/O Group. This node is
referred to as the preferred node for this specific volume.
Both nodes of an I/O Group act as the preferred node for their own specific subset of the total
number of volumes that the I/O Group presents to the host servers. However, both nodes also
act as failover nodes for their respective partner node within the I/O Group. Therefore, a node
takes over the I/O workload from its partner node, if required.
In an SVC-based environment, the I/O handling for a volume can switch between the two
nodes of the I/O Group. For this reason, it is mandatory for servers that are connected
through FC to use multipath drivers to handle these failover situations.
The SVC I/O Groups are connected to the SAN so that all application servers that are
accessing volumes from this I/O Group have access to this group. Up to 512 host server
objects can be defined per I/O Group. The host server objects can access volumes that are
provided by this specific I/O Group.
If required, host servers can be mapped to more than one I/O Group within the SVC system.
Therefore, they can access volumes from separate I/O Groups. You can move volumes
between I/O Groups to redistribute the load between the I/O Groups. Modifying the I/O Group
that services the volume can be done concurrently with I/O operations if the host supports
nondisruptive volume moves.
It also requires a rescan at the host level to ensure that the multipathing driver is notified that
the allocation of the preferred node changed, and the ports by which the volume is accessed
changed. This modification can be done in the situation where one pair of nodes becomes
overused.
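A minimal CLI sketch of such a move follows. The volume and I/O Group names are hypothetical, and the host must support nondisruptive volume move for the change to be transparent:
   movevdisk -iogrp io_grp1 volume01
This command changes the caching I/O Group (and therefore the preferred node) for volume01 to io_grp1; a host-level rescan is still required, as described above.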
All configuration, monitoring, and service tasks are performed at the system level.
Configuration settings are replicated to all nodes in the system. To facilitate these tasks, a
management IP address is set for the system.
A process is provided to back up the system configuration data onto disk so that it can be
restored if there is a disaster. This method does not back up application data. Only the SVC
system configuration information is backed up.
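As a sketch, the backup can be triggered from the CLI and the resulting file located afterward; the exact file name and location can differ slightly by release:
   svcconfig backup
   lsdumps
The svcconfig backup command creates the configuration backup XML file (typically named svc.config.backup.xml) on the configuration node, and lsdumps lists the files that can then be copied off the system for safekeeping.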
For the purposes of remote data mirroring, two or more systems must form a partnership
before relationships between mirrored volumes are created.
For more information about the maximum configurations that apply to the system, I/O Group,
and nodes, go to:
https://www.ibm.com/support/docview.wss?uid=ssg1S1010644
Each dense drawer can hold up to 92 drives that are positioned in four rows of 14 and an additional three rows of 12 mounted drive assemblies. The two Secondary Expander Modules (SEMs) are centrally located in the chassis. One SEM addresses 54 drive ports, while the other addresses 38 drive ports.
The drive slots are numbered 1 - 14, starting from the left rear slot and working from left to
right, back to front.
Each canister in the dense drawer chassis features two SAS ports, numbered 1 and 2. The use of SAS port 1 is mandatory because the expansion enclosure must be attached to an SVC node or another expansion enclosure. SAS port 2 is optional because it is used only to attach more expansion enclosures.
Figure 2-5 shows a dense expansion drawer.
We added a second scale to Figure 2-6 that gives you an idea of how long it takes to access
the data in a scenario where a single CPU cycle takes one second. This scale gives you an
idea of the importance of future storage technologies closing or reducing the gap between
access times for data that is stored in cache/memory versus access times for data that is
stored on an external medium.
Since magnetic disks were first introduced by IBM in 1956 (in the Random Access Memory Accounting System, also known as the IBM 305 RAMAC), they have shown remarkable progress in capacity growth, form factor and size reduction, price (cost per GB), and reliability.
However, the number of I/Os that a disk can handle and the response time to process a single I/O have not improved at the same rate, although they certainly did improve. In real environments, you can expect from today's enterprise-class FC or serial-attached SCSI (SAS) disk up to 200 IOPS per disk, with an average response time (latency) of approximately 6 ms per I/O.
Today’s spinning disks continue to advance in capacity, up to several terabytes (TB), form
factor/footprint (8.89 cm (3.5 inches), 6.35 cm (2.5 inches), and 4.57 cm (1.8 inches)), and
price (cost per GB), but they are not getting much faster.
The limiting factor is the number of revolutions per minute (RPM) that a disk can perform
(approximately 15,000). This factor defines the time that is required to access a specific data
block on a rotating device. Small improvements likely will occur in the future. However, a
significant step, such as doubling the RPM (if technically even possible), inevitably has an
associated increase in power usage and price that will likely be an inhibitor.
Enterprise-class flash drives typically deliver 500,000 read and 300,000 write IOPS with
typical latencies of 50 µs for reads and 800 µs for writes. Their form factors of 4.57 cm (1.8
inches) /6.35 cm (2.5 inches)/8.89 cm (3.5 inches) and their interfaces (FC/SAS/SATA) make
them easy to integrate into existing disk shelves. The IOPS metrics significantly improve
when flash drives are consolidated in storage arrays (flash array). In this case, the read and
write IOPS are seen in millions for specific 4 KB data blocks.
Flash-drive market
The flash-drive storage market is rapidly evolving. The key differentiator among today’s
flash-drive products is not the storage medium, but the logic in the disk internal controllers.
The top priorities in today’s controller development are optimally handling what is referred to
as wear-out leveling, which defines the controller’s capability to ensure a device’s durability,
and closing the remarkable gap between read and write I/O performance.
Today’s flash-drive technology is only a first step into the world of high-performance persistent
semiconductor storage. A group of the approximately 10 most promising technologies is
collectively referred to as storage-class memory (SCM).
The most common SSD technology is multi-level cell (MLC). MLC SSDs are commonly found in consumer products such as portable electronic devices, but they are also strongly present in some enterprise storage products. Enterprise-class SSDs are built on mid- to high-endurance multi-level cell flash technology, mostly known as mainstream endurance SSD.
MLC SSDs store more than one bit of data per cell and feature wear leveling, which is the process of evenly spreading data across all memory cells on the SSD. This method helps to eliminate potential hotspots caused by repetitive write-erase cycles. SLC (single-level cell) SSDs store one bit of data per cell, which generally makes them faster.
To support particular business demands, IBM Spectrum Virtualize has qualified the use of Read Intensive (RI) SSDs for applications where read operations significantly outnumber writes. IBM Spectrum Virtualize presents new attributes for managing these disk drives in both the GUI and the CLI. The new function reports the write-endurance limits (in percentages) for each qualified RI drive that is installed in the system.
Read Intensive (RI) SSDs are available as an optional purchase for the IBM SAN Volume Controller and the IBM Storwize family.
For more information about Read Intensive SSDs and IBM Spectrum Virtualize, see Read
Intensive Flash Drives, REDP-5380.
Storage-class memory
SCM promises a massive improvement in performance (IOPS), areal density, cost, and energy efficiency compared to today's flash-drive technology. IBM Research is actively engaged in these new technologies.
When these technologies become a reality, it will fundamentally change the architecture of
today’s storage infrastructures.
The flash MDisks can then be placed into a single flash drive tier storage pool. High-workload
volumes can be manually selected and placed into the pool to gain the performance benefits
of flash drives.
For a more effective use of flash drives, place the flash drive MDisks into a multitiered storage
pool that is combined with HDD MDisks (generic_hdd tier). Then, once it is turned on, Easy
Tier automatically detects and migrates high-workload extents onto the solid-state MDisks.
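As a sketch of that configuration (the pool name, MDisk name, and tier identifier are illustrative and vary by release), the tier of an MDisk can be set and Easy Tier enabled on the pool from the CLI:
   chmdisk -tier tier0_flash mdisk5
   chmdiskgrp -easytier auto Pool0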
2.2.6 MDisks
The IBM SAN Volume Controller system and its I/O Groups view the storage that is presented
to the SAN by the back-end controllers as several disks or LUNs, which are known as
managed disks or MDisks. Because the SVC does not attempt to provide recovery from
physical disk failures within the back-end controllers, an MDisk often is provisioned from a
RAID array.
However, the application servers do not see the MDisks at all. Rather, they see several logical disks, which are known as virtual disks or volumes. These disks are presented by the SVC I/O Groups through the SAN (FC/FCoE) or LAN (iSCSI) to the servers. The MDisks are placed into storage pools where they are divided into several extents.
For more information about the total storage capacity that is manageable per system
regarding the selection of extents, go to:
https://www.ibm.com/support/docview.wss?uid=ssg1S1010644
A volume is host-accessible storage that was provisioned out of one storage pool, or, if it is a
mirrored volume, out of two storage pools.
The maximum size of an MDisk is 1 PiB. An IBM SAN Volume Controller system supports up to 4096 MDisks (including internal RAID arrays). When an MDisk is presented to the IBM SAN Volume Controller, it is in one of the following modes (a short CLI sketch follows at the end of this section):
Unmanaged MDisk
An MDisk is reported as unmanaged when it is not a member of any storage pool. An
unmanaged MDisk is not associated with any volumes and has no metadata that is stored
on it. The SVC does not write to an MDisk that is in unmanaged mode, except when it
attempts to change the mode of the MDisk to one of the other modes. The SVC can see
the resource, but the resource is not assigned to a storage pool.
Managed MDisk
Managed mode MDisks are always members of a storage pool, and they contribute
extents to the storage pool. Volumes (if not operated in image mode) are created from
these extents. MDisks that are operating in managed mode might have metadata extents
that are allocated from them and can be used as quorum disks. This mode is the most
common and normal mode for an MDisk.
Image mode MDisk
Image mode provides a direct block-for-block translation from the MDisk to the volume by
using virtualization. This mode is provided to satisfy the following major usage scenarios:
– Image mode enables the virtualization of MDisks that already contain data that was
written directly and not through an SVC. Rather, it was created by a direct-connected
host.
This mode enables a client to insert the SVC into the data path of an existing storage
volume or LUN with minimal downtime. For more information about the data migration
process, see Chapter 9, “Storage migration” on page 391.
Image mode enables a volume that is managed by the SVC to be used with the native
copy services function that is provided by the underlying RAID controller. To avoid the
loss of data integrity when the SVC is used in this way, it is important that you disable
the SVC cache for the volume.
– The SVC provides the ability to migrate to image mode, which enables the SVC to
export volumes and access them directly from a host without the SVC in the path.
Each MDisk that is presented from an external disk controller has an online path count, which is the number of nodes that have access to that MDisk. The maximum count is the maximum number of paths that is detected at any point by the system. The current count is what the system sees at this point. A current value that is less than the maximum can indicate that SAN fabric paths were lost.
SSDs that are in the SVC 2145-CG8 or flash space, which are presented by the external
Flash Enclosures of the SVC 2145-DH8 or SV1 nodes, are presented to the cluster as
MDisks. To determine whether the selected MDisk is an SSD/Flash, click the link on the
MDisk name to display the Viewing MDisk Details window.
If the selected MDisk is an SSD/Flash that is on an SVC, the Viewing MDisk Details
window displays values for the Node ID, Node Name, and Node Location attributes.
Alternatively, you can select Work with Managed Disks → Disk Controller Systems
from the portfolio. On the Viewing Disk Controller window, you can match the MDisk to the
disk controller system that has the corresponding values for those attributes.
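To tie these modes together, a hedged CLI sketch of bringing a new MDisk under management follows (the MDisk and pool names are hypothetical):
   detectmdisk
   lsmdisk -filtervalue mode=unmanaged
   addmdisk -mdisk mdisk7 Pool0
detectmdisk rescans the Fibre Channel network for new MDisks, the filtered lsmdisk shows the MDisks that are still unmanaged, and addmdisk moves mdisk7 into managed mode as a member of pool Pool0. Remember that adding an MDisk that contains data in this way destroys that data; use image mode volumes instead.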
2.2.7 Cache
The primary benefit of storage cache is to improve I/O response time. Reads and writes to a
magnetic disk drive experience seek time and latency time at the drive level, which can result
in 1 ms - 10 ms of response time (for an enterprise-class disk).
The SVC provides a flexible cache model, and the node’s memory can be used as read or
write cache.
Cache is allocated in 4 KiB segments. A segment holds part of one track. A track is the unit of
locking and destaging granularity in the cache. The cache virtual track size is 32 KiB (eight
segments). A track might be only partially populated with valid pages. The SVC combines writes up to a 256 KiB track size before destage if the writes are in the same tracks. For example, if 4 KiB is written into a track and another 4 KiB is written to another location in the same track, the two writes are destaged together.
Therefore, the blocks that are written from the SVC to the disk subsystem can be any size
between 512 bytes up to 256 KiB. The large cache and advanced cache management
algorithms allow it to improve on the performance of many types of underlying disk
technologies. The SVC’s capability to manage, in the background, the destaging operations
that are incurred by writes (in addition to still supporting full data integrity) assists with SVC’s
capability in achieving good database performance.
Figure 2-7 shows the separation of the upper and lower cache.
The upper cache delivers the following functions, which enable the SVC to streamline data
write performance:
Provides fast write response times to the host by being as high up in the I/O stack as
possible
Provides partitioning
Combined, the two levels of cache also deliver the following functions:
Pins data when the LUN goes offline
Provides enhanced statistics for IBM Tivoli® Storage Productivity Center, and maintains
compatibility with an earlier version
Provides trace for debugging
Reports medium errors
Resynchronizes cache correctly and provides the atomic write functionality
Ensures that other partitions continue operation when one partition becomes 100% full of
pinned data
Supports fast-write (two-way and one-way), flush-through, and write-through
Integrates with T3 recovery procedures
Supports two-way operation
Supports none, read-only, and read/write as user-exposed caching policies
Supports flush-when-idle
Supports expanding cache as more memory becomes available to the platform
Supports credit throttling to avoid I/O skew and offer fairness/balanced I/O between the
two nodes of the I/O Group
Enables switching of the preferred node without needing to move volumes between I/O
Groups
Depending on the size, age, and technology level of the disk storage system, the total
available cache in the IBM SAN Volume Controller nodes can be larger, smaller, or about the
same as the cache that is associated with the disk storage.
Because hits to the cache can occur in either the SVC or the disk controller level of the overall
system, the system as a whole can take advantage of the larger amount of cache wherever
the cache is located. Therefore, if the storage controller level of the cache has the greater
capacity, expect hits to this cache to occur, in addition to hits in the SVC cache.
In addition, regardless of their relative capacities, both levels of cache tend to play an
important role in enabling sequentially organized data to flow smoothly through the system.
The SVC cannot increase the throughput potential of the underlying disks in all cases
because this increase depends on both the underlying storage technology and the degree to
which the workload exhibits hotspots or sensitivity to cache size or cache algorithms.
SVC V7.3 introduced a major upgrade to the cache code and in association with 2145-DH8
hardware it provided an additional cache capacity upgrade. A base SVC node configuration
included 32 GB of cache. Adding the second processor and cache upgrade for Real-time
Compression (RtC) took a single node to a total of 64 GB of cache. A single I/O Group with
support for RtC contained 128 GB of cache, whereas an eight node SVC system with a
maximum cache configuration contained a total of 512 GB of cache.
These limits have been enhanced with the 2145-SV1 appliance and SVC V8.1. Before this release, the SVC memory manager (PLMM) could address only 64 GB of memory. In V8.1, the underlying PLMM has been rewritten and the structure size increased. The cache size can now be upgraded to 256 GB, and the whole memory can now be used. However, the write cache is still limited to a maximum of 12 GB and the compression cache to a maximum of 34 GB. The remaining installed cache is simply used as read cache (including allocation for features like FlashCopy, Global or Metro Mirror, and so on).
Important: When upgrading to a V8.1 system where more than 64 GB of physical memory is already installed (but not used), the error message “1199 Detected hardware needs activation” appears in the GUI event log after the upgrade (and error code 0x841 is returned by the lseventlog command in the CLI).
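As a sketch, unfixed entries such as this one can be reviewed from the CLI (the flag value shown is illustrative):
   lseventlog -fixed no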
The nodes are split into groups where the remaining nodes in each group can communicate
with each other, but not with the other group of nodes that were formerly part of the system. In
this situation, some nodes must stop operating and processing I/O requests from hosts to
preserve data integrity while maintaining data access. If a group contains less than half the
nodes that were active in the system, the nodes in that group stop operating and processing
I/O requests from hosts.
It is possible for a system to split into two groups, with each group containing half the original
number of nodes in the system. A quorum disk determines which group of nodes stops
operating and processing I/O requests. In this tie-break situation, the first group of nodes that
accesses the quorum disk is marked as the owner of the quorum disk. As a result, the owner
continues to operate as the system, handling all I/O requests.
If the other group of nodes cannot access the quorum disk, or finds the quorum disk is owned
by another group of nodes, it stops operating as the system and does not handle I/O
requests. A system can have only one active quorum disk used for a tie-break situation.
However, the system uses three quorum disks to record a backup of system configuration
data to be used if there is a disaster. The system automatically selects one active quorum
disk from these three disks.
The other quorum disk candidates provide redundancy if the active quorum disk fails before a
system is partitioned. To avoid the possibility of losing all of the quorum disk candidates with a
single failure, assign quorum disk candidates on multiple storage systems.
Quorum disk placement: If possible, the SVC places the quorum candidates on separate
disk subsystems. However, after the quorum disk is selected, no attempt is made to ensure
that the other quorum candidates are presented through separate disk subsystems.
You can list the quorum disk candidates and the active quorum disk in a system by using the
lsquorum command.
When the set of quorum disk candidates is chosen, it is fixed. However, a new quorum disk
candidate can be chosen in one of the following conditions:
When the administrator requests that a specific MDisk becomes a quorum disk by using
the chquorum command
When an MDisk that is a quorum disk is deleted from a storage pool
When an MDisk that is a quorum disk changes to image mode
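A short sketch that uses these commands follows; the MDisk name and quorum index are hypothetical:
   lsquorum
   chquorum -mdisk mdisk3 2
The first command shows the three quorum disk candidates and which one is active; the second asks the system to use mdisk3 as quorum candidate index 2.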
For disaster recovery purposes, a system must be regarded as a single entity, so the system and the quorum disk must be colocated.
Special considerations are required for the placement of the active quorum disk for a
stretched or split cluster and split I/O Group configurations. For more information, see IBM
Knowledge Center.
Important: Running an SVC system without a quorum disk can seriously affect your
operation. A lack of available quorum disks for storing metadata prevents any migration
operation (including a forced MDisk delete).
Mirrored volumes can be taken offline if no quorum disk is available. This behavior occurs
because the synchronization status for mirrored volumes is recorded on the quorum disk.
During the normal operation of the system, the nodes communicate with each other. If a node
is idle for a few seconds, a heartbeat signal is sent to ensure connectivity with the system. If a
node fails for any reason, the workload that is intended for the node is taken over by another
node until the failed node is restarted and readmitted into the system (which happens
automatically).
If the Licensed Internal Code on a node becomes corrupted, which results in a failure, the
workload is transferred to another node. The code on the failed node is repaired, and the
node is readmitted into the system (which is an automatic process).
IP quorum configuration
In a stretched configuration or HyperSwap configuration, you must use a third, independent
site to house quorum devices. To use a quorum disk as the quorum device, this third site must
use Fibre Channel or IP connectivity together with an external storage system. In a local environment, no extra hardware or networking, such as Fibre Channel or SAS-attached storage, is required beyond what is normally provisioned within a system.
To use an IP-based quorum application as the quorum device for the third site, no Fibre
Channel connectivity is used. Java applications are run on hosts at the third site. However,
there are strict requirements on the IP network, and some disadvantages with using IP
quorum applications.
Unlike quorum disks, all IP quorum applications must be reconfigured and redeployed to
hosts when certain aspects of the system configuration change. These aspects include
adding or removing a node from the system, or when node service IP addresses are
changed.
Even with IP quorum applications at the third site, quorum disks at site one and site two are
required because they are used to store metadata. To provide quorum resolution, use the
mkquorumapp command to generate a Java application that is copied from the system and run
on a host at a third site. The maximum number of applications that can be deployed is five.
Currently, supported Java runtime environments (JREs) are IBM Java 7.1 and IBM Java 8.
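A hedged sketch of that deployment flow follows; the generated file name is the usual one but can vary by release:
   mkquorumapp
The generated Java application (typically ip_quorum.jar) is then copied from the system to a host at the third site and started there:
   java -jar ip_quorum.jar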
At any point, an MDisk can be a member of only one storage pool, except for image mode volumes.
Figure 2-8 shows the relationships of the SVC entities to each other.
(Figure 2-8 labels: Storage_Pool_01, Storage_Pool_02, Pool_SSDN7, Pool_SSDN8, MDisks MD1 - MD5, and SSDs.)
Each MDisk in the storage pool is divided into several extents. The size of the extent is
selected by the administrator when the storage pool is created, and cannot be changed later.
The size of the extent is 16 MiB - 8192 MiB.
It is a preferred practice to use the same extent size for all storage pools in a system. This
approach is a prerequisite for supporting volume migration between two storage pools. If the
storage pool extent sizes are not the same, you must use volume mirroring to copy volumes
between pools.
The SVC limits the number of extents in a system to 2^22 (approximately 4 million). Because the number of addressable extents is limited, the total capacity of an SVC system depends on the extent size that is chosen by the SVC administrator.
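As a worked example of that dependency: with 1024 MiB (1 GiB) extents, 2^22 extents address approximately 4 PiB, whereas the maximum 8192 MiB (8 GiB) extent size raises the addressable capacity to approximately 32 PiB.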
2.2.11 Volumes
Volumes are logical disks that are presented to the host or application servers by the SVC.
The hosts cannot see the MDisks. They can see only the logical volumes that are created
from combining extents from a storage pool.
Sequential
A sequential volume is where the extents are allocated one after the other, from one
MDisk to the next MDisk (Figure 2-10).
Image mode
Image mode volumes (Figure 2-11) are special volumes that have a direct relationship
with one MDisk. The most common use case of image volumes is a data migration from
your old (typically non-virtualized) storage to the SVC-based virtualized infrastructure.
When the image mode volume is created, a direct mapping is made between extents that
are on the MDisk and the extents that are on the volume. The logical block address (LBA)
x on the MDisk is the same as the LBA x on the volume, which ensures that the data on
the MDisk is preserved as it is brought into the clustered system.
Some virtualization functions are not available for image mode volumes, so it is often useful to
migrate the volume into a new storage pool. After the migration completion, the MDisk
becomes a managed MDisk.
If you add an MDisk that contains historical data to a storage pool, all data on the MDisk is lost. Ensure that you create image mode volumes from MDisks that contain data, rather than adding those MDisks directly to storage pools.
Easy Tier monitors the host I/O activity and latency on the extents of all volumes with the
Easy Tier function that is turned on in a multitier storage pool over a 24-hour period.
Next, it creates an extent migration plan that is based on this activity, and then dynamically
moves high-activity or hot extents to a higher disk tier within the storage pool. It also moves
extents whose activity dropped off or cooled down from the high-tier MDisks back to a
lower-tiered MDisk.
The automatic load balancing function is enabled by default on each volume, and cannot be
turned off using the GUI. This load balancing feature is not considered to be an Easy Tier
function, although it uses the same principles.
The IBM Easy Tier function can make it more appropriate to use smaller storage pool extent
sizes. The usage statistics file can be offloaded from the SVC nodes. Then, you can use IBM
Storage Tier Advisor Tool (STAT) to create a summary report. STAT is available on the web at
no initial cost at the following link:
http://www.ibm.com/support/docview.wss?uid=ssg1S4000935
A more detailed description of Easy Tier is provided in Chapter 10, “Advanced features for
storage efficiency” on page 407.
2.2.13 Hosts
Volumes can be mapped to a host to allow access for a specific server to a set of volumes. A
host within the SVC is a collection of host bus adapter (HBA) worldwide port names
(WWPNs) or iSCSI-qualified names (IQNs) that are defined on the specific server.
Note: iSCSI names are internally identified by “fake” WWPNs, or WWPNs that are
generated by the SVC. Volumes can be mapped to multiple hosts, for example, a volume
that is accessed by multiple hosts of a server system.
iSCSI is an alternative way of attaching hosts. In addition, starting with SVC V7.7, back-end storage can be attached by using iSCSI. This configuration is very useful for migration from non-Fibre-Channel-based environments to the new virtualized solution.
Node failover can be handled without having a multipath driver that is installed on the iSCSI
server. An iSCSI-attached server can reconnect after a node failover to the original target
IP address, which is now presented by the partner node. To protect the server against link
failures in the network or HBA failures, the use of a multipath driver is mandatory.
Volumes are LUN-masked to the host’s HBA WWPNs by a process called host mapping.
Mapping a volume to the host makes it accessible to the WWPNs or IQNs that are configured
on the host object. For a SCSI over Ethernet connection, the IQN identifies the iSCSI target
(destination) adapter. Host objects can have IQNs and WWPNs.
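A minimal CLI sketch of defining a host object and mapping a volume to it follows; the WWPN and object names are hypothetical:
   mkhost -name appserver01 -fcwwpn 2100000E1E30ACFC
   mkvdiskhostmap -host appserver01 volume01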
2.2.15 RAID
When planning your network, consideration must be given to the type of RAID configuration.
The IBM SAN Volume Controller supports either the traditional array configuration or the
distributed array.
An array can contain 2 - 16 drives; several arrays create the capacity for a pool. For
redundancy, spare drives (“hot spares”) are allocated to assume read/write operations if any
of the other drives fail. The rest of the time, the spare drives are idle and do not process
requests for the system.
When an array member drive fails, the system automatically replaces the failed member with
a hot spare drive and rebuilds the array to restore its redundancy. Candidate and spare drives
can be manually exchanged with array members.
Distributed array configurations can contain 4 - 128 drives. Distributed arrays remove the
need for separate drives that are idle until a failure occurs. Rather than allocating one or more
drives as spares, the spare capacity is distributed over specific rebuild areas across all the
member drives. Data can be copied faster to the rebuild area and redundancy is restored
much more rapidly. Additionally, as the rebuild progresses, the performance of the pool is
more uniform because all of the available drives are used for every volume extent.
After the failed drive is replaced, data is copied back to the drive from the distributed spare
capacity. Unlike hot spare drives, read/write requests are processed on other parts of the
drive that are not being used as rebuild areas. The number of rebuild areas is based on the
width of the array.
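As a sketch, a distributed array can be created in an existing pool from the CLI; the RAID level, drive class, drive count, and pool name are illustrative:
   mkdistributedarray -level raid6 -driveclass 0 -drivecount 12 -rebuildareas 1 Pool0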
2.2.16 Encryption
The IBM SAN Volume Controller provides optional encryption of data at rest, which protects
against the potential exposure of sensitive user data and user metadata that is stored on
discarded, lost, or stolen storage devices. Encryption of system data and system metadata is
not required, so system data and metadata are not encrypted.
Planning for encryption involves purchasing a licensed function and then activating and
enabling the function on the system.
To encrypt data that is stored on drives, the nodes capable of encryption must be licensed
and configured to use encryption. When encryption is activated and enabled on the system,
valid encryption keys must be present on the system when the system unlocks the drives or
the user generates a new key.
In IBM Spectrum Virtualize V7.4, hardware encryption was introduced, with software
encryption option introduced in V7.6. Encryption keys could be either managed by IBM
Security Key Lifecycle Manager (SKLM) or stored on USB flash drives attached to a minimum
of one of the nodes. V8.1 now also allows a combination of SKLM and USB key repositories.
IBM Security Key Lifecycle Manager is an IBM solution that provides the infrastructure and processes to locally create, distribute, back up, and manage the lifecycle of encryption keys and certificates. Before activating and enabling encryption, you must determine the method of accessing key information during times when the system requires an encryption key to be present.
Data encryption is protected by the Advanced Encryption Standard (AES) algorithm that uses
a 256-bit symmetric encryption key in XTS mode, as defined in the Institute of Electrical and
Electronics Engineers (IEEE) 1619-2007 standard as XTS-AES-256. That data encryption
key is itself protected by a 256-bit AES key wrap when stored in non-volatile form.
Because data security and encryption play a significant role in today’s storage environments, this book provides more details in Chapter 12, “Encryption” on page 633.
2.2.17 iSCSI
iSCSI is an alternative means of attaching hosts and external storage controllers to the IBM
SAN Volume Controller.
The iSCSI function is a software function that is provided by the IBM Spectrum Virtualize
code, not hardware. In V7.7, IBM introduced software capabilities to allow the underlying
virtualized storage to attach to IBM SAN Volume Controller using iSCSI protocol.
iSCSI protocol allows the transport of SCSI commands and data over an Internet Protocol
network, which is based on IP routers and Ethernet switches. iSCSI is a block-level protocol
that encapsulates SCSI commands. Therefore, it uses an existing IP network rather than
Fibre Channel infrastructure.
The major functions of iSCSI include encapsulation and the reliable delivery of CDB
transactions between initiators and targets through the Internet Protocol network, especially
over a potentially unreliable IP network.
Every iSCSI node in the network must have an iSCSI name and address:
An iSCSI name is a location-independent, permanent identifier for an iSCSI node. An
iSCSI node has one iSCSI name, which stays constant for the life of the node. The terms
initiator name and target name also refer to an iSCSI name.
An iSCSI address specifies not only the iSCSI name of an iSCSI node, but a location of
that node. The address consists of a host name or IP address, a TCP port number (for the
target), and the iSCSI name of the node. An iSCSI node can have any number of
addresses, which can change at any time, particularly if they are assigned by way of
Dynamic Host Configuration Protocol (DHCP). An SVC node represents an iSCSI node
and provides statically allocated IP addresses.
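A hedged sketch of the related CLI steps follows: assigning an iSCSI IP address to a node Ethernet port and defining an iSCSI-attached host. The addresses, port ID, and IQN are hypothetical:
   cfgportip -node 1 -ip 192.168.10.21 -mask 255.255.255.0 -gw 192.168.10.1 1
   mkhost -name iscsihost01 -iscsiname iqn.1994-05.com.redhat:client01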
IBM Real-time Compression provides the following benefits:
Compression for active primary data. IBM Real-time Compression can be used with active
primary data.
Compression for replicated/mirrored data. Remote volume copies can be compressed in
addition to the volumes at the primary storage tier. This process reduces storage
requirements in Metro Mirror and Global Mirror destination volumes as well.
No changes to the existing environment are required. IBM Real-time Compression is part
of the storage system.
Overall savings in operational expenses. More data is stored in a rack space, so fewer
storage expansion enclosures are required to store a data set. This reduced rack space
has the following benefits:
– Reduced power and cooling requirements. More data is stored in a system, requiring
less power and cooling per gigabyte or used capacity.
– Reduced software licensing for additional functions in the system. More data stored per
enclosure reduces the overall spending on licensing.
Disk space savings are immediate. The space reduction occurs when the host writes the
data. This process is unlike other compression solutions, in which some or all of the
reduction is realized only after a post-process compression batch job is run.
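For example, a compressed, thin-provisioned volume might be created as follows (the pool, size, and name are illustrative; -rsize sets the initially allocated real capacity):
   mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 500 -unit gb -rsize 2% -autoexpand -compressed -name compvol01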
2.2.19 IP replication
IP replication was introduced in V7.2 and allows data replication between IBM Spectrum Virtualize family members. IP replication uses the IP-based ports of the cluster nodes.
IP replication function is transparent to servers and applications in the same way that
traditional FC-based mirroring is. All remote mirroring modes (Metro Mirror, Global Mirror, and
Global Mirror with changed volumes) are supported.
The configuration of the system is straightforward and IBM Storwize family systems normally
find each other in the network and can be selected from the GUI.
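A sketch of creating an IP partnership from the CLI follows; the remote cluster IP address and bandwidth values are hypothetical:
   mkippartnership -type ipv4 -clusterip 10.10.10.50 -linkbandwidthmbits 100 -backgroundcopyrate 50
A matching partnership must also be created on the remote system so that the partnership becomes fully configured.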
IP connections that are used for replication can have long latency (the time to transmit a signal from one end to the other), which can be caused by distance or by many “hops” between switches and other appliances in the network. Traditional replication solutions transmit data, wait for a response, and then transmit more data, which can result in network utilization as low as 20% (based on IBM measurements). This scenario gets worse as the latency increases.
Bridgeworks SANSlide technology, which is integrated with the IBM Storwize family, requires
no separate appliances and so requires no additional cost and no configuration steps. It uses
artificial intelligence (AI) technology to transmit multiple data streams in parallel, adjusting
automatically to changing network environments and workloads.
SANSlide improves network bandwidth utilization up to 3x. Therefore, customers can deploy
a less costly network infrastructure, or take advantage of faster data transfer to speed
replication cycles, improve remote data currency, and enjoy faster recovery.
Starting with V6.3 (now IBM Spectrum Virtualize), copy services functions are implemented
within a single IBM SAN Volume Controller or between multiple members of the IBM
Spectrum Virtualize family.
The copy services layer sits above and operates independently of the function or
characteristics of the underlying disk subsystems used to provide storage resources to an
IBM SAN Volume Controller.
Synchronous remote copy ensures that updates are committed at both the primary and the
secondary volumes before the application considers the updates complete. Therefore, the
secondary volume is fully up to date if it is needed in a failover. However, the application is
fully exposed to the latency and bandwidth limitations of the communication link to the
secondary volume. In a truly remote situation, this extra latency can have a significant
adverse effect on application performance.
Special configuration guidelines exist for SAN fabrics and IP networks that are used for data replication. Consider the distance and the available bandwidth of the intersite links.
A function of Global Mirror designed for low bandwidth has been introduced in IBM Spectrum Virtualize. It uses change volumes that are associated with the primary and secondary volumes. These change volumes record changes to the remote copy volumes; a FlashCopy relationship exists between the primary volume and its change volume, and between the secondary volume and its change volume. This function is called Global Mirror cycling mode.
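A hedged sketch of creating such a relationship follows. The volume, change volume, partner system, and relationship names are hypothetical, and the relationship typically must be stopped before its cycling mode is changed:
   mkrcrelationship -master vol01 -aux vol01_dr -cluster remote_system -global -name rcrel0
   chrcrelationship -masterchange vol01_cv rcrel0
   chrcrelationship -cyclingmode multi rcrel0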
Figure 2-12 shows an example of this function where you can see the relationship between
volumes and change volumes.
In asynchronous remote copy, the application acknowledges that the write is complete before
the write is committed at the secondary volume. Therefore, on a failover, certain updates
(data) might be missing at the secondary volume. The application must have an external
mechanism for recovering the missing updates, if possible. This mechanism can involve user
intervention. Recovery on the secondary site involves starting the application on this recent
backup, and then rolling forward or backward to the most recent commit point.
FlashCopy
FlashCopy is sometimes described as an instance of a time-zero (T0) copy or a point-in-time
(PiT) copy technology.
FlashCopy can be performed on multiple source and target volumes. FlashCopy allows the
management operations to be coordinated so that a common single point in time is chosen
for copying target volumes from their respective source volumes.
With IBM Spectrum Virtualize, multiple target volumes can undergo FlashCopy from the same
source volume. This capability can be used to create images from separate points in time for
the source volume, and to create multiple images from a source volume at a common point in
time. Source and target volumes can be thin-provisioned volumes.
Reverse FlashCopy enables target volumes to become restore points for the source volume
without breaking the FlashCopy relationship, and without waiting for the original copy
operation to complete. IBM Spectrum Virtualize supports multiple targets, and therefore
multiple rollback points.
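As a minimal sketch, a FlashCopy mapping between two existing, equally sized volumes can be created and started as follows (names and copy rate are illustrative):
   mkfcmap -source vol01 -target vol01_snap -copyrate 50 -name fcmap_vol01
   startfcmap -prep fcmap_vol01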
The Transparent Cloud Tiering function helps organizations to reduce costs related to power
and cooling when off-site data protection is required to send sensitive data out of the main
site.
Transparent Cloud Tiering uses IBM FlashCopy techniques that provide full and incremental
snapshots of one or more volumes. Snapshots are encrypted and compressed before being
uploaded to the cloud. Reverse operations are also supported within that function. When a
set of data is transferred out to cloud, the volume snapshot is stored as object storage.
IBM Cloud Object Storage uses an innovative, cost-effective approach to store large amounts of unstructured data, and delivers mechanisms for security, high availability, and reliability.
The management GUI provides an easy-to-use initial setup, advanced security settings, and audit logs that record all backup and restore operations to the cloud.
Resources on the clustered system act as highly available versions of unclustered resources.
If a node (an individual computer) in the system is unavailable or too busy to respond to a
request for a resource, the request is passed transparently to another node that can process
the request. The clients are unaware of the exact locations of the resources that they use.
The SVC is a collection of up to eight nodes, which are added in pairs that are known as I/O
Groups. These nodes are managed as a set (system), and they present a single point of
control to the administrator for configuration and service activity.
The eight-node limit for an SVC system is a limitation that is imposed by the Licensed Internal
Code, and not a limit of the underlying architecture. Larger system configurations might be
available in the future.
Although the SVC code is based on a purpose-optimized Linux kernel, the clustered system
feature is not based on Linux clustering code. The clustered system software within the SVC,
that is, the event manager cluster framework, is based on the outcome of the COMPASS
research project. It is the key element that isolates the SVC application from the underlying
hardware nodes.
The clustered system software makes the code portable. It provides the means to keep the
single instances of the SVC code that are running on separate systems’ nodes in sync.
Therefore, restarting nodes during a code upgrade, adding new nodes, removing old nodes
from a system, or failing nodes cannot affect the SVC’s availability.
All active nodes of a system must know that they are members of the system. This knowledge
is especially important in situations where it is key to have a solid mechanism to decide which
nodes form the active system, such as the split-brain scenario where single nodes lose
contact with other nodes. A worst case scenario is a system that splits into two separate
systems.
Within an SVC system, the voting set and a quorum disk are responsible for the integrity of
the system. If nodes are added to a system, they are added to the voting set. If nodes are
removed, they are removed quickly from the voting set. Over time, the voting set and the
nodes in the system can completely change so that the system migrates onto a separate set
of nodes from the set on which it started.
The SVC clustered system implements a dynamic quorum. Following a loss of nodes, if the
system can continue to operate, it adjusts the quorum requirement so that further node failure
can be tolerated.
The node with the lowest Node Unique ID in a system becomes the boss node for the group of nodes. It determines (from the quorum rules) whether the nodes can operate as the system. This node also presents a maximum of two cluster IP addresses on one or both of its Ethernet ports to allow access for system management.
Stretched Clusters are considered high availability (HA) solutions because both sites work as
instances of the production environment (there is no standby location). Combined with
application and infrastructure layers of redundancy, Stretched Clusters can provide enough
protection for data that requires availability and resiliency.
When the IBM SAN Volume Controller was first introduced, the maximum supported distance
between nodes within an I/O Group was 100 meters. With the evolution of code and
introduction of new features, IBM SAN Volume Controller V5.1 introduced support for the
Stretched Cluster configuration. In this configuration, nodes within an I/O Group can be
separated by a distance of up to 10 kilometers (km) using specific configurations.
Within IBM Spectrum Virtualize V7.5, the site awareness concept has been extended to
hosts. This change enables more efficiency for host I/O traffic through the SAN, and an easier
host path management.
IBM Spectrum Virtualize V7.6 introduces a new feature for stretched systems, the IP Quorum
application. Using an IP-based quorum application as the quorum device for the third site, no
Fibre Channel connectivity is required. Java applications run on hosts at the third site.
However, there are strict requirements on the IP network with using IP quorum applications.
Unlike quorum disks, all IP quorum applications must be reconfigured and redeployed to
hosts when certain aspects of the system configuration change.
IP Quorum details can be found in IBM Knowledge Center for SAN Volume Controller:
https://ibm.biz/BdsmN2
Note: Stretched cluster and Enhanced Stretched Cluster features are supported only for
IBM SAN Volume Controller. They are not supported in IBM Storwize family of products.
The HyperSwap feature provides highly available volumes accessible through two sites at up
to 300 km apart. A fully independent copy of the data is maintained at each site. When data is
written by hosts at either site, both copies are synchronously updated before the write
operation is completed. The HyperSwap feature will automatically optimize itself to minimize
data transmitted between sites and to minimize host read and write latency.
For further technical details and implementation guidelines on deploying Stretched Cluster or
Enhanced Stretched Cluster, see IBM Spectrum Virtualize and SAN Volume Controller
Enhanced Stretched Cluster with VMware, SG24-8211.
Up to four hot spare nodes can be added to a single cluster, and they must match the hardware type and configuration of your active cluster nodes. That is, in a mixed-node cluster you should have one spare of each node type. Given that V8.1 is supported only on SVC 2145-DH8 and 2145-SV1 nodes, this mixture is not a problem, but it is something to be aware of. Most clients upgrade the whole cluster to a single node type anyway, following best practices. However, in addition to the node type, the hardware configurations must match. Specifically, the amount of memory and the number and placement of Fibre Channel and compression cards must be identical.
The hot spare node essentially becomes another node in the cluster, but it does nothing under normal conditions. Only when it is needed does it use the NPIV feature of the host virtual ports to take over the personality of the failed node. There is a delay of approximately one minute before the cluster swaps in a node. This delay is intentional, to avoid any thrashing when a node fails: the system must be sure that the node has definitely failed and is not just, for example, rebooting.
Because NPIV is enabled, the host should not notice anything during this time. The first thing that happens is that the failed node's virtual host ports fail over to the partner node. Then, when the spare swaps in, they fail over to that node. The cache is flushed while only one node is in the I/O Group, but when the spare swaps in, you get the full cache back.
Note: A warm start of an active node (code assert or restart) does not cause the hot spare to swap in, because the rebooted node becomes available within one minute.
The other use case for hot spare nodes is during a software upgrade. Normally, the only impact during an upgrade is slightly degraded performance: while the node that is being upgraded is down, its partner in the I/O Group writes through cache and handles the workload of both nodes. To work around this limitation, the cluster swaps a spare in place of the node that is being upgraded, so the cache does not need to go into write-through mode.
After the upgraded node returns, it is swapped back, so you end up rolling through the nodes as normal but without any failover and failback being seen at the multipathing layer. All of this process is handled by the NPIV ports, and so it should make upgrades seamless for administrators working in large enterprise SVC deployments.
Note: After the cluster commits new code, it will also automatically upgrade Hot Spares to
match the cluster code level.
This feature is available only to SVC. While Storwize systems can make use of NPIV and get
the general failover benefits, you cannot get spare canisters or split IO group in Storwize
V7000.
You can maintain a chat session with the IBM service representative so that you can monitor
this activity and either understand how to fix the problem yourself or allow the representative
to fix it for you.
To use the IBM Assist On-site tool, the master console must be able to access the Internet.
The following website provides further information about this tool:
http://www.ibm.com/support/assistonsite/
When you access the website, you sign in and enter a code that the IBM service
representative provides to you. This code is unique to each IBM Assist On-site session. A
plug-in is downloaded on to your master console to connect you and your IBM service
representative to the remote service session. The IBM Assist On-site tool contains several
layers of security to protect your applications and your computers. The plug-in is removed
after the next reboot.
You can also use security features to restrict access by the IBM service representative. Your
IBM service representative can provide you with more detailed instructions for using the tool.
The part embedded in the SVC V8.1 code is a software toolset called Remote Support Client. It establishes a network connection over a secured channel with a Remote Support Server in the IBM network. The Remote Support Server provides predictive analysis of SVC status and assists administrators with troubleshooting and fix activities. Remote Support Assistance is available at no extra charge, and no additional license is needed.
Each event that IBM SAN Volume Controller detects is assigned a notification type of Error,
Warning, or Information. You can configure the IBM SAN Volume Controller to send each
type of notification to specific recipients.
You can use the Management Information Base (MIB) file for SNMP to configure a network
management program to receive SNMP messages that are sent by the IBM Spectrum
Virtualize.
IBM SAN Volume Controller can send syslog messages that notify personnel about an event.
The event messages can be sent in either expanded or concise format. You can use a syslog
manager to view the syslog messages that IBM SAN Volume Controller sends.
IBM Spectrum Virtualize uses the User Datagram Protocol (UDP) to transmit the syslog
message. You can use the management GUI or the CLI to configure and modify your syslog
settings.
To send email, you must configure at least one SMTP server. You can specify as many as five
more SMTP servers for backup purposes. The SMTP server must accept the relaying of email
from the IBM SAN Volume Controller clustered system IP address. You can then use the
management GUI or the CLI to configure the email settings, including contact information and
email recipients. Set the reply address to a valid email address.
Send a test email to check that all connections and infrastructure are set up correctly. You can
disable the Call Home function at any time by using the management GUI or CLI.
Chapter 3. Planning
This chapter describes steps that are required to plan the installation of an IBM System
Storage SAN Volume Controller in your storage network.
Important: At the time of writing, the statements provided in this book are correct, but they
might change. Always verify any statements that are made in this book with the IBM SAN
Volume Controller supported hardware list, device driver, firmware, and recommended
software levels that are available at the following websites:
Support Information for SAN Volume Controller:
http://www.ibm.com/support/docview.wss?uid=ssg1S1003658
IBM System Storage Interoperation Center (SSIC):
https://www.ibm.com/systems/support/storage/ssic/interoperability.wss
To maximize benefit from the SAN Volume Controller, pre-installation planning must include
several important steps. These steps ensure that the SAN Volume Controller provides the
best possible performance, reliability, and ease of management for your application needs.
The correct configuration also helps minimize downtime by avoiding changes to the SAN
Volume Controller and the storage area network (SAN) environment to meet future growth
needs.
Note: Make sure that the planned configuration is reviewed by IBM or an IBM Business
Partner before implementation. Such review can both increase the quality of the final
solution and prevent configuration errors that could impact solution delivery.
This book is not intended to provide in-depth information about described topics. For an
enhanced analysis of advanced topics, see IBM System Storage SAN Volume Controller and
Storwize V7000 Best Practices and Performance Guidelines, SG24-7521.
Below is a list of items that you should consider when planning for the SAN Volume
Controller:
Collect and document the number of hosts (application servers) to attach to the SAN
Volume Controller. Identify the traffic profile activity (read or write, sequential, or random),
and the performance requirements (bandwidth and input/output (I/O) operations per
second (IOPS)) for each host.
Collect and document the following information:
– Information on the existing back-end storage that is present in the environment and is
intended to be virtualized by the SAN Volume Controller.
– Whether you need to configure image mode volumes. If you want to use image mode
volumes, decide whether and how you plan to migrate them into managed mode
volumes.
– Information on the planned new back-end storage to be provisioned on the SAN
Volume Controller.
– The required virtual storage capacity for fully provisioned and space-efficient (SE)
volumes.
– The required storage capacity for local mirror copy (volume mirroring).
– The required storage capacity for point-in-time copy (IBM FlashCopy).
– The required storage capacity for remote copy (Metro Mirror and Global Mirror).
– The required storage capacity for compressed volumes.
– The required storage capacity for encrypted volumes.
– Shared storage (volumes presented to more than one host) required in your
environment.
– Per host:
• Volume capacity.
• Logical unit number (LUN) quantity.
• Volume sizes.
Note: When planning the capacities, make explicit notes if the numbers state the net
storage capacity (that is, available to be used by applications running on any host), or
gross capacity, which includes overhead for spare drives (both due to RAID redundancy
and planned hot spare drives) and for file system metadata. For file system metadata,
include overhead incurred by all layers of storage virtualization. In particular, if you plan
storage for virtual machines whose drives are actualized as files on a parallel file
system, then include metadata overhead for the storage virtualization technology used
by your hypervisor software.
Decide whether you need to plan for more than one site. For multi-site deployment, review
the additional configuration requirements imposed.
Define the number of clusters and the number of pairs of nodes (1 - 4) for each cluster.
The number of necessary I/O Groups depends on the overall performance requirements
and the number of hosts you plan to attach.
Decide whether you are going to use N_Port ID Virtualization (NPIV). If you plan to use
NPIV, then review the additional configuration requirements imposed.
Design the SAN according to the requirement for high availability (HA) and best
performance. Consider the total number of ports and the bandwidth that is needed at each
link, especially Inter-Switch Links (ISLs). Consider ISL trunking for improved performance.
Separately collect requirements for Fibre Channel and IP-based storage network.
Note: Check and carefully count the required ports. Separately note the ports
dedicated for extended links. Especially in an enhanced stretched cluster (ESC) or
HyperSwap environment, you might need additional long wave gigabit interface
converters (GBICs).
Define a naming convention for the SAN Volume Controller clusters, nodes, hosts, and
storage objects.
Define the SAN Volume Controller service Internet Protocol (IP) addresses and the
system’s management IP addresses.
Define subnets for the SAN Volume Controller system and for the hosts for Internet Small
Computer System Interface (iSCSI) connectivity.
Define the IP addresses for IP replication (if required).
Define back-end storage that will be used by the system.
Define the managed disks (MDisks) in the back-end storage to be used by SAN Volume
Controller.
Define the storage pools, specify MDisks for each pool and document mapping of MDisks
to back-end storage. Parameters of the back-end storage determine the characteristics of
the volumes in the pool. Make sure that each pool contains MDisks of similar (ideally,
identical) performance characteristics.
Plan allocation of hosts and volumes to I/O Groups to optimize the I/O load distribution
between the hosts and the SAN Volume Controller. Allowing a host to access more than
one I/O group might better distribute the load between system nodes. However, doing so
will reduce the maximum number of hosts attached to the SAN Volume Controller.
Plan queue depths for the attached hosts. For more information, see this website:
https://ibm.biz/BdjKcK
Plan for the physical location of the equipment in the rack.
Verify that your planned environment is a supported configuration.
Verify that your planned environment does not exceed system configuration limits.
Planning activities required for SAN Volume Controller deployment are described in the
following sections.
Note: If you are installing a hot-spare node, the Fibre Channel cabling must be identical
for all nodes of the system. In other words, port 1 on every node must be connected to
the same fabric, port 2 on every node must be connected to the same fabric, and so on.
Quorum disk placement
The SAN Volume Controller uses three MDisks as quorum disks for the clustered system.
A preferred practice is to have each quorum disk in a separate storage subsystem, where
possible. The current locations of the quorum disks can be displayed by using the
lsquorum command, and relocated by using the chquorum command.
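The quorum assignment can be reviewed and adjusted from the CLI. The following sketch is for illustration only; the MDisk name mdisk8 and the quorum index 2 are placeholder values:
lsquorum
chquorum -mdisk mdisk8 2
The first command lists the three quorum disks with their index, status, and owning MDisk or drive. The second command moves quorum index 2 to the specified MDisk, for example to place it on a different storage subsystem.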
Failure domain sizes
Failure of an MDisk takes offline the whole storage pool that contains that MDisk. To reduce the impact of an MDisk failure, consider reducing the number of back-end storage
systems per storage pool, and increasing the number of storage pools and reducing their
size. Note that this configuration in turn limits the maximum performance of the pool (fewer
back-end systems to share the load), increases storage management effort, can lead to
less efficient storage capacity consumption, and might be subject to limitation by system
configuration maximums.
Consistency
Strive to achieve consistent availability levels of all system building blocks. For example, if
the solution relies on a single switch placed in the same rack as one of the SAN Volume
Controller nodes, investment in a dual-rack configuration for placement of the second
node is not justified. Any incident affecting the rack that holds the critical switch brings
down the whole system, no matter where the second SAN Volume Controller node is
placed.
SAN Volume Controller supports SAN routing technologies between SAN Volume Controller
and storage systems, as long as the routing stays entirely within Fibre Channel connectivity
and does not use other transport technologies such as IP. However, SAN routing technologies
(including FCIP links) are supported for connections between the SAN Volume Controller and
hosts. The use of long-distance FCIP connections might degrade the storage performance for
any servers that are attached through this technology.
Table 3-1 shows the fabric type that can be used for communicating between hosts, nodes,
and back-end storage systems. All fabric types can be used at the same time.
When you plan deployment of SAN Volume Controller, identify networking technologies that
you will use.
3.4 Physical planning
You must consider several key factors when you are planning the physical site of a SAN
Volume Controller installation. The physical site must have the following characteristics:
Meets the power, cooling, and location requirements of the SAN Volume Controller nodes.
Has two separate power sources.
Has sufficient rack space for installation of the controller nodes.
Has a sufficient maximum power rating of the rack. Plan your rack placement carefully so that you do not exceed the maximum power rating of the rack. For more information about the power requirements, see the following website:
https://ibm.biz/Bdjvhm
For more information about SAN Volume Controller nodes rack installation planning, including
environmental requirements and sample rack layouts, see:
https://ibm.biz/Bdjm5Y for 2145-DH8
https://ibm.biz/Bdjm5z for 2145-SV1
The functionality of UPS units is provided by internal batteries, which are delivered with each
node’s hardware. The batteries ensure that during external power loss or disruption, the node
is kept operational long enough to copy data from its physical memory to its internal disk drive
and shut down gracefully. This process enables the system to recover without data loss when
external power is restored.
For more information about the 2145-DH8 Model, see IBM SAN Volume Controller 2145-DH8
Introduction and Implementation, SG24-8229.
For more information about installing the 2145-SV1, see IBM Knowledge Center:
https://ibm.biz/Bdr7wp
3.4.2 Cabling
Create a cable connection table that follows your environment’s documentation procedure to
track all of the following connections that are required for the setup:
Power
Ethernet
iSCSI or Fibre Channel over Ethernet (FCoE) connections
Switch ports (FC, Ethernet, and FCoE)
When planning SAN cabling, make sure that your physical topology allows you to observe
zoning rules and recommendations.
If the data center provides more than one power source, make sure that you use that capacity
when planning power cabling for your system.
3.5 Planning IP connectivity
Starting with V6.1, system management is performed through an embedded graphical user
interface (GUI) running on the nodes. To access the management GUI, direct a web browser
to the system management IP address.
The SAN Volume Controller 2145-DH8 node has a feature called a Technician port. Ethernet
port 4 is allocated as the Technician service port, and is marked with a T. All initial
configuration for each node is performed by using the Technician port. The port runs a
Dynamic Host Configuration Protocol (DHCP) service so that any notebook or computer
connected to the port is automatically assigned an IP address.
After the cluster configuration has been completed, the Technician port automatically routes
the connected user directly to the service GUI.
Note: The default IP address for the Technician port on a 2145-DH8 Node is 192.168.0.1.
If the Technician port is connected to a switch, it is disabled and an error is logged.
Each SAN Volume Controller node requires one Ethernet cable to connect it to an Ethernet
switch or hub. The cable must be connected to port 1. A 10/100/1000 megabit (Mb) Ethernet
connection is supported on the port. Both Internet Protocol Version 4 (IPv4) and Internet
Protocol Version 6 (IPv6) are supported.
Note: For increased availability, an optional second Ethernet connection is supported for
each SAN Volume Controller node.
Ethernet port 1 on every node must be connected to the same set of subnets. The same rule
applies to Ethernet port 2 if it is used. However, the subnets available for Ethernet port 1 do
not have to be the same as configured for interfaces on Ethernet port 2.
Each SAN Volume Controller cluster has a Cluster Management IP address, in addition to a
Service IP address for each node in the cluster. See Example 3-1 for details.
Each node in a SAN Volume Controller clustered system needs to have at least one Ethernet
connection. Both IPv4 and IPv6 addresses are supported. SAN Volume Controller can
operate with either Internet Protocol or with both internet protocols concurrently.
For configuration and management, you must allocate an IP address to the system, which is
referred to as the management IP address. For additional fault tolerance, you can also
configure a second IP address for the second Ethernet port on the node. The addresses must
be fixed addresses. If both IPv4 and IPv6 are operating concurrently, an address is required
for each protocol.
Note: The management IP address cannot be the same as any of the service IPs used.
Figure 3-1 shows the IP addresses that can be configured on Ethernet ports.
Support for iSCSI enables one additional IPv4 address, IPv6 address, or both for each
Ethernet port on every node. These IP addresses are independent of the system’s
management and service IP addresses.
If you configure management IP on both Ethernet ports, choose one of the IP addresses to
connect to GUI or CLI. Note that the system is not able to automatically fail over the
management IP address to a different port. If one management IP address is unavailable, use
an IP address on the alternate network. Clients might be able to use the intelligence in
domain name servers (DNSs) to provide partial failover.
This section describes several IP addressing plans that you can use to configure SAN Volume
Controller V6.1 and later.
Figure 3-2 shows the use of the same IPv4 subnet for management and iSCSI addresses.
Figure 3-3 shows the use of two separate IPv4 subnets for management and iSCSI
addresses.
Figure 3-4 shows the use of redundant networks.
Figure 3-5 shows the use of a redundant network and a third subnet for management.
Figure 3-6 shows the use of a redundant network for iSCSI data and management.
The hardware compatible with V8.1 supports 8 Gbps and 16 Gbps FC fabrics, depending on the hardware platform and on the switch to which the SAN Volume Controller is connected. In an environment that has a fabric with multiple-speed switches, the preferred practice is to connect the SAN Volume Controller and the back-end storage systems to the switches operating at the highest speed.
You can use the lsfabric command to generate a report that displays the connectivity
between nodes and other controllers and hosts. This report is helpful for diagnosing SAN
problems.
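For example, assuming that a host object named hostA is already defined, the report can be requested from the CLI as sketched below; the filtering parameters shown are assumptions and should be checked against the CLI reference for your code level:
lsfabric -host hostA
lsfabric -delim ,
The first form limits the output to logins that involve the specified host, and the second prints the full connectivity report in comma-delimited form for easier parsing.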
SAN Volume Controller nodes are always deployed in pairs (I/O Groups). An odd number of
nodes in a cluster is a valid standard configuration only if one of the nodes is configured as a
hot spare. However, if there is no hot spare node and a node fails or is removed from the
configuration, the remaining node operates in a degraded mode, but the configuration is still
valid.
If possible, avoid communication between nodes that route across ISLs. Connect all nodes to
the same Fibre Channel or FCF switches.
No ISL hops are permitted among the nodes within the same I/O group, except in a stretched
system configuration with ISLs. For more information, see https://ibm.biz/Bdjacf.
However, no more than three ISL hops are permitted among nodes that are in the same
system but in different I/O groups. If your configuration requires more than three ISL hops for
nodes that are in the same system but in different I/O groups, contact your support center.
Avoid ISL on the path between nodes and back-end storage. If possible, connect all storage
systems to the same Fibre Channel or FCF switches as the nodes. One ISL hop between the
nodes and the storage systems is permitted. If your configuration requires more than one ISL,
contact your support center.
In larger configurations, it is common to have ISLs between host systems and the nodes.
To verify the supported connection speed for FC links to the SAN Volume Controller, use IBM
System Storage Interoperation Center (SSIC) site:
https://www.ibm.com/systems/support/storage/ssic/interoperability.wss
In an Enhanced Stretched Cluster or HyperSwap setup, the two nodes forming an I/O Group
can be colocated (within the same set of racks), or can be placed in separate racks, separate
rooms, or both. For more information, see IBM System Storage SAN Volume Controller and
Storwize V7000 Best Practices and Performance Guidelines, SG24-7521.
3.6.2 Zoning
In SAN Volume Controller deployments, the SAN fabric must have three distinct zone classes:
SAN Volume Controller cluster system zone: Allows communication between storage
system nodes (intra-cluster traffic).
Host zones: Allows communication between SAN Volume Controller and hosts.
Storage zone: Allows communication between SAN Volume Controller and back-end
storage.
Figure 3-7 shows the SAN Volume Controller zoning classes.
The subsequent sections contain fundamental rules of SAN Volume Controller zoning.
However, also review the latest zoning guidelines and requirements at the following site when
designing zoning for the planned solution:
https://ibm.biz/BdjGkN
Note: Configurations that use Metro Mirror, Global Mirror, N_Port ID Virtualization, or long-distance links have extra zoning requirements. Do not follow only the general zoning rules if you plan to use any of these features.
Create up to two SAN Volume Controller cluster system zones per fabric. In each of them,
place a single port per node designated for intracluster traffic. No more than four ports per
node should be allocated to intracluster traffic. Each node in the system must have at least
two ports with paths to all other nodes in the system. A system node cannot have more than
16 paths to another node in the same system.
Mixed port speeds are not possible for intracluster communication. All node ports within a
clustered system must be running at the same speed.
Figure 3-8 shows a SAN Volume Controller clustered system zoning example.
Figure 3-8 SAN Volume Controller clustered system zoning example
Note: You can use more than four fabric ports per node to improve peak load I/O
performance. However, if a node receives more than 16 logins from another node, then it
causes node error 860. To avoid that error you need to use zoning, port masking, or a
combination of the two.
For more information, see 3.6.7, “Port designation recommendations” on page 71, 3.6.8,
“Port masking” on page 72, and the IBM SAN Volume Controller documentation at:
https://ibm.biz/BdjmGS
A storage controller can present LUNs both to the SAN Volume Controller (as MDisks) and to other hosts in the SAN. In this case, it is better to allocate different ports on the back-end storage for communication with the SAN Volume Controller and for host traffic.
All nodes in a system must be able to connect to the same set of storage system ports on
each device. A system that contains any two nodes that cannot connect to the same set of
storage-system ports is considered degraded. In this situation, a system error is logged that
requires a repair action.
This rule can have important effects on a storage system. For example, an IBM DS4000® series controller can have exclusion rules that determine the host bus adapter (HBA) worldwide node names (WWNNs) to which a storage partition can be mapped.
Figure 3-9 shows an example of the SAN Volume Controller, host, and storage subsystem
connections.
Figure 3-9 Example of SAN Volume Controller, host, and storage subsystem connections
Figure 3-10 shows a storage subsystem zoning example.
Figure 3-10 Storage subsystem zoning example
There might be particular zoning rules governing attachment of specific back-end storage
systems. Review the guidelines at the following website to verify whether you need to
consider additional policies when planning zoning for your back end systems:
https://ibm.biz/Bdjm8H
The preferred zoning policy is to create a separate zone for each host HBA port, and place
exactly one port from each node in each I/O group that the host accesses in this zone. For
deployments with more than 64 hosts defined in the system, this host zoning scheme is
mandatory.
If you plan to use NPIV, review additional host zoning requirements at:
https://ibm.biz/Bdjacb
Figure 3-11 shows a host zoning example.
Figure 3-11 Host zoning example (one Power System port and one SVC port per SVC node in each host zone)
Consider the following rules for zoning hosts with the SAN Volume Controller:
HBA to SAN Volume Controller port zones
Place each host’s HBA in a separate zone with exactly one port from each node in each
I/O group that the host accesses.
It is not prohibited to zone a host's HBA to one port from every node in the cluster, but doing so reduces the maximum number of hosts that can be attached to the system.
Optional (n+2 redundancy): With four HBA ports, zone HBA ports to SAN Volume
Controller ports 1:2 for a total of eight paths.
Here, the term HBA port is used to describe the SCSI initiator and SAN Volume
Controller port to describe the SCSI target.
Important: The maximum number of host paths per LUN must not exceed eight.
Another way to control the number of paths between hosts and the SAN Volume Controller
is to use port mask. The port mask is an optional parameter of the mkhost and chhost
commands. The port mask configuration has no effect on iSCSI connections.
For each login between a host Fibre Channel port and node Fibre Channel port, the node
examines the port mask for the associated host object. It then determines whether access
is allowed (port mask bit for given port is set) or denied (port mask bit is cleared). If access
is denied, the node responds to SCSI commands as though the HBA WWPN is unknown.
The port mask is 64 bits. Valid mask values range from all 0s (no ports enabled) to all 1s (all ports enabled). For example, a mask of 0011 enables port 1 and port 2. The default value is all 1s. A short command sketch of setting a host port mask follows this list.
Balanced host load across HBA ports
If the host has more than one HBA port per fabric, zone each host port with a separate
group of SAN Volume Controller ports.
Balanced host load across SAN Volume Controller ports
To obtain the best overall performance of the subsystem and to prevent overloading, the
load of each SAN Volume Controller port should be equal. Assuming similar load
generated by each host, you can achieve this balance by zoning approximately the same
number of host ports to each SAN Volume Controller port.
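As mentioned in the port mask description earlier in this list, the mask can be supplied when the host object is created, or changed later with chhost. The following sketch is illustrative only; the host name, WWPN, and mask values are placeholders, and the parameter name should be verified against the CLI reference for your code level:
mkhost -name hostA -fcwwpn 210000E08B051234 -mask 0011
chhost -mask 1111 hostA
With the mask 0011, hostA can establish logins only through node ports 1 and 2. Changing the mask to 1111 re-enables all four ports of a four-port node.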
Figure 3-12 on page 69 shows an example of a balanced zoning configuration that was
created by completing the following steps:
1. Divide ports on the I/O Group into two disjoint sets, such that each set contains two ports
from each I/O Group node, each connected to a different fabric.
For consistency, use the same port number on each I/O Group node. The example on
Figure 3-12 on page 69 assigns ports 1 and 4 to one port set, and ports 2 and 3 to the
second set.
Because the I/O Group nodes have four FC ports each, two port sets are created.
2. Divide hosts attached to the I/O Group into two equally numerous groups.
In general, for I/O Group nodes with more than four ports, divide the hosts into as many
groups as you created sets in step 1 on page 68.
3. Map each host group to exactly one port set.
4. Zone all hosts from each group to the corresponding set of I/O Group node ports.
The host connections in the example in Figure 3-12 are defined in the following manner:
– Hosts in group one are always zoned to ports 1 and 4 on both nodes.
– Hosts in group two are always zoned to ports 2 and 3 on both nodes of the I/O Group.
Tip: Create an alias for the I/O Group port set. This step makes it easier to correctly zone
hosts to the correct set of I/O Group ports. Additionally, it also makes host group
membership visible in the FC switch configuration.
The use of this schema provides four paths to one I/O Group for each host, and helps to
maintain an equal distribution of host connections on SAN Volume Controller ports.
Tip: To maximize performance from the host point of view, distribute volumes that are
mapped to each host between both I/O Group nodes.
Figure 3-12 Overview of a four-path host zoning schema: 128 odd-numbered hosts (1, 3, 5, ... 255) are zoned on fabric A as (Host_P0; N1_P1; N2_P1) and on fabric B as (Host_P1; N1_P4; N2_P4); 128 even-numbered hosts (2, 4, 6, ... 256) are zoned on fabric A as (Host_P0; N1_P3; N2_P3) and on fabric B as (Host_P1; N1_P2; N2_P2)
When possible, use the minimum number of paths that are necessary to achieve a sufficient
level of redundancy. For the SAN Volume Controller environment, no more than four paths per
I/O Group are required to accomplish this layout.
All paths must be managed by the multipath driver on the host side. Make sure that the
multipath driver on each server is capable of handling the number of paths required to access
all volumes mapped to the host.
For hosts that use four HBAs/ports with eight connections to an I/O Group, use the zoning
schema that is shown in Figure 3-13. You can combine this schema with the previous
four-path zoning schema.
Figure 3-13 Overview of an eight-path host zoning schema for hosts with four HBA ports
When designing zoning for a geographically dispersed solution, consider the effect of the
cross-site links on the performance of the local system.
Important: Be careful when you perform the zoning so that ports dedicated for intra-cluster
communication are not used for Host/Storage traffic in the 8-port and 12-port
configurations.
The use of mixed port speeds for intercluster communication can lead to port congestion,
which can negatively affect the performance and resiliency of the SAN. Therefore, it is not
supported.
Important: Zone only two Fibre Channel ports on each node in the local system to two Fibre Channel ports on each node in the remote system. Doing so limits the impact that a severe and abrupt overload of the intercluster link has on system operations.
If you zone all node ports for intercluster communication and the intercluster link becomes
severely and abruptly overloaded, the local FC fabric can become congested so that no FC
ports on the local SAN Volume Controller nodes can perform local intracluster heartbeat
communication. This situation can, in turn, result in the nodes experiencing lease expiry
events.
In a lease expiry event, a node restarts to attempt to reestablish communication with the
other nodes in the clustered system. If the leases for all nodes expire simultaneously, a
loss of host access to volumes can occur during the restart events.
For more information about zoning best practices, see IBM System Storage SAN Volume
Controller and Storwize V7000 Best Practices and Performance Guidelines, SG24-7521.
Additionally, there is a benefit in isolating remote replication traffic to dedicated ports, and
ensuring that any problems that affect the cluster-to-cluster interconnect do not impact all
ports on the local cluster.
Figure 3-14 shows port designations suggested by IBM for 2145-DH8 and 2145-CG8 nodes.
Figure 3-14 Port designation recommendations for isolating traffic on 2145-DH8 and 2145-CG8 nodes
Figure 3-15 shows the suggested designations for 2145-SV1 nodes.
Figure 3-15 Port designation recommendations for isolating traffic on 2145-SV1 nodes
Note: With 12 or more ports per node, four ports should be dedicated to node-to-node traffic. Doing so is especially important when high write data rates are expected, because all writes are mirrored between I/O Group nodes over these ports.
The port designation patterns shown in the tables provide the required traffic isolation and
simplify migrations to configurations with greater number of ports. More complicated port
mapping configurations that spread the port traffic across the adapters are supported and can
be considered. However, these approaches do not appreciably increase availability of the
solution.
Alternative port mappings that spread traffic across HBAs might allow adapters to come back
online following a failure. However, they do not prevent a node from going offline temporarily
to restart and attempt to isolate the failed adapter and then rejoin the cluster. Also, the mean
time between failures (MTBF) of the adapter is not significantly shorter than that of the
non-redundant node components. The presented approach takes all of these considerations
into account with a view that increased complexity can lead to migration challenges in the
future, and a simpler approach is usually better.
There are two Fibre Channel port masks on a system. The local port mask controls connectivity to other nodes in the same system, and the partner port mask controls connectivity to nodes in remote, partnered systems. By default, all ports are enabled for both local and partner connectivity.
The port masks apply to all nodes on a system. A different port mask cannot be set on nodes
in the same system. You do not have to have the same port mask on partnered systems.
Mixing host, back-end, intracluster, and replication traffic on the same ports can cause congestion and buffer-to-buffer credit exhaustion, which can result in heavy degradation of performance in your storage environment.
Fibre Channel IO ports are logical ports, which can exist on Fibre Channel platform ports or
on FCoE platform ports.
The port mask is a 64-bit field that applies to all nodes in the cluster. In the local FC port mask, setting a bit to 1 allows the corresponding port to carry node-to-node (intracluster) traffic. In the remote FC port mask, setting a bit to 1 allows the corresponding port to be used for replication traffic. A 0 in a mask means that the corresponding type of traffic is not allowed on that port: a 0 in the local FC port mask means that no node-to-node traffic occurs on the port, and a 0 in the remote FC port mask means that no replication traffic occurs on it. Therefore, if a port has a 0 in both the local and remote FC port masks, only host and back-end storage traffic is allowed on it.
If you are using the GUI, click Settings → Network → Fibre Channel Ports. Then, you can
select the use of a port. Setting none means no node-to-node and no replication traffic is
allowed, and only host and storage traffic is allowed. Setting local means only node-to-node
traffic is allowed, and remote means that only replication traffic is allowed. Figure 3-16 shows
an example of setting a port mask on port 1 to Local.
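The same settings can be applied from the CLI with the chsystem command. The following sketch assumes nodes with four Fibre Channel ports; the mask is read right to left, so these values restrict local node-to-node traffic to ports 1 and 2, and replication traffic to ports 3 and 4:
chsystem -localfcportmask 0011
chsystem -partnerfcportmask 1100
lssystem
The local and partner port mask fields in the lssystem output can be used to verify the change. Verify the exact parameter and field names against the CLI reference for your code level.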
Each SAN Volume Controller node is equipped with up to three onboard Ethernet network
interface cards (NICs), which can operate at a link speed of 10 Mbps, 100 Mbps, or
1000 Mbps. All NICs can be used to carry iSCSI traffic. For optimal performance, use 1 Gbps
links between SAN Volume Controller and iSCSI-attached hosts when the SAN Volume
Controller node’s onboard NICs are used.
Starting with the SAN Volume Controller 2145-DH8, an optional 10 Gbps 4-port Ethernet
adapter (Feature Code AH12) is available. This feature provides one I/O adapter with four
10 GbE ports and SFP+ transceivers. It can be used to add 10 Gb iSCSI/FCoE connectivity
to the SAN Volume Controller Storage Engine.
Figure 3-17 shows an overview of the iSCSI implementation in the SAN Volume Controller.
Both onboard Ethernet ports of a SAN Volume Controller node can be configured for iSCSI.
For each instance of an iSCSI target node (that is, each SAN Volume Controller node), you
can define two IPv4 and two IPv6 addresses or iSCSI network portals:
If the optional 10 Gbps Ethernet feature is installed, you can also use its ports for iSCSI traffic.
All node types that can run SAN Volume Controller V6.1 or later can use the iSCSI feature.
Generally, enable jumbo frames in your iSCSI storage network.
iSCSI IP addresses can be configured for one or more nodes.
iSCSI Simple Name Server (iSNS) addresses can be configured in the SAN Volume
Controller.
Decide whether you implement authentication for the host to SAN Volume Controller iSCSI
communication. The SAN Volume Controller supports the Challenge Handshake
Authentication Protocol (CHAP) authentication methods for iSCSI.
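If you decide to use CHAP, the secret can be defined per host object from the CLI. The following sketch is illustrative only; the host name and secret are placeholders, and the commands and parameters shown should be verified against the CLI reference for your code level:
chhost -chapsecret mysecret123 hostA
lsiscsiauth
The chhost command sets the CHAP secret that the host must present during iSCSI login, and lsiscsiauth lists the authentication settings that are currently configured.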
An introduction to the workings of iSCSI protocol can be found in iSCSI Implementation and
Best Practices on IBM Storwize Storage Systems, SG24-8327.
If you plan to use the node's 1 Gbps Ethernet ports for iSCSI host attachment, dedicate Ethernet port one to SAN Volume Controller management and port two to iSCSI use. This way,
port two can be connected to a separate network segment or virtual local area network
(VLAN) dedicated to iSCSI traffic.
Note: Ethernet link aggregation (port trunking) or channel bonding for the SAN Volume
Controller nodes’ Ethernet ports is not supported for the 1 Gbps ports.
You can use the following types of iSCSI initiators in host systems:
Software initiator: Available for most operating systems (OS), including AIX, Linux, and
Windows.
Hardware initiator: Implemented as a network adapter with an integrated iSCSI processing
unit, which is also known as an iSCSI HBA.
Make sure that iSCSI initiators, targets, or both that you plan to use are supported. Use the
following sites for reference:
IBM SAN Volume Controller V8.1 Support Matrix:
http://www.ibm.com/support/docview.wss?uid=ssg1S1003658
IBM Knowledge Center for IBM SAN Volume Controller:
https://ibm.biz/Bdjvhm
IBM System Storage Interoperation Center (SSIC)
https://www.ibm.com/systems/support/storage/ssic/interoperability.wss
An alias string can also be associated with an iSCSI node. The alias enables an organization
to associate a string with the iSCSI name. However, the alias string is not a substitute for the
iSCSI name.
Note: The cluster name and node name form part of the IQN. Changing any of them might
require reconfiguration of all iSCSI nodes that communicate with the SAN Volume
Controller.
For more information about back-end storage supported for iSCSI connectivity, see these
websites:
IBM Support Information for SAN Volume Controller
http://www.ibm.com/support/docview.wss?uid=ssg1S1003658
IBM System Storage Interoperation Center (SSIC)
https://www.ibm.com/systems/support/storage/ssic/interoperability.wss
For more information about supported storage subsystems, see these websites:
IBM Support Information for SAN Volume Controller
http://www.ibm.com/support/docview.wss?uid=ssg1S1003658
IBM System Storage Interoperation Center (SSIC)
https://www.ibm.com/systems/support/storage/ssic/interoperability.wss
Apply the following general guidelines for back-end storage subsystem configuration
planning:
In the SAN, storage controllers that are used by the SAN Volume Controller clustered
system must be connected through SAN switches. Direct connection between the SAN
Volume Controller and the storage controller is not supported.
Enhanced Stretched Cluster configurations have additional requirements and
configuration guidelines. For more information about performance and preferred practices
for the SAN Volume Controller, see IBM System Storage SAN Volume Controller and
Storwize V7000 Best Practices and Performance Guidelines, SG24-7521.
MDisks within storage pools: V6.1 and later provide for better load distribution across
paths within storage pools.
In previous code levels, the path to MDisk assignment was made in a round-robin fashion
across all MDisks that are configured to the clustered system. With that method, no
attention is paid to how MDisks within storage pools are distributed across paths.
Therefore, it was possible and even likely that certain paths were more heavily loaded than
others.
Starting with V6.1, the code contains logic that takes into account which MDisks are
provided by which back-end storage systems. Therefore, the code more effectively
distributes active paths based on the storage controller ports that are available.
The Detect MDisks function (the detectmdisk CLI command) must be run after the creation or modification (addition or removal of MDisks) of storage pools so that paths are redistributed, as shown in the command sketch that follows this list.
If your back-end storage system does not support the SAN Volume Controller round-robin
algorithm, ensure that the number of MDisks per storage pool is a multiple of the number of
storage ports that are available. This approach ensures sufficient bandwidth for the storage
controller, and an even balance across storage controller ports.
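A minimal sketch of the rediscovery step after the MDisk membership of a pool changes; the pool name Pool0 is a placeholder:
detectmdisk
lsmdisk -filtervalue mdisk_grp_name=Pool0
The detectmdisk command rescans the Fibre Channel network and rebalances MDisk access across the available controller ports, and the lsmdisk query then confirms which MDisks are members of the pool.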
In general, configure disk subsystems as though SAN Volume Controller was not used.
However, there might be specific requirements or limitations as to the features usable in the
given back-end storage system when it is attached to SAN Volume Controller. Review the
appropriate section of documentation to verify that your back-end storage is supported and to
check for any special requirements:
https://ibm.biz/Bdjm8H
3.9 Storage pool configuration
The storage pool is at the center of the many-to-many relationship between the MDisks and
the volumes. It acts as a container of physical disk capacity from which chunks of MDisk
space, known as extents, are allocated to form volumes presented to hosts.
MDisks in the SAN Volume Controller are LUNs that are assigned from the back-end storage
subsystems to the SAN Volume Controller. There are two classes of MDisks: Managed and
unmanaged. An unmanaged MDisk is a LUN that is presented to SVC by back-end storage,
but is not assigned to any storage pool. A managed MDisk is an MDisk that is assigned to a
storage pool. An MDisk can be assigned only to a single storage pool.
The SAN Volume Controller clustered system must have exclusive access to every LUN (MDisk) that it uses. A specific LUN cannot be presented to more than one SAN Volume Controller cluster. Also, presenting the same LUN to both a SAN Volume Controller and a host is not allowed.
One of the basic storage pool parameters is the extent size. All MDisks in the storage pool
have the same extent size, and all volumes that are allocated from the storage pool inherit its
extent size.
The SAN Volume Controller supports extent sizes from 16 mebibytes (MiB) to 8192 MiB. The
extent size is a property of the storage pool and is set when the storage pool is created.
The extent size of a storage pool cannot be changed. If you need to change extent size, the
storage pool must be deleted and a new storage pool configured.
Table 3-2 lists all of the available extent sizes in a SAN Volume Controller and the maximum
managed storage capacity for each extent size.
Table 3-2 Extent size and total storage capacities per system
Extent size (MiB)    Total storage capacity manageable per system
16                   64 TiB
32                   128 TiB
64                   256 TiB
128                  512 TiB
256                  1 PiB
512                  2 PiB
1024                 4 PiB
2048                 8 PiB
4096                 16 PiB
8192                 32 PiB
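Because the extent size cannot be changed after pool creation, choose it when the pool is created. The following sketch, with placeholder pool and MDisk names, creates a pool with a 256 MiB extent size and then adds three MDisks to it:
mkmdiskgrp -name Pool0 -ext 256
addmdisk -mdisk mdisk0:mdisk1:mdisk2 Pool0
The -ext value is specified in MiB and is inherited by every volume later allocated from the pool.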
When planning storage pool layout, consider the following aspects:
Pool extent size:
– Generally, use 128 MiB or 256 MiB. The Storage Performance Council (SPC)
benchmarks use a 256 MiB extent.
– Pick the extent size and then use that size for all storage pools.
– You cannot migrate volumes between storage pools with different extent sizes.
However, you can use volume mirroring to create copies between storage pools with
different extent sizes.
Storage pool reliability, availability, and serviceability (RAS) considerations:
– The number and size of storage pools affects system availability. Using a larger
number of smaller pools reduces the failure domain in case one of the pools goes
offline. However, increased number of storage pools introduces management
overhead, impacts storage space use efficiency, and is subject to the configuration
maximum limit.
– An alternative approach is to create a few large storage pools. All MDisks that constitute
each of the pools should have the same performance characteristics.
– The storage pool goes offline if an MDisk is unavailable, even if the MDisk has no data
on it. Do not put MDisks into a storage pool until they are needed.
– Put image mode volumes in a dedicated storage pool or pools.
Storage pool performance considerations:
– It might make sense to create multiple storage pools if you are attempting to isolate
workloads to separate disk drives.
– Create storage pools out of MDisks with similar performance. This technique is the only
way to ensure consistent performance characteristics of volumes created from the
pool.
3.9.1 The storage pool and SAN Volume Controller cache relationship
The SAN Volume Controller uses cache partitioning to limit the potential negative effects that
a poorly performing storage controller can have on the clustered system. The cache partition
allocation size is based on the number of configured storage pools. This design protects the system against an individual overloaded back-end storage system filling the system write cache and degrading the performance of the other storage pools. For more information, see Chapter 2,
“System overview” on page 13.
Table 3-3 shows the limit of the write-cache data that can be used by a single storage pool.
Table 3-3 Upper limit of write-cache data per storage pool
Number of storage pools    Upper limit of write-cache data
1                          100%
2                          66%
3                          40%
4                          30%
5 or more                  25%
No single partition can occupy more than its upper limit of write cache capacity. When the
maximum cache size is allocated to the pool, the SAN Volume Controller starts to limit
incoming write I/Os for volumes that are created from the storage pool. That is, the host writes
are limited to the destage rate, on a one-out-one-in basis.
Only writes that target the affected storage pool are limited. The read I/O requests for the
throttled pool continue to be serviced normally. However, because the SAN Volume Controller
is destaging data at a maximum rate that the back-end storage can sustain, read response
times are expected to be affected.
All I/O that is destined for other (non-throttled) storage pools continues as normal.
Every volume is assigned to an I/O Group that defines which pair of SAN Volume Controller
nodes will service I/O requests to the volume.
Important: No fixed relationship exists between I/O Groups and storage pools.
Strive to distribute volumes evenly across available I/O Groups and nodes within the clustered
system. Although volume characteristics depend on the storage pool from which it is created,
any volume can be assigned to any node.
When you create a volume, it is associated with one node of an I/O Group, the preferred
access node. By default, when you create a volume it is associated with the I/O Group node by
using a round-robin algorithm. However, you can manually specify the preferred access node
if needed.
No matter how many paths are defined between the host and the volume, all I/O traffic is
serviced by only one node (the preferred access node).
If you plan to use volume mirroring, for maximum availability put each copy in a different
storage pool backed by different back-end storage subsystems. However, depending on your
needs it might be sufficient to use a different set of physical drives, a different storage
controller, or a different back-end storage for each volume copy. Strive to place all volume
copies in storage pools with similar performance characteristics. Otherwise, the volume
performance as perceived by the host might be limited by the performance of the slowest
storage pool.
Image mode volumes are an extremely useful tool for storage migration and when introducing the IBM SAN Volume Controller to an existing storage environment.
3.10.2 Planning for thin-provisioned volumes
A thin-provisioned volume has a virtual capacity and a real capacity. Virtual capacity is the
volume storage capacity that a host sees as available. Real capacity is the actual storage
capacity that is allocated to a volume copy from a storage pool. Real capacity limits the
amount of data that can be written to a thin-provisioned volume.
When planning use of thin-provisioned volumes, consider expected usage patterns for the
volume. In particular, the actual size of the data and the rate of data change.
Thin-provisioned volumes require more I/Os because of directory accesses. For fully random
access, and a workload with 70% reads and 30% writes, a thin-provisioned volume requires
approximately one directory I/O for every user I/O. Additionally, thin-provisioned volumes
require more processor processing, so the performance per I/O Group can also be reduced.
However, the directory is two-way write-back-cached (as with the SAN Volume Controller
fastwrite cache), so certain applications perform better.
Additionally, the ability to thin-provision volumes can be a worthwhile tool allowing hosts to
see storage space significantly larger than what is actually allocated within the storage pool.
Thin provisioning can also simplify storage allocation management. You can define virtual
capacity of a thinly provisioned volume to an application based on the future requirements,
but allocate real storage based on today’s use.
The main risk that is associated with using thin-provisioned volumes is running out of real
capacity in the storage volumes, pool, or both and the resultant unplanned outage. Therefore,
strict monitoring of the used capacity on all non-autoexpand volumes, and monitoring of the
free space in the storage pool is required.
When you configure a thin-provisioned volume, you can define a warning level attribute to
generate a warning event when the used real capacity exceeds a specified amount or
percentage of the total virtual capacity. You can also use the warning event to trigger other
actions, such as taking low-priority applications offline or migrating data into other storage
pools.
If a thin-provisioned volume does not have enough real capacity for a write operation, the
volume is taken offline and an error is logged (error code 1865, event ID 060001). Access to
the thin-provisioned volume is restored by increasing the real capacity of the volume, which
might require increasing the size of the storage pool from which it is allocated. Until this time,
the data is held in the SAN Volume Controller cache. Although in principle this situation is not
a data integrity or data loss issue, you must not rely on the SAN Volume Controller cache as a
backup storage mechanism.
Important: Set and monitor a warning level on the used capacity so that you have
adequate time to respond and provision more physical capacity.
Consider using the autoexpand feature of the thin-provisioned volumes to reduce human
intervention required to maintain access to thin-provisioned volumes.
When you create a thin-provisioned volume, you can choose the grain size for allocating
space in 32 kibibytes (KiB), 64 KiB, 128 KiB, or 256 KiB chunks. The grain size that you select
affects the maximum virtual capacity for the thin-provisioned volume. The default grain size is
256 KiB, which is the preferred option. If you select 32 KiB for the grain size, the volume size
cannot exceed 260,000 gibibytes (GiB). The grain size cannot be changed after the
thin-provisioned volume is created.
Generally, smaller grain sizes save space, but require more metadata access, which can
adversely affect performance. If you are not going to use the thin-provisioned volume as a
FlashCopy source or target volume, use 256 KiB to maximize performance. If you are going to
use the thin-provisioned volume as a FlashCopy source or target volume, specify the same
grain size for the volume and for the FlashCopy function. In this situation ideally grain size
should be equal to the typical I/O size from the host.
A thin-provisioned volume feature that is called zero detect provides clients with the ability to
reclaim unused allocated disk space (zeros) when they are converting a fully allocated
volume to a thin-provisioned volume by using volume mirroring.
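The considerations in this section translate into a handful of parameters at volume creation time. The following sketch is illustrative only; the pool name, I/O group, sizes, and thresholds are placeholders, and the parameters should be verified against the CLI reference for your code level:
mkvdisk -name thinvol01 -mdiskgrp Pool0 -iogrp 0 -size 500 -unit gb -rsize 2% -autoexpand -warning 80% -grainsize 256
This creates a volume with 500 GB of virtual capacity, 2% of real capacity that automatically expands as data is written, a warning event raised when 80% of the capacity is used, and a 256 KiB grain size.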
The SAN Volume Controller imposes no particular limit on the actual distance between the
SAN Volume Controller nodes and host servers. However, for host attachment, the SAN
Volume Controller supports up to three ISL hops in the fabric. This means that the server and the SAN Volume Controller can be separated by up to five FC links, four of which can be 10 km (6.2 miles) long if longwave Small Form-factor Pluggables (SFPs) are used.
Figure 3-18 shows an example of a supported configuration with SAN Volume Controller
nodes using shortwave SFPs.
In Figure 3-18, the optical distance between SAN Volume Controller Node 1 and Host 2 is
slightly over 40 km (24.85 miles).
To avoid latencies that lead to degraded performance, avoid ISL hops whenever possible. In
an optimal setup, the servers connect to the same SAN switch as the SAN Volume Controller
nodes.
Note: Before attaching host systems to SAN Volume Controller, see the Configuration
Limits and Restrictions for the IBM System Storage SAN Volume Controller described in:
http://www.ibm.com/support/docview.wss?uid=ssg1S1009560
For large storage networks, you should plan to set the correct SCSI command queue depth on your hosts. For this purpose, a large storage network is defined as one that contains at least 1000 volume mappings. For example, a deployment with 50 hosts with 20
volumes mapped to each of them would be considered a large storage network. For details of
the queue depth calculations, see this website:
https://ibm.biz/BdjKcK
3.11.2 Offloaded data transfer
If your Microsoft Windows hosts are configured to use Microsoft Offloaded Data Transfer
(ODX) to offload the copy workload to the storage controller, then consider the benefits of this
technology against additional load on storage controllers. Both benefits and impact of
enabling ODX are especially prominent in Microsoft Hyper-V environments with ODX
enabled.
LUN masking is usually implemented in the device driver software on each host. The host has
visibility of more LUNs than it is intended to use. The device driver software masks the LUNs
that are not to be used by this host. After the masking is complete, only some disks are visible
to the operating system. The system can support this type of configuration by mapping all
volumes to every host object and by using operating system-specific LUN masking
technology. However, the default, and preferred, system behavior is to map only those
volumes that the host is required to access.
The act of mapping a volume to a host makes the volume accessible to the WWPNs or iSCSI
names such as iSCSI qualified names (IQNs) or extended-unique identifiers (EUIs) that are
configured in the host object.
For best performance, split each host group into two sets. For each set, configure the
preferred access node for volumes presented to the host set to one of the I/O Group nodes.
This approach helps to evenly distribute load between the I/O Group nodes.
Note that a volume can be mapped only to a host that is associated with the I/O Group to
which the volume belongs.
3.14 Advanced Copy Services
The SAN Volume Controller offers the following Advanced Copy Services:
FlashCopy
Metro Mirror
Global Mirror
Layers: A property called layer for the clustered system is used when a copy services
partnership exists between a SAN Volume Controller and an IBM Storwize V7000. There
are two layers: Replication and storage. All SAN Volume Controller clustered systems are configured as the replication layer, and this setting cannot be changed. By default, an IBM Storwize V7000 is configured as the storage layer. Its layer must be changed to replication by using the chsystem CLI command before you create any copy services partnership with the SAN Volume Controller.
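The following CLI sketch shows the layer change. It is run on the Storwize V7000 (not on the SVC) and assumes that the Storwize system has no existing partnerships and is not being virtualized by another system:

chsystem -layer replication   # change the Storwize V7000 from the storage layer to the replication layer
lssystem                      # verify that the layer field now shows replication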
For each volume define which FlashCopy type best fits your requirements:
– No copy
– Full copy
– Thin-Provisioned
– Incremental
Define how many copies you need and the lifetime of each copy.
Estimate the expected data change rate for FlashCopy types other than full copy.
Consider memory allocation for copy services. If you plan to define many FlashCopy mappings, you might need to modify the default memory setting; a CLI sketch follows Table 3-4. See 11.2.18, “Memory allocation for FlashCopy” on page 526.
Define the grain size that you want to use. When data is copied between volumes, it is
copied in units of address space known as grains. The grain size is 64 KiB or 256 KiB. The
FlashCopy bitmap contains one bit for each grain. The bit records whether the associated
grain has been split by copying the grain from the source to the target. Larger grain sizes
can cause a longer FlashCopy time and a higher space usage in the FlashCopy target
volume. The data structure and the source data location can modify those effects.
If the grain is larger than most host writes, this can lead to write amplification on the target
system. This increase is because for every write IO to an unsplit grain, the whole grain
must be read from the FlashCopy source and copied to the target. Such a situation could
result in performance degradation.
If you use a thin-provisioned volume in a FlashCopy map, for best performance use the same grain size for the volume as for the map. Additionally, if you use a thin-provisioned volume directly with a host system, use a grain size that more closely matches the host I/O size.
Define which FlashCopy rate best fits your requirement in terms of the storage
performance and the amount of time required to complete the FlashCopy. Table 3-4 shows
the relationship of the background copy rate value to the number of grain split attempts per
second.
For performance-sensitive configurations, test the performance observed for different
settings of grain size and FlashCopy rate in your actual environment before committing a
solution to production use. See Table 3-4 for some baseline data.
Table 3-4   Relationship of the background copy rate value to the grain splits per second

Background copy    Data copied     Grain splits per second    Grain splits per second
rate value         per second      (256 KiB grain)            (64 KiB grain)
11 - 20            256 KiB         1                          4
21 - 30            512 KiB         2                          8
31 - 40            1 MiB           4                          16
41 - 50            2 MiB           8                          32
51 - 60            4 MiB           16                         64
61 - 70            8 MiB           32                         128
71 - 80            16 MiB          64                         256
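The following is a minimal CLI sketch of the preceding guidelines. It increases the FlashCopy bitmap memory of an I/O group and creates a mapping with an explicit grain size and background copy rate. The volume names SRC_VOL and TGT_VOL, the 128 MB bitmap size, the chosen grain size and copy rate, and the mapping name fcmap0 are assumptions for this example only:

chiogrp -feature flash -size 128 io_grp0                             # increase the FlashCopy bitmap memory of io_grp0 to 128 MB
mkfcmap -source SRC_VOL -target TGT_VOL -copyrate 50 -grainsize 64   # create a mapping with a 64 KiB grain
startfcmap -prep fcmap0                                              # prepare (flush the cache) and start the mapping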
3.14.2 Combining FlashCopy and Metro Mirror or Global Mirror
Use of FlashCopy in combination with Metro Mirror or Global Mirror is allowed if the following
conditions are fulfilled:
A FlashCopy mapping must be in the idle_copied state when its target volume is the
secondary volume of a Metro Mirror or Global Mirror relationship.
A FlashCopy mapping cannot be manipulated to change the contents of the target volume
of that mapping when the target volume is the primary volume of a Metro Mirror or Global
Mirror relationship that is actively mirroring.
The I/O group for the FlashCopy mappings must be the same as the I/O group for the
FlashCopy target volume.
Global Mirror is a copy service that is similar to Metro Mirror but copies data asynchronously.
You do not have to wait for the write to the secondary system to complete. For long distances,
performance is improved compared to Metro Mirror. However, if a failure occurs, you might
lose data.
Global Mirror uses one of two methods to replicate data. Multicycling Global Mirror is
designed to replicate data while adjusting for bandwidth constraints. It is appropriate for
environments where it is acceptable to lose a few minutes of data if a failure occurs. For
environments with higher bandwidth, non-cycling Global Mirror can be used so that less than
a second of data is lost if a failure occurs. Global Mirror also works well when sites are more
than 300 kilometers away.
When SAN Volume Controller copy services are used, all components in the SAN must
sustain the workload that is generated by application hosts and the data replication workload.
Otherwise, the system can automatically stop copy services relationships to protect your
application hosts from increased response times.
Starting with V7.6, you can use the chsystem command to set the maximum replication delay for the system. This value ensures that a single slow write operation does not affect the entire primary site.
You can configure this delay for all relationships or consistency groups that exist on the
system by using the maxreplicationdelay parameter on the chsystem command. This value
indicates the amount of time (in seconds) that a host write operation can be outstanding
before replication is stopped for a relationship on the system. If the system detects a delay in
replication on a particular relationship or consistency group, only that relationship or
consistency group is stopped.
In systems with many relationships, a single slow relationship can cause delays for the
remaining relationships on the system. This setting isolates the potential relationship with
delays so that you can investigate the cause of these issues. When the maximum replication
delay is reached, the system generates an error message that identifies the relationship that
exceeded the maximum replication delay.
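For example, the following sketch sets the maximum replication delay to 30 seconds; the value is an assumption for this example and must be chosen to match your application tolerance:

chsystem -maxreplicationdelay 30   # stop a relationship if a mirrored write is outstanding for more than 30 seconds
lssystem                           # verify the current maximum replication delay setting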
To avoid such incidents, consider deployment of a SAN performance monitoring tool to
continuously monitor the SAN components for error conditions and performance problems.
Use of such a tool helps you detect potential issues before they affect your environment.
When planning for use of data replication services, plan for the following aspects of the
solution:
Volumes and consistency groups for copy services
Copy services topology
Choice between Metro Mirror and Global Mirror
Connection type between clusters (FC, FCoE, IP)
Cluster configuration for copy services, including zoning
IBM explicitly tests products for interoperability with the SAN Volume Controller. For more
information about the current list of supported devices, see the IBM System Storage
Interoperation Center (SSIC) website:
http://www.ibm.com/systems/support/storage/ssic/interoperability.wss
If an application requires write order to be preserved for the set of volumes that it uses, create
a consistency group for these volumes.
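A minimal CLI sketch of this grouping follows. The group name, volume names, and the remote system name ITSO_SVC_DR are assumptions for this example; add the -global parameter to mkrcrelationship to create Global Mirror rather than Metro Mirror relationships:

mkrcconsistgrp -name APP1_CG -cluster ITSO_SVC_DR   # create an empty consistency group with the remote system
mkrcrelationship -master APP1_DATA -aux APP1_DATA_DR -cluster ITSO_SVC_DR -consistgrp APP1_CG
mkrcrelationship -master APP1_LOG -aux APP1_LOG_DR -cluster ITSO_SVC_DR -consistgrp APP1_CG
startrcconsistgrp APP1_CG                           # start all relationships in the group together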
Metro Mirror allows you to prevent any data loss during a system failure, but it has more stringent requirements, especially regarding intercluster link bandwidth and latency, as well as remote site storage performance. Additionally, it might incur a performance penalty because writes are not confirmed to the host until confirmation of data reception is received from the remote site. Because of finite data transfer speeds, this remote write penalty grows with the distance between the sites. A point-to-point dark fiber-based link typically incurs a round-trip latency of 1 ms per 100 km (62.13 miles). Other technologies provide longer round-trip latencies. Inter-site link latency therefore defines the maximum possible distance for any given performance level.
Global Mirror allows you to relax constraints on system requirements at the cost of using
asynchronous replication, which allows the remote site to lag behind the local site. Choice of
the replication type has a major impact on all other aspects of the copy services planning.
The use of Global Mirror and Metro Mirror between the same two clustered systems is
supported.
If you plan to use copy services to realize some application function (for example, disaster
recovery orchestration software), review the requirements of the application you plan to use.
Verify that the complete solution is going to fulfill supportability criteria of both IBM and the
application vendor.
Intercluster link
The local and remote clusters can be connected by an FC, FCoE, or IP network. The IP
network can be used as a carrier for an FCIP solution or as a native data carrier.
Each of the technologies has its own requirements concerning supported distance, link
speeds, bandwidth, and vulnerability to frame or packet loss. For the most current information
regarding requirements and limitations of each of the supported technologies, see this
website:
https://ibm.biz/BdjKbu
The two major parameters of a link are its bandwidth and latency. Latency might limit
maximum bandwidth available over IP links depending on the details of the technology used.
When planning the intercluster link, take into account the peak performance that is required. This consideration is especially important for Metro Mirror configurations.
When Metro Mirror or Global Mirror is used, a certain amount of bandwidth is required for the IBM SAN Volume Controller intercluster heartbeat traffic. The amount of traffic depends on how many nodes are in each of the two clustered systems.
Table 3-5 shows the amount of heartbeat traffic, in megabits per second, that is generated by
various sizes of clustered systems.
Table 3-5   Intersystem heartbeat traffic in Mbps

Local system    Remote system
                2 nodes    4 nodes    6 nodes    8 nodes
2 nodes         5          6          6          6
4 nodes         6          10         11         12
6 nodes         6          11         16         17
8 nodes         6          12         17         21
These numbers estimate the amount of traffic between the two clustered systems when no
I/O is taking place to mirrored volumes. Half of the data is sent by each of the systems. The
traffic is divided evenly over all available intercluster links. Therefore, if you have two
redundant links, half of this traffic is sent over each link.
The bandwidth between sites must be sized to meet the peak workload requirements. You
can estimate the peak workload requirement by measuring the maximum write workload
averaged over a period of 1 minute or less, and adding the heartbeat bandwidth. Statistics
must be gathered over a typical application I/O workload cycle, which might be days, weeks,
or months, depending on the environment on which the SAN Volume Controller is used.
When planning the inter-site link, consider also the initial sync and any future resync
workloads. It might be worthwhile to secure additional link bandwidth for the initial data
synchronization.
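The link bandwidth and the share of it that background (re)synchronization can consume are specified when the partnership is created. The following sketch assumes an FC or FCoE partnership to a remote system named ITSO_SVC_DR, a 1024 Mbps link, and a 50 percent background copy allowance; adjust the values to match your own link sizing, and remember that the partnership must also be created and started on the remote system:

mkfcpartnership -linkbandwidthmbits 1024 -backgroundcopyrate 50 ITSO_SVC_DR
chpartnership -start ITSO_SVC_DR   # start the partnership after it is defined on both systems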
If the link between the sites is configured with redundancy so that it can tolerate single
failures, you must size the link so that the bandwidth and latency requirements are met even
during single failure conditions.
When planning the inter-site link, note carefully whether it is dedicated to the inter-cluster traffic or is going to be used to carry any other data. Sharing the link with other traffic (for example, cross-site IP traffic) might reduce the cost of creating the inter-site connection and improve link utilization. However, doing so might also affect the link's ability to provide the required bandwidth for data replication.
Verify carefully that the devices that you plan to use to implement the intercluster link are
supported.
Cluster configuration
If you configure replication services, you might decide to dedicate ports for intercluster
communication, for the intracluster traffic, or both. In that case, make sure that your cabling
and zoning reflects that decision. Additionally, these dedicated ports are inaccessible for host
or back-end storage traffic, so plan your volume mappings as well as hosts and back-end
storage connections accordingly.
Global Mirror volumes should have their preferred access nodes evenly distributed between
the nodes of the clustered systems. Figure 3-20 shows an example of a correct relationship
between volumes in a Metro Mirror or Global Mirror solution.
The back-end storage systems at the replication target site must be capable of handling the
peak application workload to the replicated volumes, plus the client-defined level of
background copy, plus any other I/O being performed at the remote site. The performance of
applications at the local clustered system can be limited by the performance of the back-end
storage controllers at the remote site. This consideration is especially important for Metro
Mirror replication.
To ensure that the back-end storage is able to support the data replication workload, you can
dedicate back-end storage systems to only Global Mirror volumes. You can also configure the
back-end storage to ensure sufficient quality of service (QoS) for the disks that are used by
Global Mirror. Alternatively, you can ensure that physical disks are not shared between data
replication volumes and other I/O.
For more detailed information about SAN boot, see Appendix B, “CLI setup” on page 769.
Because multiple data migration methods are available, choose the method that best fits your
environment, operating system platform, type of data, and the application’s service level
agreement (SLA).
You might want to use the SAN Volume Controller as a data mover to migrate data from a
non-virtualized storage subsystem to another non-virtualized storage subsystem. In this
case, you might have to add checks that relate to the specific storage subsystem that you
want to migrate.
Be careful when you are using slower disk subsystems for the secondary volumes for
high-performance primary volumes because the SAN Volume Controller cache might not
be able to buffer all the writes. Flushing cache writes to slower back-end storage might
impact performance of your hosts.
For more information, see Chapter 13, “RAS, monitoring, and troubleshooting” on page 689.
This application currently supports only upgrades from 2145-CF8, 2145-CG8, and 2145-DH8 nodes to SV1 nodes. For more information, see:
https://ports.eu-gb.mybluemix.net/
Tip: Technically, almost all storage controllers provide both striping (in the form of RAID 5,
RAID 6, or RAID 10) and a form of caching. The real benefit of SAN Volume Controller is
the degree to which you can stripe the data across disks in a storage pool, even if they are
installed in different back-end storage systems. This technique maximizes the number of
active disks available to service I/O requests. The SAN Volume Controller provides
additional caching, but its impact is secondary for sustained workloads.
To ensure the performance that you want and verify the capacity of your storage
infrastructure, undertake a performance and capacity analysis to reveal the business
requirements of your storage environment. Use the analysis results and the guidelines in this
chapter to design a solution that meets the business requirements of your organization.
When considering performance for a system, always identify the bottleneck and, therefore,
the limiting factor of a specific system. This is a multidimensional analysis that needs to be
performed for each of your workload patterns. There can be different bottleneck components
for different workloads.
When you are designing a storage infrastructure with the SAN Volume Controller or
implementing a SAN Volume Controller in an existing storage infrastructure, you must ensure
that the performance and capacity of the SAN, back-end disk subsystems and SAN Volume
Controller meets the requirements for the set of known or expected workloads.
3.19.1 SAN
The following SAN Volume Controller models are supported for software V8.1:
2145-DH8
2145-SV1
All of these models can connect to 8 Gbps and 16 Gbps switches (2 Gbps and 4 Gbps are no longer supported). Correct zoning on the SAN switch provides both security and performance. Implement a dual HBA approach at the host to access the SAN Volume Controller.
The SAN Volume Controller is designed to handle many paths to the back-end storage.
In most cases, the SAN Volume Controller can improve performance, especially of mid-sized
to low-end disk subsystems, older disk subsystems with slow controllers, or uncached disk
systems, for the following reasons:
The SAN Volume Controller can stripe across disk arrays, and it can stripe across the
entire set of configured physical disk resources.
The SAN Volume Controller 2145-DH8 has 32 GB of cache (64 GB of cache with a second CPU used for hardware-assisted compression acceleration for IBM Real-time Compression (RtC) workloads). The SAN Volume Controller 2145-SV1 has at least 64 GB (up to 256 GB) of cache.
The SAN Volume Controller can provide automated performance optimization of hot spots
by using flash drives and Easy Tier.
The SAN Volume Controller large cache and advanced cache management algorithms also
allow it to improve the performance of many types of underlying disk technologies. The SAN
Volume Controller capability to asynchronously manage destaging operations incurred by
writes while maintaining full data integrity has the potential to be important in achieving good
database performance.
Because hits to the cache can occur both in the upper (SAN Volume Controller) and the lower
(back-end storage disk controller) level of the overall system, the system as a whole can use
the larger amount of cache wherever it is located. Therefore, SAN Volume Controller cache
provides additional performance benefits for back-end storage systems with extensive cache
banks.
Also, regardless of their relative capacities, both levels of cache tend to play an important role
in enabling sequentially organized data to flow smoothly through the system.
However, SAN Volume Controller cannot increase the throughput potential of the underlying
disks in all cases. Performance benefits depend on the underlying storage technology and the
workload characteristics, including the degree to which the workload exhibits hotspots or
sensitivity to cache size or cache algorithms.
Assuming that no bottlenecks exist in the SAN or on the disk subsystem, you must follow
specific guidelines when you perform the following tasks:
Creating a storage pool
Creating volumes
Connecting to or configuring hosts that use storage presented by a SAN Volume
Controller clustered system
For more information about performance and preferred practices for the SAN Volume
Controller, see IBM System Storage SAN Volume Controller and Storwize V7000 Best
Practices and Performance Guidelines, SG24-7521.
Although the technology is easy to implement and manage, it is helpful to understand the
basics of internal processes and I/O workflow to ensure a successful implementation of any
storage solution.
The following are some general suggestions:
Best results can be achieved if the data compression ratio stays at 25% or above. Volumes can be scanned with the built-in Comprestimator utility to help decide whether RtC is a good choice for a specific volume (a CLI sketch follows this list).
More concurrency within the workload gives a better result than single-threaded
sequential I/O streams.
I/O is de-staged to RACE from the upper cache in 64 KiB pieces. The best results are
achieved if the host I/O size does not exceed this size.
Volumes that are used for only one purpose usually have the same work patterns. Mixing
database, virtualization, and general-purpose data within the same volume might make
the workload inconsistent. These workloads might have no stable I/O size and no specific
work pattern, and a below-average compression ratio, making these volumes hard to
investigate during performance degradation. Real-time Compression development
advises against mixing data types within the same volume whenever possible.
It is best not to recompress data that is already compressed. Volumes that contain pre-compressed data should remain uncompressed volumes.
Volumes with encrypted data have a very low compression ratio and are not good
candidates for compression. This observation is true for data encrypted by the host.
Real-time Compression might provide satisfactory results for volumes encrypted by SAN
Volume Controller because compression is performed before encryption.
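The built-in Comprestimator can also be driven from the CLI. The following sketch analyzes a single volume and then lists the estimated savings; the volume ID 0 is an assumption for this example:

analyzevdisk 0            # start a Comprestimator analysis of volume ID 0
lsvdiskanalysisprogress   # check whether the analysis has completed
lsvdiskanalysis 0         # display the estimated compression savings for the volume

To analyze every volume on the system in one pass, the analyzevdiskbysystem command can be used instead.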
For more information about using IBM Spectrum Control to monitor your storage subsystem,
see this website:
http://www.ibm.com/systems/storage/spectrum/control/
Also, see IBM Spectrum Family: IBM Spectrum Control Standard Edition, SG24-8321.
Chapter 4. Initial configuration
Additional features such as user authentication, secure communications, and local port
masking are also covered. These features are optional and do not need to be configured
during the initial configuration.
4.2 System initialization
This section provides step-by-step instructions on how to create the SVC cluster. The
procedure is performed by using the technician port for 2145-SV1 and 2145-DH8 models.
Attention: Do not repeat the instructions for system initialization on more than one node.
After system initialization completes, use the management GUI to add more nodes to the
system. See 4.3.2, “Adding nodes” on page 115 for information about how to perform this
task.
During system initialization, you must specify either an IPv4 or an IPv6 system address. This
address is given to Ethernet port 1. After system initialization, you can specify additional IP
addresses for port 1 and port 2 until both ports have an IPv4 address and an IPv6 address.
Choose any 2145-SV1 or 2145-DH8 node that you want to be a member of the cluster being
created, and connect a personal computer (PC) or notebook to the technician port on the rear
of the node.
Figure 4-1 shows the location of the technician port on the 2145-SV1 model.
Figure 4-2 shows the location of the technician port on the 2145-DH8 model.
The technician port runs a DHCP server that provides an IPv4 address, so ensure that your PC or notebook Ethernet port is configured for DHCP if you want the IP address to be assigned automatically. If your PC or notebook does not use DHCP, set a static IP address of 192.168.0.2 on the Ethernet port.
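For illustration only, the following commands set the static address of 192.168.0.2 on the notebook. The interface names are assumptions and depend on your operating system:

On a Windows notebook (interface name "Ethernet" is an assumption):
netsh interface ip set address "Ethernet" static 192.168.0.2 255.255.255.0
On a Linux notebook (interface name eth0 is an assumption):
ip addr add 192.168.0.2/24 dev eth0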
Note: The SVC does not provide IPv6 IP addresses for the technician port.
Note: During the system initialization, you are prompted to accept untrusted certificates
because the system certificates are self-signed. You can accept these because they
are not harmful.
2. The welcome dialog box opens, as shown in Figure 4-3. Click Next to start the procedure.
3. Select the first option, As the first node in a new system, as shown in Figure 4-4. Click
Next.
Figure 4-4 System initialization: Configuring the first node in a new system
4. Enter the IP address details for the new system. You can choose between an IPv4 or IPv6
address. In this example an IPv4 address is set, as shown in Figure 4-5. Click Next.
6. After the system initialization is complete, follow the instructions shown in Figure 4-7:
a. Disconnect the Ethernet cable from the technician port and from your PC or notebook.
b. Connect the PC or notebook to the same network as the system.
c. Click Finish to be redirected to the management GUI to complete the system setup.
Note: You can access the management GUI from any management console that is
connected to the same network as the system. Enter the system IP address on a
supported browser to access the management GUI.
4.3 System setup
This section provides step-by-step instructions on how to define the basic settings of the
system with the system setup wizard, and on how to add additional nodes and optional
expansion enclosures.
Note: The first time that you connect to the management GUI, you are prompted to accept
untrusted certificates because the system certificates are self-signed. You can accept
these certificate because they are not harmful.
You can install certificates signed by a third-party certificate authority after you complete
system setup. See 4.5, “Configuring secure communications” on page 134 for instructions
on how to perform this task.
Important: The default password for the superuser account is passw0rd (the number
zero and not the letter O).
3. Carefully read the license agreement. Select I agree with the terms in the license
agreement when you are ready, as shown in Figure 4-10. Click Next.
4. Enter a new password for superuser, as shown in Figure 4-11. The password length is 6 -
64 characters and it cannot begin or end with a space. Click Apply and Next.
6. Enter either the number of tebibytes (TiB) or the number of Storage Capacity Units (SCUs)
licensed for each function as authorized by your license agreement. Figure 4-13 shows
some values as an example only.
Note: Encryption uses a different licensing scheme and is activated later in the wizard.
Note: If you choose to manually enter these settings, you cannot select the 24-hour
clock at this time. However, you can select the 24-hour clock after you complete the
wizard by clicking Settings → System and selecting Date and Time.
8. Select whether the encryption feature was purchased for this system. In this example, it is
assumed encryption was not purchased, as shown in Figure 4-15. Click Next.
Note: If you have purchased the encryption feature, you are prompted to activate your
encryption license either manually or automatically. For information about how to
activate your encryption license during the system setup wizard, see Chapter 12,
“Encryption” on page 633.
Note: If your system is not in the US, complete the state or province field with XX.
10.Enter the contact details of the person to be contacted to resolve issues on the system.
You can choose to enter the details for a 24-hour operations desk. Figure 4-17 shows
some details as an example only. Click Apply and Next.
SVC can use SNMP traps, syslog messages, and call home to notify you and IBM Support
when significant events are detected. Any combination of these notification methods can
be used simultaneously. However, only call home is configured during the system setup
wizard. For information about how to configure other notification methods, see Chapter 13,
“RAS, monitoring, and troubleshooting” on page 689.
Note: When call home is configured, the system automatically creates a support
contact with one of the following email addresses, depending on country or region of
installation:
US, Canada, Latin America, and Caribbean Islands: [email protected]
All other countries or regions: [email protected]
If you do not want to configure call home now, it can be done later by navigating to
Settings → Notifications.
12.A summary of all the changes is displayed, as shown in Figure 4-19. Confirm that the
changes are correct and click Finish.
13.The message shown in Figure 4-20 opens, confirming that the setup is complete. Click
Close. You are automatically redirected to the management GUI Dashboard.
During system setup, if there is only one node on the fabric that is not part of the cluster, that
node is added automatically. If there is more than one node, no node is added automatically.
Figure 4-22 shows the System window for a system with two nodes and no other nodes
visible on the fabric.
Figure 4-23 shows the System window for a system with one node and seven other nodes
visible in the fabric.
Figure 4-23 System window: One node and seven nodes to add
If you have purchased only two nodes, all nodes are already part of the cluster. If you have
purchased more than two nodes, you must manually add them to the cluster. See 4.3.2,
“Adding nodes” on page 115 for instructions on how to perform this task.
When all nodes are part of the cluster, you can install the optional expansion enclosures. See
4.3.4, “Adding expansion enclosures” on page 121 for instructions about how to perform this
task. If you have no expansion enclosures to install, system setup is complete.
Completing system setup means that all mandatory steps of the initial configuration have
been completed and you can start configuring your storage. Optionally, you can configure
other features, such as user authentication, secure communications, and local port masking.
Before beginning this process, ensure that the new nodes are correctly installed and cabled to
the existing system. Ensure that the Ethernet and Fibre Channel connectivity is correctly
configured and that the nodes are powered on.
2. Select the nodes that you want to add to each I/O group, as shown in Figure 4-26.
You can turn on the identify LED lights on a node by clicking the icon on the right of the
node, as shown in Figure 4-27.
3. Click Finish and wait for the nodes to be added to the system.
Note: For more information about adding spare nodes to the system, see 13.4.4,
“Updating IBM Spectrum Virtualize with a Hot Spare Node” on page 713.
This procedure is the same whether you are configuring the system for the first time or
expanding it afterward.
Before commencing, ensure that the spare nodes are correctly installed and cabled to the
existing system. Ensure that the Ethernet and Fibre Channel connectivity has been correctly
configured and that the nodes are powered on.
Complete the following steps to add spare nodes to the system:
1. Click Actions and then Add Nodes, as shown in Figure 4-29.
2. Select the nodes that you want to add to the system as hot spares, as shown in
Figure 4-30.
3. Click Finish and wait for the nodes to be added to the system.
The second view window titled Hot Spare displays all spare nodes that are configured for the
system, as shown in Figure 4-32.
4.3.4 Adding expansion enclosures
Before continuing, ensure that the new expansion enclosures are correctly installed, cabled to
the existing system, and powered on. If all prerequisites are fulfilled, the Systems window
displays an empty expansion under the two nodes of the I/O group it is attached to, as shown
in Figure 4-33. The plus sign means that there are expansions that are not yet added to the
system.
3. Review the summary in the next dialog box. Click Finish to add the expansions to the
system. The new expansions are now displayed in the Systems window. Figure 4-35
shows a system with eight nodes and two expansion enclosures installed under I/O
group 0.
4.4 Configuring user authentication
There are two methods of user authentication to control access to the GUI and to the CLI:
Local authentication is performed within the SVC system. Local GUI authentication is
done with user name and password. Local CLI authentication is done either with an SSH
key or a user name and password.
Remote authentication allows users to authenticate to the system using credentials stored
on an external authentication service. This feature means that you can use the passwords
and user groups defined on the remote service to simplify user management and access,
to enforce password policies more efficiently, and to separate user management from
storage management.
Note: Superuser is the only user allowed to log in to the Service Assistant Tool. It is also
the only user allowed to run sainfo and satask commands through the CLI.
Superuser is a member of the SecurityAdmin user group, which is the most privileged role
within the system.
The password for superuser is set by the user during system setup. The superuser password
can be reset to its default value of passw0rd using the technician port.
User names can contain up to 256 printable American Standard Code for Information
Interchange (ASCII) characters. Forbidden characters are the single quotation mark ('), colon
(:), percent symbol (%), asterisk (*), comma (,), and double quotation mark ("). A user name
cannot begin or end with a blank space.
Passwords for local users can be up to 64 printable ASCII characters. There are no forbidden
characters. However, passwords cannot begin or end with blanks.
For CLI or file transfer access, a local user can authenticate with a password, an SSH key, or both; key authentication is attempted first, with the password as a fallback. For GUI access, only the password is used.
Note: Local users are created for each SVC system. If you want to allow access for a user
on multiple systems, you must define the user in each system with the same name and the
same privileges.
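Local users can also be created from the CLI. The following is a minimal sketch; the user name, user group, and password are examples only:

mkuser -name jdoe -usergrp Administrator -password Sup3rSecret   # create a local user with the Administrator role
lsuser                                                           # list the configured users and their user groups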
Users that are authenticated by an LDAP server can log in to the management GUI and the
CLI. These users do not need to be configured locally for CLI access, nor do they need an
SSH key configured to log in using the CLI.
If multiple LDAP servers are available, you can assign multiple LDAP servers to improve
availability. Authentication requests are processed by those LDAP servers that are marked as
preferred unless the connections fail or a user is not found. Requests are distributed across
all preferred servers for load balancing in a round-robin fashion.
Note: All LDAP servers that are configured within the same system must be of the same
type.
If users that are part of a group on the LDAP server are to be authenticated remotely, a user
group with an identical name must exist on the system. The user group name is case
sensitive. The user group must also be enabled for remote authentication on the system.
A user who is authenticated remotely is granted permissions according to the role that is
assigned to the user group of which the user is a member.
To configure remote authentication using LDAP, start by enabling remote authentication:
1. Click Settings → Security, and select Remote Authentication and then Configure
Remote Authentication, as shown in Figure 4-36.
2. Enter the LDAP settings. Note that these settings are not server specific. They are
common to every server configured. Extra optional settings are available by clicking
Advanced Settings. The following settings are available:
– LDAP type
• IBM Tivoli Directory Server (for IBM Security Directory Server)
• Microsoft Active Directory
• Other (for OpenLDAP)
In this example, we configure an OpenLDAP server, as shown in Figure 4-37 on
page 126.
– Security
Choose between None, SSL, or Transport Layer Security. Using some form of
security ensures that user credentials are encrypted before being transmitted. Select
SSL to use LDAP over SSL (LDAPS) to establish secure connections using port 636
for negotiation and data transfer. Select Transport Layer Security to establish secure
connections using Start TLS, allowing both encrypted and unencrypted connections to
be handled by the same port.
– Service Credentials
This is an advanced and optional setting. Leave Distinguished Name and Password
empty if your LDAP server supports anonymous bind. In this example, we enter the
credentials of an existing user on the LDAP server with permission to query the LDAP
directory. You can enter this information in the format of an email address (for example,
[email protected]) or as a distinguished name (for example,
cn=Administrator,cn=users,dc=ssd,dc=hursley,dc=ibm,dc=com in Figure 4-38).
– User Attribute
This LDAP attribute is used to determine the user name of remote users. The attribute
must exist in your LDAP schema and must be unique for each of your users.
This is an advanced setting that defaults to sAMAccountName for Microsoft Active
Directory and to uid for IBM Security Directory Server and OpenLDAP.
– Group Attribute
This LDAP attribute is used to determine the user group memberships of remote users.
The attribute must contain either the distinguished name of a group or a
colon-separated list of group names.
This is an advanced setting that defaults to memberOf for Microsoft Active Directory and
OpenLDAP and to ibm-allGroups for IBM Security Directory Server. For OpenLDAP
implementations, you might need to configure the memberOf overlay if it is not in place.
– Audit Log Attribute
This LDAP attribute is used to determine the identity of remote users. When an LDAP
user performs an audited action, this identity is recorded in the audit log. This is an
advanced setting that defaults to userPrincipalName for Microsoft Active Directory and
to uid for IBM Security Directory Server and OpenLDAP.
3. Enter the server settings for one or more LDAP servers, as shown in Figure 4-39 on
page 128. To add more servers, click the plus (+) icon. The following settings are available:
– Preferred
Authentication requests are processed by the preferred servers unless the connections
fail or a user is not found. Requests are distributed across all preferred servers for load
balancing. Select Preferred to set the server as a preferred server.
– IP Address
The IP address of the server.
– Base DN
The distinguished name to use as a starting point for searching for users on the server
(for example, dc=ssd,dc=hursley,dc=ibm,dc=com).
– SSL Certificate
The SSL certificate that is used to securely connect to the LDAP server. This certificate
is required only if you chose to use SSL or Transport Layer Security as a security
method earlier.
Click Finish to save the settings.
Now that remote authentication is enabled, the remote user groups must be configured. You
can use the default built-in user groups for remote authentication. However, remember that the names of the default user groups cannot be changed. If the LDAP server already contains
a group that you want to use, the name of the group must be changed on the server side to
match the default name. Any user group, whether default or self-defined, must be enabled for
remote authentication.
Complete the following steps to create a user group with remote authentication enabled:
1. Click Access → Users and select Create User Group, as shown in Figure 4-40.
2. Enter the details for the new group. Select Enable for this group to enable remote
authentication, as shown in Figure 4-41. Click Create.
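The same result can be achieved from the CLI. The group name LDAPAdmins and the Administrator role are assumptions for this sketch; the group name must exactly match the group that is defined on the LDAP server, including case:

mkusergrp -name LDAPAdmins -role Administrator -remote   # create a user group with remote authentication enabled
lsusergrp                                                # verify the group and its remote setting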
2. Select Enable for this group, as shown in Figure 4-43.
When you have at least one user group enabled for remote authentication, make sure that the
LDAP server is configured correctly by verifying that the following conditions are true:
The name of the user group on the LDAP server matches the one you just modified or
created.
Each user that you want to authenticate remotely is a member of the appropriate user
group for the intended system role.
Figure 4-45 shows the result of a successful connection test. If the connection is not
successful, an error is logged in the event log.
There is also the option to test a real user authentication attempt. Click Settings →
Security → Remote Authentication, and select Global Actions and then Test LDAP
Authentication, as shown in Figure 4-46.
Enter the user credentials of a user defined on the LDAP server, as shown in Figure 4-47.
Click Test.
Again, the message CMMVC7075I The LDAP task completed successfully is shown after a
successful test.
Both the connection test and the authentication test must complete successfully to ensure
that LDAP authentication works correctly. Assuming both tests succeed, users can log in to
the GUI and CLI using their network credentials.
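Both tests can also be run from the CLI with the testldapserver command. The user name and password shown are assumptions for this example; running the command without parameters tests only the server connection:

testldapserver                                        # test the connection to the configured LDAP servers
testldapserver -username jdoe -password Sup3rSecret   # test a full authentication attempt for an LDAP user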
A user can log in with their short name (that is, without the domain component) or with the
fully qualified user name in the form of an email address.
The rights of a user who belongs to a specific user group are defined by the role that is
assigned to the user group. It is the role that defines what a user can or cannot do on the
system.
SVC provides six user groups and seven roles by default, as shown in Table 4-2. The
VasaProvider role is not associated with a default user group.
Note: The VasaProvider role is used to allow VMware to interact with the system when
implementing Virtual Volumes. Avoid using this role for users who are not controlled by
VMware.
Table 4-2   Default user groups and roles

User group         Role
SecurityAdmin      SecurityAdmin
Administrator      Administrator
CopyOperator       CopyOperator
Service            Service
Monitor            Monitor
RestrictedAdmin    RestrictedAdmin
-                  VasaProvider
Signed SSL certificates are issued by a third-party certificate authority. A browser maintains a
list of trusted certificate authorities, identified by their root certificate. The root certificate must
be included in this list in order for the signed certificate to be trusted. If it is not, the browser
presents security warnings.
To see the details of your current system certificate, click Settings → Security and select
Secure Communications, as shown in Figure 4-48.
SVC allows you to generate a new self-signed certificate or to configure a signed certificate.
Attention: Before generating a request, ensure that your current browser does not
have restrictions on the type of keys that are used for certificates. Some browsers limit
the use of specific key-types for security and compatibility issues.
3. Save the generated request file. The Secure Communications window now mentions that
there is an outstanding certificate request, as shown in Figure 4-50. This is the case until
the associated signed certificate is installed.
Attention: If you need to update a field in the certificate request, you can generate a
new request. However, do not generate a new request after sending the original one to
the certificate authority. Generating a new request overrides the original one and the
signed certificate associated with the original request cannot be installed.
7. You are prompted to confirm the action, as shown in Figure 4-52. Click Yes to proceed.
The signed certificate is installed.
4.5.2 Generating a self-signed certificate
Complete the following steps to generate a self-signed certificate:
1. Select Update Certificate on the Secure Communications window.
2. Select Self-signed certificate and enter the details for the new certificate. Key type and
validity days are the only mandatory fields. Figure 4-53 shows some values as an
example.
Attention: Before creating a new self-signed certificate, ensure that your current
browser does not have restrictions on the type of keys that are used for certificates.
Some browsers limit the use of specific key-types for security and compatibility issues.
Click Update.
With Fibre Channel port masking, you control the use of Fibre Channel ports. You can control
whether the ports are used to communicate to other nodes within the same local system, and
whether they are used to communicate with nodes in partnered systems. Fibre Channel port masking does not affect host or storage traffic. It applies only to node-to-node communication within a system and to replication between systems.
Note: This section only applies to local port masking. For information about configuring the
partner port mask for intercluster node communications, see 11.6.4, “Remote copy
intercluster communication” on page 550.
The setup of Fibre Channel port masks is useful when you have more than four Fibre
Channel ports on any node in the system because it saves setting up many SAN zones on
your switches. Fibre Channel I/O ports are logical ports, which can exist on Fibre Channel
platform ports or on FCoE platform ports. Using a combination of port masking and fabric
zoning, you can ensure that the number of logins per node is not more than the limit. If a node receives more than 16 logins from another node, node error 860 is reported.
The port masks apply to all nodes on a system. A different port mask cannot be set on nodes
in the same system. You do not have to have the same port mask on partnered systems.
Note: The lsfabric command shows all of the paths that are possible in IBM Spectrum
Virtualize (as defined by zoning) independent of their usage. Therefore, the command
output includes paths that will not be used because of port masking.
A port mask is a string of zeros and ones. The last digit in the string represents port one. The
previous digits represent ports two, three, and so on. If the digit for a port is “1”, the port is
enabled and the system attempts to send and receive traffic on that port. If it is “0”, the system
does not send or receive traffic on the port. If there are not sufficient digits in the string to
specifically set a port number, that port is disabled for traffic.
For example, if the local port mask is set to 101101 on a node with eight Fibre Channel ports,
ports 1, 3, 4 and 6 are able to connect to other nodes in the system. Ports 2, 5, 7, and 8 do
not have connections. On a node in the system with only four Fibre Channel ports, ports 1, 3,
and 4 are able to connect to other nodes in the system.
The Fibre Channel ports for the system can be viewed by navigating to Settings → Network
and opening the Fibre Channel Ports menu, as shown in Figure 4-55. Port numbers refer to
the Fibre Channel I/O port IDs.
When replacing or upgrading your node hardware to newer models, consider that the number
of Fibre Channel ports and their arrangement might have changed. Take this possible change
into consideration and ensure that any configured port masks are still valid for the new
configuration.
4.6.2 Setting the local port mask
Take the example fabric port configuration shown in Figure 4-57.
The positions in the mask represent the Fibre Channel I/O port IDs with ID 1 in the rightmost
position. In this example, ports A1, A2, A3, A4, B1, B2, B3, and B4 correspond to FC I/O port
IDs 1, 2, 3, 4, 5, 6, 7 and 8.
To set the local port mask, use the chsystem command. For local node-to-node
communication, apply a mask that limits communication to ports A1, A2, A3, and A4 by
applying a port mask of 00001111 to both systems, as shown in Example 4-1.
Example 4-1 Setting a local port mask using the chsystem command
IBM_Storwize:ITSO:superuser>chsystem -localfcportmask 00001111
IBM_Storwize:ITSO:superuser>
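To verify the result, you can display the active masks with lssystem and review the fabric logins with lsfabric. This is a sketch of the verification only:

lssystem    # the local_fc_port_mask and partner_fc_port_mask fields show the active masks
lsfabric    # lists all possible paths, including paths that the mask prevents from being used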
Chapter 5. Graphical user interface
This chapter explains the basic view and the configuration procedures that are required to get your IBM SAN Volume Controller environment running as quickly as possible by using the GUI.
This chapter does not describe advanced troubleshooting or problem determination and
some of the complex operations (compression, encryption) because they are explained later
in this book.
Throughout the chapter, all GUI menu items are introduced in a systematic, logical order as
they appear in the GUI. However, topics that are described more in detail in other chapters of
the book are not covered in depth and are only referred to here. For example, Pools, Volumes,
Hosts, and Copy Services are described in dedicated chapters that include their associated
GUI operations.
Demonstration: The IBM Client Demonstration Center has a demo of the V8.1 GUI here:
https://www.ibm.com/systems/clientcenterdemonstrations/faces/dcDemoView.jsp?demoId=2641
For illustration, the examples configure the IBM SAN Volume Controller (SVC) cluster in a
standard topology.
Multiple users can be logged in to the GUI at any time. However, no locking mechanism
exists, so be aware that if two users change the same object at the same time, the last action
that is entered from the GUI is the action that takes effect.
IBM Spectrum Virtualize V8.1 introduced a major change in the GUI design to align it with the unified look and visual style of other IBM products. Also, some features and options to manage the SVC have been added, and others now have a more limited set of configurable attributes. This chapter highlights these additions and limitations as compared to the previous V7.8 version.
Important: Data entries that are made through the GUI are case-sensitive.
You must enable JavaScript in your browser. For Mozilla Firefox, JavaScript is enabled by
default and requires no additional configuration. For more information about configuring
your web browser, go to this website:
https://ibm.biz/BdjKmU
It is preferable for each user to have their own unique account. The default user accounts should be disabled, or their passwords changed and kept secured for emergency purposes only. This approach helps to identify the personnel working on the system and to track all important changes that they make. The superuser account should be used for initial configuration only.
After a successful login, the V8.1 welcome window shows up with the new system dashboard
(Figure 5-2).
System Health indicates the current status of all critical system components grouped in
three categories: Hardware, logical, and connectivity components. From each group, you
can navigate directly to the section of GUI where the affected component is managed from
(Figure 5-5).
In V8.1, the Dashboard appears as a welcome page instead of the system pane that was shown in previous versions. The system overview has been relocated to Monitoring → System. Although the Dashboard pane provides key information about system behavior, the System menu is the preferred starting point to obtain the necessary details about your SVC components. This advice is followed in the next sections of this chapter.
5.2 Introduction to the GUI
As shown in Figure 5-6, the former IBM SAN Volume Controller GUI System pane has been
relocated to Monitoring → System.
In this case, the GUI warns you that no host is defined yet. You can directly perform the task
from this window or cancel it and run the procedure later at any convenient time. Other
suggested tasks that typically appear after the initial system configuration are to create a
volume and configure a storage pool.
The dynamic IBM Spectrum Virtualize menu contains the following panes:
Dashboard
Monitoring
Pools
Volumes
Hosts
Copy Services
Access
Settings
Alerts indication
The left icon in the notification area informs administrators about important alerts in the
systems. Click the icon to list warning messages in yellow and errors in red (Figure 5-10).
You can navigate directly to the Events menu by clicking the View All Events option, or see each event message separately by clicking the Details icon of the specific message, analyze the content, and eventually run the suggested fix procedures (Figure 5-11).
In our case, shown in Figure 5-12, we have not yet defined any hosts attached to the system. Therefore, the system suggests that we do so and offers direct access to the associated host menu. Click Run Task to define the host according to the procedure explained in Chapter 8, “Hosts” on page 337. If you do not want to define any host at the moment, click Not Now and the suggestion message disappears.
Similarly, you can analyze the details of running tasks, either all of them together in one
window or of a single task. Click View to open the volume format job as shown in Figure 5-13.
Help
To access online help, click the question mark icon in the left of the notification area and
select the context-based help topic, as shown in Figure 5-14. The help window displays the
context item for the pane that you are working on.
For example, on the System pane, you have the option to open help related to the system in
general as shown in Figure 5-15.
The Help Contents option redirects you to the SVC IBM Knowledge Center. However, it
requires internet access from the workstation where the management GUI is started.
The following content of the chapter helps you to understand the structure of the pane and
how to navigate to various system components to manage them more efficiently and quickly.
Table filtering
On most pages, a Filter option (magnifying glass icon) is available on the upper-left side of the
window. Use this option if the list of object entries is too long.
Complete the following steps to use search filtering:
1. Click Filter on the upper-left side of the window, as shown in Figure 5-17, to open the
search box.
2. Enter the text string that you want to filter and press Enter.
3. By using this function, you can filter your table that is based on column names. In our
example, a volume list is displayed that contains the names that include DS somewhere in
the name. DS is highlighted in amber, as shown in Figure 5-18. The search option is not
case-sensitive.
4. Remove this filtered view by clicking the Reset Filter icon, as shown in Figure 5-19.
Filtering: This filtering option is available in most menu options of the GUI.
For example, on the Volumes pane, complete the following steps to add a column to the table:
1. Right-click any column headers of the table or select the icon in the left corner of the table
header. A list of all of the available columns is displayed, as shown in Figure 5-20.
2. Select the column that you want to add (or remove) from this table. In our example, we
added the volume ID column and sorted the content by ID, as shown on the left in
Figure 5-21.
3. You can repeat this process several times to create custom tables to meet your
requirements.
4. You can always return to the default table view by selecting Restore Default View in the
column selection menu, as shown in Figure 5-22.
Sorting: By clicking a column, you can sort a table based on that column in ascending or
descending order.
The following section describes each option on the Monitoring menu (Figure 5-24).
When you click a specific component of a node, a pop-up window indicates the details of the
component. By right-clicking and selecting Properties, you see detailed technical attributes,
such as CPU, memory, serial number, node name, encryption status, and node status (online
or offline) as shown in Figure 5-26.
In an environment with multiple IBM SAN Volume Controller clusters, you can easily direct the onsite personnel or technician to the correct device by enabling the identification LED on the front panel. Click Identify in the window that is shown in Figure 5-27.
Wait for confirmation from the technician that the device in the data center was correctly
identified.
Alternatively, you can use the SVC command-line interface (CLI) to get the same results.
Type the following commands in this sequence:
1. Type svctask chnode -identify yes 1 (or just type chnode -identify yes 1).
2. Type svctask chnode -identify no 1 (or just type chnode -identify no 1).
Each system that is shown in the System view pane can be rotated by 180° to see its rear
side. Click the rotation arrow in the lower-right corner of the device, as illustrated in
Figure 5-29.
5.4.2 Events
The Events option, available from the Monitoring menu, tracks all informational, warning,
and error messages that occur in the system. You can apply various filters to sort them, or
export them to an external comma-separated values (CSV) file. Figure 5-30 provides an
example of records in the system event log.
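If you prefer the CLI, the same entries can be listed with the lseventlog command, as in the following sketch. Additional filtering and ordering parameters exist and should be verified against the command-line reference for your code level:
lseventlog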
For the error messages with the highest internal priority, perform corrective actions by running
fix procedures. Click the Run Fix button as shown in Figure 5-30 on page 160. The fix
procedure wizard opens as indicated in Figure 5-31.
The wizard guides you through the troubleshooting and fixing process either from a hardware
or software perspective. If you consider that the problem cannot be fixed without a
technician’s intervention, you can cancel the procedure execution at any time. Details about
fix procedures are discussed in Chapter 13, “RAS, monitoring, and troubleshooting” on
page 689.
5.4.3 Performance
The Performance pane reports the general system statistics that relate to processor (CPU)
utilization, host and internal interfaces, volumes, and MDisks. You can switch between MBps
or IOPS, or even drill down in the statistics to the node level. This capability might be useful
when you compare the performance of each node in the system if problems exist after a node
failover occurs. See Figure 5-32.
The charts that are shown in Figure 5-33 represent 5 minutes of the data stream. For in-depth
storage monitoring and performance statistics with historical data about your SVC system,
use IBM Spectrum Control (the former IBM Tivoli Storage Productivity Center for Disk, also
available as part of IBM Virtual Storage Center).
You can switch between each type (group) of operation, but you cannot show them all in one
list (Figure 5-35).
5.5 Pools
The Pools menu option is used to configure and manage storage pools, internal and external
storage, and MDisks, and to migrate existing externally attached storage to the system.
The Pools menu contains the following items, which are accessible from the GUI (Figure 5-36):
Pools
Volumes by Pool
Internal Storage
External Storage
MDisks by Pool
System Migration
The details about storage pool configuration and management are provided in Chapter 6,
“Storage pools” on page 197.
5.6 Volumes
A volume is a logical disk that the system presents to attached hosts. Using GUI operations,
you can create different types of volumes, depending on the type of topology that is
configured on your system.
The Volumes menu contains the following items (Figure 5-37 on page 164):
Volumes
Volumes by Pool
Volumes by Host
Cloud Volumes
The details about all those tasks and guidance through the configuration and management
process are provided in Chapter 7, “Volumes” on page 251.
5.7 Hosts
A host system is a computer that is connected to the system through either a Fibre Channel
interface or an IP network. It is a logical object that represents a list of worldwide port names
(WWPNs) that identify the interfaces that the host uses to communicate with the SVC. Both
Fibre Channel and SAS connections use WWPNs to identify the host interfaces to the
systems.
Additional detailed information about configuration and management of hosts using the GUI is
available in Chapter 8, “Hosts” on page 337.
5.8 Copy Services
The IBM Spectrum Virtualize copy services and volumes copy operations are based on the
IBM FlashCopy function. In its basic mode, the function creates copies of content on a source
volume to a target volume. Any data that existed on the target volume is lost and is replaced
by the copied data.
More advanced functions allow FlashCopy operations to occur on multiple source and target
volumes. Management operations are coordinated to provide a common, single point-in-time
for copying target volumes from their respective source volumes. This technique creates a
consistent copy of data that spans multiple volumes.
The IBM SAN Volume Controller Copy Services menu offers the following operations in the
GUI (Figure 5-39):
FlashCopy
Consistency Groups
FlashCopy Mappings
Remote Copy
Partnerships
Because the Copy Services are one of the most important features for resiliency solutions,
study the additional technical details in Chapter 11, “Advanced Copy Services” on page 461.
5.9 Access
The Access menu in the GUI controls who can log in to the system, defines the access
rights for each user, and tracks what each privileged user has done on the system. It is
logically split into two categories:
Users
Audit Log
This section explains how to create, modify, or remove a user, and how to view the records in
the audit log.
5.9.1 Users
You can create local users who can access the system. These user types are defined based
on the administrative privileges that they have on the system.
Local users must provide either a password, a Secure Shell (SSH) key, or both. Local users
are authenticated through the authentication methods that are configured on the system. If
the local user needs access to the management GUI, a password is needed for the user. If
the user requires access to the CLI through SSH, either a password or a valid SSH key file is
necessary. Local users must be part of a user group that is defined on the system. User
groups define roles that authorize the users within that group to a specific set of operations on
the system.
To define your User Group in the IBM SAN Volume Controller, click Access → Users as
shown in Figure 5-41.
The following user group roles exist in IBM Spectrum Virtualize:
Security Administrator can manage all functions of the systems except tasks associated
with the commands satask and sainfo.
Administrator has full rights in the system except those commands related to user
management and authentication.
Restricted Administrator has the same rights as Administrators except removing
volumes, host mappings, hosts, or pools. This is the ideal option for support personnel.
Copy Operators can start, stop, or pause any FlashCopy-based operations.
Monitor users have access to all viewing operations. They cannot change any value or
parameters of the system.
Service users can set the time and date on the system, delete dump files, add and delete
nodes, apply service, and shut down the system. They have access to all views.
VASA Provider users can manage VMware vSphere Virtual Volumes (VVOLs).
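Users can also be created from the CLI with the mkuser command. The following sketch creates a local user in the Monitor user group; the user name and password are examples only, and an SSH key can be assigned instead by using the -keyfile parameter:
mkuser -name gui_monitor -usergrp Monitor -password Passw0rd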
Deleting a user
To remove a user account, select the user in the same menu, click Actions, and select
Delete (Figure 5-43).
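The CLI equivalent is the rmuser command, as in the following sketch, where the user name is an example only:
rmuser gui_monitor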
An example of the audit log is shown in Figure 5-45.
Important: Failed commands are not recorded in the audit log. Commands triggered by
IBM Support personnel are recorded with the flag Challenge because they use
challenge-response authentication.
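The audit log can also be displayed from the CLI with the catauditlog command. The following sketch lists the five most recent entries:
catauditlog -first 5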
5.10 Settings
Use the Settings pane to configure system options for notifications, security, IP addresses,
and preferences that are related to display options in the management GUI (Figure 5-46).
The following options are available for configuration from the Settings menu:
Notifications: The system can use Simple Network Management Protocol (SNMP) traps,
syslog messages, and Call Home emails to notify you and the support center when
significant events are detected. Any combination of these notification methods can be
used simultaneously.
Notifications are normally sent immediately after an event is raised. However, events can
occur because of service actions that are performed. If a recommended service action is
active, notifications about these events are sent only if the events are still unfixed when the
service action completes.
Email notifications
The Call Home feature transmits operational and event-related data to you and IBM through a
Simple Mail Transfer Protocol (SMTP) server connection in the form of an event notification
email. When configured, this function alerts IBM service personnel about hardware failures
and potentially serious configuration or environmental issues.
2. The Email settings appear as shown in Figure 5-48.
3. This view provides the following useful information about the email notification and Call
Home settings, among others:
– The IP address and port of the email (SMTP) server
– The Call Home email address
– The email addresses of the users who are set to receive email notifications
– The contact information of the person in the organization who is responsible for the system
– The system location
To view the SNMP configuration, use the System window. Move the mouse pointer over
Settings and click Notification → SNMP (Figure 5-49).
From this window (Figure 5-49), you can view and configure an SNMP server to receive
various informational, error, or warning notifications by setting the following information:
IP Address
The address for the SNMP server.
Server Port
The remote port number for the SNMP server. The remote port number must be a value of
1 - 65535.
Community
The SNMP community is the name of the group to which devices and management
stations that run SNMP belong.
Event Notifications
Consider the following points about event notifications:
– Select Error if you want the user to receive messages about problems, such as
hardware failures, that must be resolved immediately.
– Select Warning if you want the user to receive messages about problems and
unexpected conditions. Investigate the cause immediately to determine any corrective
action.
– Select Info if you want the user to receive messages about expected events. No action
is required for these events.
To remove an SNMP server, click the Minus sign (-). To add another SNMP server, click
the Plus sign (+).
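An SNMP server can also be defined from the CLI with the mksnmpserver command. The following sketch uses an example address and community name; verify the parameters against your code level:
mksnmpserver -ip 10.1.1.20 -port 162 -community public -error on -warning on -info off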
Syslog notifications
The syslog protocol is a standard protocol for forwarding log messages from a sender to a
receiver on an IP network. The IP network can be IPv4 or IPv6. The system can send syslog
messages that notify personnel about an event. You can use the Syslog pane to configure the
syslog messages that are sent by the SVC. To view the syslog configuration, use the System
window, move the mouse pointer over Settings, and click Notification → Syslog
(Figure 5-50).
From this window, you can view and configure a syslog server to receive log messages from
various systems and store them in a central repository by entering the following information:
IP Address
The IP address for the syslog server.
Facility
The facility determines the format for the syslog messages. The facility can be used to
determine the source of the message.
Message Format
The message format depends on the facility. The system can transmit syslog messages in
the following formats:
– The concise message format provides standard detail about the event.
– The expanded format provides more details about the event.
Event Notifications
Consider the following points about event notifications:
– Select Error if you want the user to receive messages about problems, such as
hardware failures, that must be resolved immediately.
– Select Warning if you want the user to receive messages about problems and
unexpected conditions. Investigate the cause immediately to determine whether any
corrective action is necessary.
– Select Info if you want the user to receive messages about expected events. No action
is required for these events.
The syslog messages can be sent in concise message format or expanded message format.
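A syslog server can also be defined from the CLI with the mksyslogserver command, as in the following sketch. The address and facility are examples only; verify the parameters against your code level:
mksyslogserver -ip 10.1.1.30 -facility 0 -error on -warning on -info off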
5.10.2 Network
This section describes how to view the network properties of the IBM SAN Volume Controller
system. The network information is available by clicking Settings → Network, as shown in Figure 5-51.
Management IP addresses
To view the management IP addresses of IBM Spectrum Virtualize, move your mouse cursor
over Settings → Network and click Management IP Addresses. The GUI shows the
management IP address when you move the mouse cursor over the network ports, as shown
in Figure 5-52.
Service IP information
To view the Service IP information of your IBM Spectrum Virtualize, move your mouse cursor
over Settings → Network as shown in Figure 5-51 on page 174, and click the Service IP
Address option to view the properties as shown in Figure 5-53.
Unlike the management IP address, which addresses the system as a whole, the service IP
address connects directly to an individual node, for example for service operations. You can
select a node from the drop-down list and then click any of the ports that are shown in the
GUI. The service IP address can be configured to support IPv4 or IPv6.
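The management IP address can also be changed from the CLI. The following sketch, with example addresses only, sets the cluster IP address on port 1. Service IP addresses can be changed in a similar way with the chserviceip command, whose exact parameters should be verified in the command-line reference for your code level:
chsystemip -clusterip 10.1.1.40 -gw 10.1.1.1 -mask 255.255.255.0 -port 1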
iSCSI information
From the iSCSI pane in the Settings menu, you can display and configure parameters for the
system to connect to iSCSI-attached hosts, as shown in Figure 5-54.
Important: If you change the name of the system after iSCSI is configured, you might
need to reconfigure the iSCSI hosts.
To change the system name, click the system name and specify the new name.
System name: You can use the letters A - Z and a - z, the numbers 0 - 9, and the
underscore (_) character. The name can be 1 - 63 characters.
You can also enable Challenge Handshake Authentication Protocol (CHAP) to
authenticate the system and iSCSI-attached hosts with the specified shared secret.
The CHAP secret is the authentication method that is used to restrict access for other
iSCSI hosts that use the same connection. You can set the CHAP secret for the whole system
under the system properties or for each host definition. The CHAP secret must be identical on
the server and in the system or host definition. You can create an iSCSI host definition without
the use of a CHAP secret.
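CHAP can also be configured from the CLI. The following sketch, with an example secret and host name, sets a system-wide CHAP secret and a per-host secret; the parameters should be verified against your code level:
chsystem -chapsecret itso_secret
chhost -chapsecret itso_secret linux_host1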
3. From this pane, you can modify the following information:
– Time zone
Select a time zone for your system by using the drop-down list.
– Date and time
The following options are available:
• If you are not using a Network Time Protocol (NTP) server, select Set Date and
Time, and then manually enter the date and time for your system, as shown in
Figure 5-58. You can also click Use Browser Settings to automatically adjust the
date and time of your SVC system with your local workstation date and time.
• If you are using an NTP server, select Set NTP Server IP Address and then enter
the IP address of the NTP server, as shown in Figure 5-59.
4. Click Save.
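The same settings are available from the CLI, as in the following sketch. The time zone ID and NTP address are examples only; if no NTP server is used, the time can be set manually with the setsystemtime command:
lstimezones
settimezone -timezone 520
chsystem -ntpip 10.1.1.50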
Licensing
The system supports both differential and capacity-based licensing. For the virtualization and
compression functions, differential licensing charges different rates for different types of
storage, which provides cost-effective management of capacity across multiple tiers of
storage. Licensing for these functions is based on the number of Storage Capacity Units
(SCUs) purchased. For other functions, such as remote mirroring and FlashCopy, the license
grants a specific number of terabytes for that function.
3. In the Licensed Functions pane, you can set the licensing options for the SVC for the
following elements (limits are in TiB):
– External Virtualization
Enter the number of SCUs that are associated with External Virtualization for your
IBM SAN Volume Controller environment.
– FlashCopy Limit
Enter the capacity that is available for FlashCopy mappings.
Important: The Used capacity for FlashCopy mapping is the sum of all of the
volumes that are the source volumes of a FlashCopy mapping.
Important: The Used capacity for Global Mirror and Metro Mirror is the sum of the
capacities of all of the volumes that are in a Metro Mirror or Global Mirror
relationship. Both master volumes and auxiliary volumes are included.
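License values can also be set from the CLI with the chlicense command. The following sketch sets example FlashCopy and remote mirroring capacity limits; the exact parameters for differential SCU-based licensing should be verified against your code level:
chlicense -flash 20 -remote 20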
During system setup, you can activate the license using the authorization code. The
authorization code is sent with the licensed function authorization documents that you
receive after purchasing the license.
Encryption is activated on a per system basis and an active license is required for each
node that uses encryption. During system setup, the system detects the nodes that
support encryption and a license should be applied to each. If additional nodes are
added and require encryption, additional encryption licenses need to be purchased
and activated.
Update System
The update procedure is described in detail in Chapter 13, “RAS, monitoring, and
troubleshooting” on page 689.
VVOL management is enabled in the SVC in the System section, as shown in Figure 5-61. The
NTP server must be configured before you enable VVOL management. It is strongly advised to
use the same NTP server for ESXi and for the SVC.
Restriction: You cannot enable VVOLs support until the NTP server is configured in SVC.
For a quick-start guide to VVOLs, see Quick-start Guide to Configuring VMware Virtual
Volumes for Systems Powered by IBM Spectrum Virtualize, REDP-5321.
In addition, see Configuring VMware Virtual Volumes for Systems Powered by IBM Spectrum
Virtualize, SG24-8328.
Resources
Use this option to change memory limits for Copy Services and RAID functions per I/O group.
Copy Services features and RAID require that small amounts of volume cache be converted
from cache memory into bitmap memory to allow the functions to operate. If you do not have
enough bitmap space allocated when you try to use one of the functions, you will not be able
to complete the configuration.
Table 5-1 provides an example of the amount of memory that is required for remote mirroring
functions, FlashCopy functions, and volume mirroring.
For example, for the Remote Copy functions, with a grain size of 256 KiB, 1 MiB of bitmap
memory provides 2 TiB of total Metro Mirror, Global Mirror, or HyperSwap volume capacity.
IP Quorum
Starting with IBM Spectrum Virtualize V7.6, the IP quorum application was introduced for
enhanced stretched systems. When an IP-based quorum application is used as the quorum
device for the third site, no Fibre Channel connectivity to that site is required. The Java
application runs on hosts at the third site.
To start with IP Quorum, complete the following steps:
1. If your IBM SAN Volume Controller is configured with IP addresses version 4, click
Download IPv4 Application, or select Download IPv6 Application for systems running
with IP version 6. In our example, IPv4 is the option as shown in Figure 5-63.
2. Click Download IPv4 Application and IBM Spectrum Virtualize generates an IP Quorum
Java application as shown in Figure 5-64. The application can be saved and installed in a
host that is to run the IP quorum application.
3. On the host, you must use the Java command line to initialize the IP quorum application.
Change to the folder where the application is located and run java -jar ip_quorum.jar.
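Alternatively, the quorum application can be generated and checked from the CLI, as in the following sketch. It assumes the standard command names for this function, which should be verified against your code level: mkquorumapp creates the ip_quorum.jar file in the /dumps directory of the configuration node, and lsquorum shows whether the IP quorum application is active as a quorum device:
mkquorumapp
lsquorum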
I/O Groups
For ports within an I/O group, you can enable virtualization of Fibre Channel ports that are
used for host I/O operations. With N_Port ID virtualization (NPIV), the Fibre Channel port
consists of both a physical port and a virtual port. When port virtualization is enabled, ports
do not come up until they are ready to handle I/O, which improves host behavior around node
unpends. In addition, path failures due to an offline node are masked from hosts.
The target port mode on the I/O group indicates the current state of port virtualization:
Enabled: The I/O group contains virtual ports that are available to use.
Disabled: The I/O group does not contain any virtualized ports.
The port virtualization settings of I/O groups are available by clicking Settings → System →
I/O Groups, as shown in Figure 5-65.
You can change the status of the port by right-clicking the wanted I/O group and selecting
Change Target Port as indicated in Figure 5-66.
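The target port mode can also be queried and changed from the CLI, as in the following sketch. On systems with existing hosts, the transitional state is generally used as an intermediate step; verify the procedure for your environment before changing the mode:
lsiogrp
chiogrp -fctargetportmode transitional 0
chiogrp -fctargetportmode enabled 0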
To view and configure DNS server information in IBM Spectrum Virtualize, complete the
following steps:
1. In the left pane, click the DNS icon and enter the IP address and the name of each DNS
server. IBM Spectrum Virtualize supports up to two DNS servers, IPv4 or IPv6. See
Figure 5-67.
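DNS servers can also be defined from the CLI, as in the following sketch, where the server name and address are examples only and the commands are available on recent code levels:
mkdnsserver -name dns1 -ip 10.1.1.53
lsdnsserver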
With transparent cloud tiering, administrators can move older data to cloud storage to free up
capacity on the system. Point-in-time snapshots of data can be created on the system and
then copied and stored on the cloud storage. An external cloud service provider manages the
cloud storage, which reduces storage costs for the system. Before data can be copied to
cloud storage, a connection to the cloud service provider must be created from the system.
A cloud account is an object on the system that represents a connection to a cloud service
provider by using a particular set of credentials. These credentials differ depending on the
type of cloud service provider that is being specified. Most cloud service providers require the
host name of the cloud service provider and an associated password, and some cloud
service providers also require certificates to authenticate users of the cloud storage.
Public clouds use certificates that are signed by well-known certificate authorities. Private
cloud service providers can use either self-signed certificate or a certificate that is signed by a
trusted certificate authority. These credentials are defined on the cloud service provider and
passed to the system through the administrators of the cloud service provider. A cloud
account defines whether the system can successfully communicate and authenticate with the
cloud service provider by using the account credentials.
If the system is authenticated, it can then access cloud storage to either copy data to the
cloud storage or restore data that is copied to cloud storage back to the system. The system
supports one cloud account to a single cloud service provider. Migration between providers is
not supported.
Each cloud service provider requires different configuration options. The system supports the
following cloud service providers:
IBM Bluemix® (also known as SoftLayer Object Storage)
OpenStack Swift
Amazon S3
To view your IBM Spectrum Virtualize cloud provider settings, from the SVC Settings pane,
move the pointer over Settings and click System, then select Transparent Cloud Tiering,
as shown in Figure 5-68.
Using this view, you can enable and disable features of your Transparent Cloud Tiering and
update the system information concerning your cloud service provider. This pane allows you
to set a number of options:
Cloud service provider
Object Storage URL
The Tenant or the container information that is associated to your cloud object storage
User name of the cloud object account
API Key
The container prefix or location of your object
Encryption
Bandwidth
For detailed instructions about how to configure and enable Transparent Cloud Tiering, see
11.4, “Implementing Transparent Cloud Tiering” on page 531.
5.10.4 Support menu
Use the Support pane to configure and manage connections and upload support packages to
the support center.
The menus are available under Settings → Support as shown in Figure 5-69.
More details about how the Support menu helps with troubleshooting of your system or how
to make a backup of your systems are provided in 13.7.3, “Remote Support Assistance” on
page 734.
Login Message
IBM Spectrum Virtualize V7.6 and later enables administrators to configure the welcome
banner (login message). This is a text message that appears either in the GUI login window
or at the CLI login prompt.
The content of the welcome message is helpful when you need to notify users about some
important information about the system, such as security warnings or a location description.
To define and enable the welcome message by using the GUI, edit the text area with the
message content and click Save (Figure 5-71).
The result of the previous action is shown in Figure 5-72. The system shows the welcome
message in the GUI before login.
General settings
The General Settings menu allows the user to refresh the GUI cache, to set the low graphics
mode option, and to enable advanced pools settings.
Complete the following steps to view and configure general GUI preferences:
1. From the SVC Settings window, move the pointer over Settings and click GUI
Preferences (Figure 5-74).
When you choose a name for an object, the following rules apply:
Names must begin with a letter.
Important: Do not start names by using an underscore (_) character even though it is
possible. The use of the underscore as the first character of a name is a reserved
naming convention that is used by the system configuration restore process.
Names must not begin or end with a space.
Object names must be unique within the object type. For example, you can have a volume
called ABC and an MDisk called ABC, but you cannot have two volumes that are called
ABC.
The default object name is valid (object prefix with an integer).
Objects can be renamed to their current names.
To rename the system from the System window, complete the following steps:
1. Click Actions in the upper-left corner of the SVC System pane, as shown in Figure 5-75.
2. The Rename System pane opens (Figure 5-76). Specify a new name for the system and
click Rename.
System name: You can use the letters A - Z and a - z, the numbers 0 - 9, and the
underscore (_) character. The clustered system name can be 1 - 63 characters.
3. Click Yes.
Warning: When you rename your system, the iSCSI qualified name (IQN) changes
automatically because it includes the system name by default. Therefore, this change
requires additional actions on iSCSI-attached hosts.
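The system can also be renamed from the CLI with the chsystem command, as in the following sketch with an example name:
chsystem -name ITSO_SVC1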
3. Enter the new name of the node and click Rename (Figure 5-78).
Warning: Changing the SVC node name causes an automatic IQN update and requires
the reconfiguration of all iSCSI-attached hosts.
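The CLI equivalent is the chnode command. The following sketch renames node ID 1 to an example name:
chnode -name ITSO_SVC1_N1 1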
Renaming sites
The SVC supports configuration of site settings that describe the location of the nodes and
storage systems that are deployed in a stretched system configuration. This site information
configuration is only part of the configuration process for enhanced systems. The site
information makes it possible for the SVC to manage and reduce the amount of data that is
transferred between the two sides of the system, which reduces the costs of maintaining the
system.
Three site objects are automatically defined by the SVC and numbered 1, 2, and 3. The SVC
creates the corresponding default names, site1, site2, and site3, for each of the site
objects. site1 and site2 are the two sites that make up the two halves of the enhanced
system, and site3 is the quorum disk. You can rename the sites to describe your data center
locations.
To rename the sites, complete these steps:
1. On the System pane, select Actions in the upper-left corner.
2. The Actions menu opens. Select Rename Sites, as shown in Figure 5-79.
3. The Rename Sites pane with the site information opens, as shown in Figure 5-80.
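Sites can also be renamed from the CLI. The following sketch assumes the chsite command, which should be verified against your code level; the site names are examples only:
chsite -name DC_London 1
chsite -name DC_Paris 2
chsite -name Quorum_site 3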
2. The wizard opens, informing you about the options to change the topology to either Stretched
Cluster or HyperSwap (Figure 5-82).
3. The system requires a definition of three sites: Primary, Secondary, and Quorum site.
Assign reasonable names to sites for easy identification as shown in our example
(Figure 5-83).
4. Choose the wanted topology. While Stretched Cluster is optimal for Disaster Recovery
solutions with asynchronous replication of primary volumes, HyperSwap is ideal for high
availability solutions with near-real-time replication. In our case, we decided on a
Stretched System (Figure 5-84).
5. Assign each host to one of the sites as its primary site. Right-click each host and modify the
sites one by one (Figure 5-85). Also assign primary sites to offline hosts, because they might
be down only temporarily for maintenance or another reason.
6. Similarly, assign backend storage to sites from where the primary volumes will be
provisioned (that is, where the hosts are primarily located) (Figure 5-86). At least one
storage device must be assigned to the site planned for Quorum volumes.
8. The SVC creates a set of commands based on input from the wizard and eventually
switches the topology to the entered configuration (Figure 5-88).
As a validation step, verify that all hosts have the correctly mapped and active online volumes
and no error appears in the event log.
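For reference, the wizard issues the equivalent of CLI commands similar to the following sketch. The object names are examples only, and the exact sequence that the wizard generates depends on your configuration:
chnode -site 1 node1
chhost -site 1 linux_host1
chcontroller -site 3 controller0
chsystem -topology stretched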
Detailed information about resilient solutions with your IBM SAN Volume Controller
environment is available in IBM Spectrum Virtualize and SAN Volume Controller Enhanced
Stretched Cluster with VMware, SG24-8211, and for HyperSwap in IBM Storwize V7000,
Spectrum Virtualize, HyperSwap, and VMware Implementation, SG24-8317.
Chapter 6. Storage pools
Figure 6-1 provides an overview of how storage pools, MDisks, and volumes are related. This
pane is available by browsing to Monitoring → System and clicking the Overview button on
the upper-right corner of the pane. In the example in Figure 6-1, the system has four LUs from
internal disks arrays, no LUs from external storage, four storage pools, and 93 defined
volumes, mapped to four hosts.
SVC organizes storage into pools to ease storage management and make it more efficient.
All MDisks in a pool are split into extents of the same size and volumes are created out of the
available extents. The extent size is a property of the storage pool and cannot be changed
after the pool is created. It is possible to add MDisks to an existing pool to provide additional
extents.
Storage pools can be further divided into subcontainers that are called child pools. Child
pools inherit the properties of the parent pool (extent size, throttle, reduction feature) and can
also be used to provision volumes.
Storage pools are managed either by using the Pools pane or the MDisks by Pool pane. Both
panes allow you to run the same actions on parent pools. However, actions on child pools can
be performed only through the Pools pane. To access the Pools pane, click Pools → Pools,
as shown in Figure 6-2.
The pane lists all storage pools available in the system. If a storage pool has child pools, you
can toggle the sign to the left of the storage pool icon to either show or hide the child pools.
Figure 6-4 Option to create a storage pool in the MDisks by Pools pane
All alternatives open the dialog box that is shown in Figure 6-5.
Every storage pool that is created using the GUI has a default extent size of 1 GB. The size of
the extent is selected at creation time and cannot be changed later. If you want to specify a
different extent size, browse to Settings → GUI Preferences and select Advanced pool
settings, as shown in Figure 6-6.
If encryption is enabled, you can additionally select whether the storage pool is encrypted, as
shown in Figure 6-8.
Note: The encryption setting of a storage pool is selected at creation time and cannot be
changed later. By default, if encryption is enabled, encryption is selected. For more
information about encryption and encrypted storage pools, see Chapter 12, “Encryption”
on page 633.
Enter the name for the pool and click Create. The new pool is created and is included in the
list of storage pools with zero bytes, as shown in Figure 6-9.
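A storage pool can also be created from the CLI with the mkmdiskgrp command. The following sketch creates an empty pool with a 1024 MB extent size; the pool name is an example only:
mkmdiskgrp -name Pool0 -ext 1024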
Naming rules: When you choose a name for a pool, the following rules apply:
Names must begin with a letter.
The first character cannot be numeric.
The name can be a maximum of 63 characters.
Valid characters are uppercase letters (A - Z), lowercase letters (a - z), digits (0 - 9),
underscore (_), period (.), hyphen (-), and space.
Names must not begin or end with a space.
Object names must be unique within the object type. For example, you can have a
volume named ABC and an MDisk called ABC, but you cannot have two volumes called
ABC.
The default object name is valid (object prefix with an integer).
Objects can be renamed to their current names.
Modify Threshold
The storage pool threshold refers to the percentage of storage capacity that must be in use
for a warning event to be generated. When using thin-provisioned volumes that auto-expand
(automatically use available extents from the pool), monitor the capacity usage and get
warnings before the pool runs out of free extents, so you can add storage. If a
thin-provisioned volume does not have sufficient extents to expand, it goes offline and a 1865
error is generated.
The threshold can be modified by selecting Modify Threshold and entering the new value,
as shown in Figure 6-12. The default threshold is 80%. Warnings can be disabled by setting
the threshold to 0%.
The threshold is visible in the pool properties and is indicated with a red bar as shown on
Figure 6-13.
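The warning threshold can also be changed from the CLI with the chmdiskgrp command, as in the following sketch with an example pool name:
chmdiskgrp -warning 80% Pool0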
Add storage
Selecting Add Storage starts the wizard to assign storage to the pool. For a detailed
description of this wizard, see 6.2.1, “Assigning managed disks to storage pools” on
page 214.
Edit Throttle
When you click this option, a new window opens that allows you to set the pool's throttle.
Throttles can be defined for storage pools to control I/O operations on storage systems.
Storage pool throttles can be used to avoid overwhelming the storage system (either external
or internal storage) and can also be used with virtual volumes. Because virtual volumes use
child pools, a throttle limit on the child pool can control the I/O operations from those virtual
volumes. Parent and child pool throttles are independent of each other, so a child pool can
have higher throttle limits than its parent pool. See 6.1.3, “Child storage pools” on page 208
for information about child pools.
If more than one throttle applies to an I/O operation, the lowest and most stringent throttle is
used. For example, if a throttle of 200 MBps is defined on a pool and a throttle of 100 MBps is
defined on a volume of that pool, then the I/O operations are limited to 100 MBps.
Note: The storage pool throttle objects for a child pool and a parent pool work
independently of each other.
A child pool throttle is independent of its parent pool throttle. However, volumes of that child
pool inherit the throttle from the pool that they are in. In the example in Figure 6-15,
T3_SASNL_child has a throttle of 200 MBps defined, its parent pool T3_SASNL has a throttle of
100 MBps, and volume TEST_ITSO has a throttle of 1000 IOPS. If the workload applied to the
volume is greater than 200 MBps, it is capped by the T3_SASNL_child throttle.
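Pool throttles can also be defined from the CLI. The following sketch assumes the mkthrottle command with the bandwidth limit expressed in MBps; the pool name and value are examples only, and the parameters should be verified against your code level:
mkthrottle -type mdiskgrp -mdiskgrp T3_SASNL_child -bandwidth 200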
Delete
A storage pool can be deleted using the GUI only if no volumes are associated with it.
Selecting Delete deletes the pool immediately without any additional confirmation.
Note: If there are volumes in the pool, Delete cannot be selected. If that is the case, either
delete the volumes or move them to another storage pool before proceeding. To move a
volume, you can either migrate it or use volume mirroring. For information about volume
migration and volume mirroring, see Chapter 7, “Volumes” on page 251.
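The CLI equivalent is the rmmdiskgrp command, as in the following sketch with an example pool name. A -force parameter exists, but it deletes all volumes in the pool, so use it with extreme caution:
rmmdiskgrp Pool0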
Properties
Selecting Properties displays information about the storage pool. Additional information is
available by clicking View more details and by hovering over the elements on the window, as
shown in Figure 6-16.
Unlike a parent pool, a child pool does not contain MDisks. Its capacity is provided exclusively
by the parent pool in the form of extents. The capacity of a child pool is set at creation time,
but can be modified later nondisruptively. The capacity must be a multiple of the parent pool
extent size and must be smaller than the free capacity of the parent pool.
Child pools are useful when the capacity allocated to a specific set of volumes must be
controlled. For example, child pools can be used with VMware vSphere Virtual Volumes
(VVols). Storage administrators can restrict access of VMware administrators to only a part of
the storage pool and prevent volumes creation from affecting the rest of the parent storage
pool.
Child pools can also be useful when strict control over thin-provisioned volume expansion is
needed. You could, for example, create a child pool with no volumes in it to act as an
emergency set of extents. That way, if the parent pool ever runs out of free extents, you can
use the extents from the child pool.
Child pools can also be used when a different encryption key is needed for different sets of
volumes.
Child pools inherit most properties from their parent pools, and these cannot be changed. The
inherited properties include the following:
Extent size
Easy Tier setting
Encryption setting, but only if the parent pool is encrypted
Note: For information about encryption and encrypted child storage pools, see Chapter 12,
“Encryption” on page 633.
Creating a child storage pool
To create a child pool, browse to Pools → Pools, right-click the parent pool that you want to
create a child pool from, and select Create Child Pool, as shown in Figure 6-17.
When the dialog window opens, enter the name and capacity of the child pool and click
Create, as shown in Figure 6-18.
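A child pool can also be created from the CLI by specifying a parent pool, as in the following sketch, which creates a 100 GiB child pool in an example parent pool; verify the parameters against your code level:
mkmdiskgrp -parentmdiskgrp T3_SASNL -size 100 -unit gb -name T3_SASNL_child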
To select an action, right-click the child storage pool, as shown in Figure 6-20. Alternatively,
select the storage pool and click Actions.
Resize
Selecting Resize allows you to increase or decrease the capacity of the child storage pool, as
shown in Figure 6-21. Enter the new pool capacity and click Resize.
Note: You cannot shrink a child pool below its real capacity. Thus, the new size of a child
pool needs to be larger than the capacity used by its volumes.
When the child pool is shrunk, the system resets the warning threshold and issues a warning
if the threshold is reached.
Delete
Deleting a child pool is a task quite similar to deleting a parent pool. As with a parent pool, the
Delete action is disabled if the child pool contains volumes, as shown in Figure 6-22.
After deleting a child pool, the extents that it occupied return to the parent pool as free
capacity.
Volumes migration
To move a volume to another pool, you can use migration or volume mirroring in the same
way you use them for parent pools. For information about volume migration and volume
mirroring, see Chapter 7, “Volumes” on page 251.
In the example in Figure 6-23, volume TEST ITSO has been created in child pool
T3_SASNL_child. Note that child pools appear exactly like parent pools in the Volumes by Pool
pane.
A volume from a child pool can only be migrated to its parent pool or to another child pool of
the same parent pool. As shown in Figure 6-24, the volume TEST ITSO can only be migrated
to its parent pool (T3_SASNL) or to a child pool with the same parent pool (T3_SASNL_child0).
This migration limitation does not apply to volumes that belong to parent pools.
During a volume migration within a parent pool (between a child and its parent or between
children with same parent), there is no data movement but only extent reassignments.
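From the CLI, the same movement is performed with the migratevdisk command (or with addvdiskcopy and rmvdiskcopy when volume mirroring is used). The following sketch migrates the example volume to its parent pool:
migratevdisk -vdisk TEST_ITSO -mdiskgrp T3_SASNL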
Arrays are created from internal storage using RAID technology to provide redundancy and
increased performance. The system supports two types of RAID: Traditional RAID and
distributed RAID. Arrays are assigned to storage pools at creation time and cannot be moved
between storage pools. You cannot have an array that does not belong to any storage pool.
MDisks are managed by using the MDisks by Pools pane. To access the MDisks by Pools
pane, browse to Pools → MDisks by Pools, as shown in Figure 6-25.
The pane lists all the MDisks available in the system under the storage pool to which they
belong. Both arrays and external MDisks are listed.
Additionally, external MDisks can be managed through the External Storage pane. To access
the External Storage pane, browse to Pools → External Storage.
For more information about IBM Easy Tier feature, see Chapter 10, “Advanced features for
storage efficiency” on page 407.
Note: When Easy Tier is turned on for a pool, movement of extents between tiers of
storage (inter-tier) or between MDisks within a single tier (intra-tier) is based on the activity
that is monitored. Therefore, when adding an MDisk to a pool, extent migration will not be
performed immediately. No migration of extents will occur until there is sufficient activity to
trigger it.
If balancing of extents within the pool is needed immediately after the MDisks are added,
then a manual extents placement is needed. Because this manual process can be quite
complex, IBM provides a script available here:
https://www.ibm.com/marketing/iwm/iwm/web/preLogin.do?source=swg-SVCTools
This script provides a solution to the problem of rebalancing the extents in a pool after a
new MDisk has been added. The script uses available free space to shuffle extents until
the number of extents from each volume on each MDisk is directly proportional to the size
of the MDisk.
To assign MDisks to a storage pool, navigate to Pools → MDisks by Pools and choose one
of the following options:
Option 1: Select Add Storage on the right side of the storage pool, as shown in
Figure 6-26. The Add Storage button is shown only when the pool has no capacity
assigned or when the pool capacity usage is over the warning threshold.
Option 2: Right-click the pool and select Add Storage, as shown in Figure 6-27.
Both options 1 and 2 start the configuration wizard shown in Figure 6-29. If no external
storage is attached, the External option is not shown. If Internal is chosen, the system
guides you through MDisk creation. If External is selected, the MDisks already exist and the
system guides you through the selection of external storage. Option 3 allows you to
select the pool that you want to add new MDisks to.
The Quick internal configuration option of assigning storage to a pool guides the user through
the steps of creating one or more MDisks and then assigns them to the selected pool. Because
it is possible to assign multiple MDisks at the same time during this process, or because the
existing pool might already have a configured set of MDisks, compatibility checks are done by
the system when it creates the new MDisks.
For example, if you have a set of 10K RPM drives and another set of 15K RPM drives
available, you cannot place an MDisk made of 10K RPM drives and an MDisk made of 15K
RPM drives in the same pool. You would need to create two separate pools.
Selecting Quick internal automatically defaults parameters such as stripe width, number of
spares (for traditional RAID), number of rebuild areas (for distributed RAID), and number of
drives of each class. The number of drives is the only value that can be adjusted when
creating the array. Depending on the number of drives selected for the new array, the RAID
level automatically adjusts.
For example, if you select two drives only, the system will automatically create a RAID-10
array, with no spare drive. For more control of the array creation steps, you can select the
Internal Custom option. For more information, see “Advanced internal configuration” on
page 218.
By default, if there are enough candidate drives, the system recommends traditional arrays for
most new configurations of MDisks. However, use Distributed RAID when possible, with the
Advanced Internal Custom option. For information about traditional and Distributed RAID,
see 6.2.2, “Traditional and distributed RAID” on page 220. Figure 6-30 shows an example of a
Quick internal configuration.
Figure 6-30 Quick internal configuration: Pool with a single class of drives
If the system has multiple drive classes (for example, Flash and Enterprise drives), the
default option is to create multiple arrays of different tiers and assign them to the pool to take
advantage of the Easy Tier functionality. However, this configuration can be adjusted by
setting the number of drives of different classes to zero. For information about Easy Tier see
Chapter 10, “Advanced features for storage efficiency” on page 407.
When you are satisfied with the configuration presented, click Assign. The RAID arrays, or
MDisks, are then created and start initializing in the background. The progress of the
initialization process can be monitored by selecting the correct task under Running Tasks in
the upper-right corner, as shown in Figure 6-31. The array is available for I/O during this
process.
By clicking View in the Running tasks list, you can see the initialization progress and the time
remaining, as shown in Figure 6-32. Note that the array initialization time depends on the type
of drives that the array is made of. Initializing an array of Flash drives is much quicker than
initializing an array of NL-SAS drives, for example.
Figure 6-33 shows an example with nine drives ready to be configured as DRAID 6, with the
equivalent of one drive capacity of spare (distributed over the nine disks).
Figure 6-33 Adding internal storage to a pool using the Advanced option
To return to the default settings, click the Refresh button next to the pool capacity. To create
and assign the arrays, click Assign.
Attention: If you need to preserve existing data on an unmanaged MDisk, do not assign it
to a storage pool because this action deletes the data on the MDisk. Use Import instead.
See “Import” on page 230 for information about this action.
Note: Use Distributed RAID whenever possible. The distributed configuration dramatically
reduces rebuild times and decreases the exposure volumes have to the extra load of
recovering redundancy.
Traditional RAID
In a traditional RAID approach, whether it is RAID10, RAID5, or RAID6, data is spread among
drives in an array. However, the spare space is constituted by spare drives, which are global
and sit outside of the array. When one of the drives within the array fails, all data is read from
the mirrored copy (for RAID10), or is calculated from remaining data stripes and parity (for
RAID5 or RAID6), and written to one single spare drive.
Figure 6-35 shows a traditional RAID6 array with two global spare drives, and data and parity
striped among five drives.
If a drive fails, data is calculated from the remaining strips in a stripe and written to the spare
strip in the same stripe on a spare drive, as shown in Figure 6-36.
Distributed RAID also has the ability to distribute data and parity strips among more drives
than traditional RAID. This feature means more drives can be used to create one array,
improving performance of a single managed disk.
Figure 6-37 shows a distributed RAID6 array with the stripe width of five distributed among
10 physical drives. The reserved spare space is marked as yellow and is equivalent to two
spare drives. Both distributed RAID5 and distributed RAID6 divide the physical drives into
rows and packs. The row has the size of the array width and has only one stripe from each
drive in an array. A pack is a group of several contiguous rows, and its size depends on the
number of strips in a stripe.
In case of a drive failure, all data is calculated using the remaining data stripes and parities
and written to a spare space within each row, as shown in Figure 6-38.
The following are the minimum number of drives needed to build a Distributed Array:
Six drives for a Distributed RAID6 array
Four drives for a Distributed RAID5 array
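Distributed arrays can also be created from the CLI with the mkdistributedarray command. The following sketch, with example drive class, drive count, and pool name, creates a distributed RAID 6 array from nine drives with one rebuild area; verify the defaults and parameters against your code level:
mkdistributedarray -level raid6 -driveclass 0 -drivecount 9 -rebuildareas 1 Pool0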
To choose an action, select the array (MDisk) and click Actions, as shown in Figure 6-39.
Alternatively, right-click the array.
Swap drive
Selecting Swap Drive allows the user to replace a drive in the array with another drive. The
other drive must have a use of Candidate or Spare. This action can be used to replace a
drive that is expected to fail soon, for example, as indicated by an error message in the event
log.
Figure 6-40 shows the dialog box that opens. Select the member drive to be replaced and the
replacement drive, and click Swap.
The exchange of the drives starts running in the background. The volumes on the affected
MDisk remain accessible during the process.
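The swap can also be performed from the CLI. The following sketch assumes the charraymember command and example IDs, which should be verified against your code level; it replaces array member 3 of mdisk1 with candidate drive 10:
charraymember -member 3 -newdrive 10 mdisk1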
Figure 6-41 If there are insufficient spare drives available, an error 1690 is logged
Delete
Selecting Delete removes the array from the storage pool and deletes it.
Remember: An array or an MDisk does not exist outside of a storage pool. Therefore, an
array cannot be removed from the pool without being deleted.
If there are no volumes using extents from this array, the command runs immediately without
additional confirmation. If there are volumes using extents from this array, you are prompted
to confirm the action, as shown in Figure 6-42.
Confirming the action starts the migration of the volumes to extents from other MDisks that
remain in the pool. After the action completes, the array is removed from the storage pool and
deleted. When an MDisk is deleted from a storage pool, extents in use are migrated to
MDisks in the same tier as the MDisk being removed, if possible. If insufficient extents exist in
that tier, extents from the other tier are used.
Note: Ensure that you have enough available capacity remaining in the storage pool to
allocate the data being migrated from the removed array, or else the command will fail.
Dependent Volumes
Volumes are entities made of extents from a storage pool. The extents of the storage pool
come from various MDisks. A volume can then be spread over multiple MDisks, and MDisks
can serve multiple volumes. Clicking Dependent Volumes in the Actions menu of an MDisk lists
the volumes that depend on that MDisk, as shown in Figure 6-43.
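The CLI equivalent is the lsdependentvdisks command, as in the following sketch with an example MDisk name:
lsdependentvdisks -mdisk mdisk1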
To choose an action, right-click the external MDisk, as shown in Figure 6-45. Alternatively,
select the external MDisk and click Actions.
Assign
This action is available only for unmanaged MDisks. Selecting Assign opens the dialog box
that is shown in Figure 6-46. This action is equivalent to the wizard described in Quick
external configuration, but acts only on the selected MDisk or MDisks.
Attention: If you need to preserve existing data on an unmanaged MDisk, do not assign it
to a storage pool because this action deletes the data on the MDisk. Use Import instead.
For information about storage tiers and their importance, see Chapter 10, “Advanced features
for storage efficiency” on page 407.
Modify encryption
This option is available only when encryption is enabled. Selecting Modify Encryption allows
the user to modify the encryption setting for the MDisk, as shown in Figure 6-48.
For example, if the external MDisk is already encrypted by the external storage system,
change the encryption state of the MDisk to Externally encrypted. This setting stops the
system from encrypting the MDisk again if the MDisk is part of an encrypted storage pool.
For information about encryption, encrypted storage pools, and self-encrypting MDisks, see
Chapter 12, “Encryption” on page 633.
Import
This action is available only for unmanaged MDisks. Importing an unmanaged MDisk allows
the user to preserve the data on the MDisk, either by migrating the data to a new volume or by
keeping the data on the external system.
Note: This is the preferred method to migrate data from legacy storage to the SVC. When
an MDisk is imported, the data on the original disks is not modified. The system
acts as a pass-through, and the extents of the imported MDisk do not contribute to storage
pools.
Selecting Import allows you to choose one of the following migration methods:
Import to temporary pool as image-mode volume does not migrate data from the
source MDisk. It creates an image-mode volume that has a direct block-for-block
translation of the MDisk. The existing data is preserved on the external storage system,
but it is also accessible from the SVC system.
If this method is selected, the image-mode volume is created in a temporary migration
pool and presented through the SVC. Choose the extent size of the temporary pool and
click Import, as shown in Figure 6-49.
The MDisk is imported and listed as an image mode MDisk in the temporary migration
pool, as shown in Figure 6-50.
The image-mode volume can then be mapped to the original host. The data is still
physically present on the physical disk of the original external storage controller system
and no automatic migration process is currently running. The original host sees no
difference and the applications can continue to run. The image-mode Volume can now be
handled by Spectrum Virtualize. If needed, the image-mode volume can be migrated
manually to another storage pool by using volume migration or volume mirroring later.
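On the CLI, the same result is achieved by creating an image-mode volume on the unmanaged MDisk, as in the following sketch, where the pool, MDisk, and volume names are examples only:
mkvdisk -mdiskgrp MigrationPool_8192 -iogrp 0 -vtype image -mdisk mdisk5 -name legacy_controller_vol0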
The data migration begins automatically after the MDisk is imported successfully as an
image-mode volume. You can check the migration progress by clicking the task under
Running Tasks, as shown in Figure 6-53.
After the migration completes, the volume is available in the chosen destination pool, as
shown in Figure 6-54. This volume is no longer an image-mode volume. Instead, it is a
normal striped volume.
Figure 6-54 The migrated MDisk is now a Volume in the selected pool
At this point, all data has been migrated off the source MDisk and the MDisk is no longer in
image mode, as shown in Figure 6-55. The MDisk can be removed from the temporary
pool. It returns in the list of external MDisks and can be used as a regular MDisk to host
volumes, or the legacy storage bay can be decommissioned.
Include
The system can exclude an MDisk with multiple I/O failures or persistent connection errors
from its storage pool to ensure that these errors do not interfere with data access. If an MDisk
has been automatically excluded, run the fix procedures to resolve any connection and I/O
failure errors. Drives used by the excluded MDisk with multiple errors might require replacing
or reseating.
After the problems have been fixed, select Include to add the excluded MDisk back into the
storage pool.
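The CLI equivalent is the includemdisk command, as in the following sketch with an example MDisk name:
includemdisk mdisk5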
Remove
In some cases, you might want to remove external MDisks from storage pools to reorganize
your storage allocation. Selecting Remove removes the MDisk from the storage pool. After
the MDisk is removed, it goes back to unmanaged. If there are no volumes in the storage pool
to which this MDisk is allocated, the command runs immediately without additional
confirmation. If there are volumes in the pool, you are prompted to confirm the action, as
shown in Figure 6-56.
Confirming the action starts the migration of the volumes to extents from other MDisks that
remain in the pool. When the action completes, the MDisk is removed from the storage pool
and returns to unmanaged. When an MDisk is removed from a storage pool, extents in use are
migrated to MDisks in the same tier as the MDisk being removed, if possible. If insufficient
extents exist in that tier, extents from the other tier are used.
Ensure that you have enough available capacity remaining in the storage pool to allocate the
data being migrated from the removed MDisk or else the command fails.
Important: The MDisk being removed must remain accessible to the system while all data
is copied to other MDisks in the same storage pool. If the MDisk is unmapped before the
migration finishes, all volumes in the storage pool go offline and remain in this state until
the removed MDisk is connected again.
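For reference, the CLI equivalent of this action is the rmmdisk command. A minimal sketch, assuming that mdisk5 is removed from the pool Pool0 (the -force parameter allows the removal when volumes still use extents on the MDisk, which triggers the migration described above):
rmmdisk -mdisk mdisk5 -force Pool0
The MDisk returns to the unmanaged state when the migration completes.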
6.3 Working with internal drives
The SVC system provides an Internal Storage pane for managing all internal drives. To
access the Internal Storage pane, browse to Pools → Internal Storage, as shown in
Figure 6-57.
The pane gives an overview of the internal drives in the SVC system. Selecting All Internal in the drive class filter displays all drives that are managed in the system, including all I/O groups and expansion enclosures. Selecting a drive class on the left side of the pane filters the list and displays only the drives of the selected class.
You can find information regarding the capacity allocation of each drive class in the upper
right corner, as shown in Figure 6-58:
Total Capacity shows the overall capacity of the selected drive class.
MDisk Capacity shows the storage capacity of the selected drive class that is assigned to
MDisks.
Spare Capacity shows the storage capacity of the selected drive class that is used for
spare drives.
If All Internal is selected under the drive class filter, the values shown refer to the entire
internal storage.
The percentage bar indicates how much of the total capacity is allocated to MDisks and spare
drives, with MDisk capacity being represented by dark blue and spare capacity by light blue.
The actions available depend on the status of the drive or drives selected. Some actions can
only be run for individual drives.
Fix error
This action is only available if the drive selected is in an error condition. Selecting Fix Error
starts the Directed Maintenance Procedure (DMP) for the defective drive. For more
information about DMPs, see Chapter 13, “RAS, monitoring, and troubleshooting” on
page 689.
Take offline
Selecting Take Offline allows the user to take a drive offline. Select this action only if there is
a problem with the drive and a spare drive is available. When selected you are prompted to
confirm the action, as shown in Figure 6-60.
If a spare drive is available and the drive is taken offline, the MDisk of which the failed drive is
a member remains Online and the spare is automatically reassigned. If no spare drive is
available and the drive is taken offline, the array of which the failed drive is a member becomes Degraded. Consequently, the storage pool to which the MDisk belongs becomes Degraded as well,
as shown in Figure 6-61.
Figure 6-61 Degraded Pool and MDisk in case there is no more spare in the array
Figure 6-62 Taking a drive offline fails in case there is no spare in the array
Losing another drive in the MDisk results in data loss. Figure 6-63 shows the error in this
case.
Figure 6-63 Taking a drive offline fails if there is a risk of losing data
A drive that is taken offline is considered Failed, as shown in Figure 6-64.
Mark as
Selecting Mark as allows you to change the usage assigned to the drive. The following use
options are available as shown in Figure 6-65:
Unused: The drive is not in use and cannot be used as a spare.
Candidate: The drive is available to be used in an MDisk.
Spare: The drive can be used as a hot spare, if required.
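These use options correspond to the -use parameter of the chdrive command in the CLI. A minimal sketch, assuming drive ID 3:
chdrive -use spare 3
Substitute candidate or unused for spare as required, and verify the result with the lsdrive command.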
Identify
Selecting Identify turns on the LED light so you can easily identify a drive that must be
replaced or that you want to troubleshoot. Selecting this action opens a dialog box like the
one shown in Figure 6-67.
Upgrade
Selecting Upgrade allows the user to update the drive firmware as shown in Figure 6-68. You
can choose to update individual drives or all the drives that have available updates.
For information about updating drive firmware, see Chapter 13, “RAS, monitoring, and
troubleshooting” on page 689.
Figure 6-69 shows the list of volumes dependent on a set of three drives that belong to the
same MDisk. This configuration means that all listed volumes will go offline if all selected
drives go offline. If only one drive goes offline, then there is no volume dependency.
Note: A lack of dependent volumes does not imply that there are no volumes using the drive. Volume dependency shows the list of volumes that would become unavailable if the selected drive or the group of selected drives becomes unavailable.
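The same check can be performed from the CLI with the lsdependentvdisks command. A minimal sketch, assuming drive IDs 0, 1, and 2 specified as a colon-separated list:
lsdependentvdisks -drive 0:1:2
An empty output means that no volumes become unavailable if only these drives go offline.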
Checking Show Details in the left corner of the window shows more details, including vendor ID, product ID, and part number. You can also display drive slot details by selecting Drive Slot.
External storage controllers with both types of attachment can be managed through the
External Storage pane. To access the External Storage pane, browse to Pools → External
Storage, as shown in Figure 6-71.
The pane lists the external controllers that are connected to the SVC system and all the
external MDisks detected by the system. The MDisks are organized by the external storage
system that presents them. You can toggle the sign to the left of the controller icon to either
show or hide the MDisks associated with the controller.
If you have configured logical unit names on your external storage systems, it is not possible for the system to determine these names because they are local to the external storage system. However, you can use the external storage system WWNNs and the LU number to identify each device.
If the external controller is not detected, ensure that the SVC is cabled and zoned into the
same storage area network (SAN) as the external storage system. If you are using Fibre
Channel, connect the Fibre Channel cables to the Fibre Channel ports of the canisters in your
system, and then to the Fibre Channel network. If you are using Fibre Channel over Ethernet,
connect Ethernet cables to the 10 Gbps Ethernet ports.
Attention: If the external controller is a Storwize system, the SVC must be configured at
the replication layer and the external controller must be configured at the storage layer.
The default layer for a Storwize system is storage. Make sure that the layers are correct
before zoning the two systems together. Changing the system layer is not available in the
GUI. You need to use the command-line interface (CLI).
Ensure that the layer of both systems is correct by entering the following command:
svcinfo lssystem
If needed, change the layer of the SVC to replication by entering the following command:
chsystem -layer replication
If needed, change the layer of the Storwize controller to storage by entering the following
command:
chsystem -layer storage
For more information about layers and how to change them, see Chapter 11, “Advanced
Copy Services” on page 461.
Figure 6-72 Fully redundant iSCSI connection between a Storwize system and SVC
For an example of how to cable the IBM Spectrum Accelerate to the SVC, see IBM
Knowledge Center:
https://ibm.biz/BdjS9u
For an example of how to cable the Dell EqualLogic to the SVC, see IBM Knowledge
Center at:
https://ibm.biz/BdjS9C
The ports used for iSCSI attachment must be enabled for external storage connections. By default, Ethernet ports are disabled for external storage connections. You can verify the
setting of your Ethernet ports by navigating to Settings → Network and selecting
Ethernet Ports, as shown in Figure 6-73.
To enable the port for external storage connections, select the port, click Actions and select
Modify Storage Ports, as shown in Figure 6-74.
Set the port as Enabled for either IPv4 or IPv6, depending on the protocol version configured
for the connection, as shown in Figure 6-75.
Attention: Unlike Fibre Channel connections, iSCSI connections require the SVC to be
configured at the replication layer for every type of external controller. However, as with
Fibre Channel, if the external controller is a Storwize system, the controller must be
configured at the storage layer. The default layer for a Storwize system is storage.
If the SVC is not configured at the replication layer when Add External iSCSI Storage is
selected, you are prompted to do so, as shown in Figure 6-77 on page 247.
If the Storwize controller is not configured at the storage layer, this must be changed by
using the CLI.
Ensure that the layer of the Storwize controller is correct by entering the following
command:
svcinfo lssystem
If needed, change the layer of the Storwize controller to storage by entering the following
command:
chsystem -layer storage
For more information about layers and how to change them see Chapter 11, “Advanced
Copy Services” on page 461.
Figure 6-77 Converting the system layer to replication to add iSCSI external storage
Select Convert the system to the replication layer and click Next.
Select the type of external storage, as shown in Figure 6-78. For this example, the IBM
Storwize type is chosen. Click Next.
The fields available vary depending on the configuration of your system and external controller type. However, the meaning of each field remains the same. The following fields can
also be available:
Site: Enter the site associated with the external storage system. This field is shown only
for configurations by using HyperSwap.
User name: Enter the user name associated with this connection. If the target storage
system uses CHAP to authenticate connections, you must enter a user name. If you
specify a user name, you must specify a CHAP secret. This field is not required if you do
not use CHAP. This field is shown only for IBM Spectrum Accelerate and Dell EqualLogic
controllers.
Click Finish. The system attempts to discover the target ports and establish iSCSI sessions
between source and target. If the attempt is successful, the controller is added. Otherwise,
the action fails.
To select any action, right-click the controller, as shown in Figure 6-80. Alternatively, select
the controller and click Actions.
Discover storage
When you create or remove LUs on an external storage system, the change is not always
automatically detected. If that is the case select Discover Storage for the system to rescan
the Fibre Channel or iSCSI network. The rescan process discovers any new MDisks that were
added to the system and rebalances MDisk access across the available ports. It also detects
any loss of availability of the controller ports.
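The CLI equivalent of Discover Storage is the detectmdisk command, which rescans the Fibre Channel or iSCSI network for new MDisks:
detectmdisk
The newly discovered MDisks can then be listed with the lsmdisk command.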
Naming rules: When you choose a name for a controller, the following rules apply:
Names must begin with a letter.
The first character cannot be numeric.
The name can be a maximum of 63 characters.
Valid characters are uppercase letters (A - Z), lowercase letters (a - z), digits (0 - 9),
underscore (_), period (.), hyphen (-), and space.
Names must not begin or end with a space.
Object names must be unique within the object type. For example, you can have a
volume named ABC and an MDisk called ABC, but you cannot have two volumes called
ABC.
The default object name is valid (object prefix with an integer).
Objects can be renamed to their current names.
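For reference, a controller can also be renamed from the CLI with the chcontroller command. A minimal sketch, assuming the controller controller0 and the hypothetical name ITSO_V7000:
chcontroller -name ITSO_V7000 controller0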
Modify site
This action is available only for systems that use HyperSwap. Selecting Modify Site allows
the user to modify the site with which the external controller is associated, as shown in
Figure 6-82.
Chapter 7. Volumes
This chapter describes how to create and provision volumes for IBM Spectrum Virtualize
systems. In this case, a volume is a logical disk provisioned out of a storage pool and is
recognized by a host with a unique identifier (UID) field and a parameter list.
The first part of this chapter provides a brief overview of IBM Spectrum Virtualize volumes,
the classes of volumes available, and the topologies that they are associated with. It also
provides an overview of the advanced customization available.
The second part describes how to create volumes using the GUI and shows you how to map
these volumes to defined hosts.
The third part provides an introduction to the new volume manipulation commands, which are
designed to facilitate the creation and administration of volumes used for the IBM HyperSwap
and Enhanced Stretched Cluster topologies.
Note: A managed disk (MDisk) is a logical unit of physical storage. MDisks are either
Redundant Arrays of Independent Disks (RAIDs) from internal storage, or external physical
disks that are presented as a single logical disk on the SAN. Each MDisk is divided into
several extents, which are numbered, from 0, sequentially from the start to the end of the
MDisk. The extent size is a property of the storage pools that the MDisks are added to.
Volumes have two major modes: Managed mode and image mode. Managed mode volumes
have two policies: The sequential policy and the striped policy. Policies define how the extents
of a volume are allocated from a storage pool.
The type attribute of a volume defines the allocation of extents that make up the volume copy:
A striped volume contains a volume copy that has one extent allocated in turn from each
MDisk that is in the storage pool. This is the default option. However, you can also supply
a list of MDisks to use as the stripe set as shown in Figure 7-1.
Attention: By default, striped volume copies are striped across all MDisks in the
storage pool. If some of the MDisks are smaller than others, the extents on the smaller
MDisks are used up before the larger MDisks run out of extents. Manually specifying
the stripe set in this case might result in the volume copy not being created.
If you are unsure if sufficient free space is available to create a striped volume copy,
select one of the following options:
Check the free space on each MDisk in the storage pool by using the lsfreeextents command (see the sketch after this list).
Let the system automatically create the volume copy by not supplying a specific stripe set.
A sequential volume contains a volume copy that has extents allocated sequentially on
one MDisk.
Image-mode volumes are a special type of volume that has a direct relationship with one
MDisk.
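As noted in the Attention box, the free extents of each MDisk in the intended stripe set can be checked before the volume copy is created. A minimal sketch, assuming MDisk ID 2:
lsfreeextents 2
The command returns the number of free extents on the specified MDisk; repeat it for each MDisk in the stripe set.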
An image mode MDisk is mapped to one, and only one, image mode volume.
The volume capacity that is specified must be equal to the size of the image mode MDisk.
When you create an image mode volume, the specified MDisk must be in unmanaged mode
and must not be a member of a storage pool. The MDisk is made a member of the specified
storage pool (Storage Pool_IMG_xxx) as a result of creating the image mode volume.
An image mode MDisk is associated with exactly one volume. If the (image mode) MDisk is
not a multiple of the MDisk Group’s extent size, the last extent is partial (not filled). An image
mode volume is a pass-through one-to-one map of its MDisk. It cannot be a quorum disk and
it does not have any IBM Spectrum Virtualize system metadata extents assigned to it.
Managed or image mode MDisks are always members of a storage pool.
It is a preferred practice to put image mode MDisks in a dedicated storage pool and use a
special name for it (for example, Storage Pool_IMG_xxx). The extent size that is chosen for
this specific storage pool must be the same as the extent size into which you plan to migrate
the data. All of the IBM Spectrum Virtualize copy services functions can be applied to image
mode disks. See Figure 7-2.
Figure 7-3 shows this mapping. It also shows a volume that consists of several extents that
are shown as V0 - V7. Each of these extents is mapped to an extent on one of the MDisks:
A, B, or C. The mapping table stores the details of this indirection.
Several of the MDisk extents are unused. No volume extent maps to them. These unused
extents are available for use in creating volumes, migration, expansion, and so on.
The allocation of a specific number of extents from a specific set of MDisks is performed by
the following algorithm:
If the set of MDisks from which to allocate extents contains more than one MDisk, extents
are allocated from MDisks in a round-robin fashion.
If an MDisk has no free extents when its turn arrives, its turn is missed and the round-robin
moves to the next MDisk in the set that has a free extent.
When a volume is created, the first MDisk from which to allocate an extent is chosen in a
pseudo-random way rather than by choosing the next disk in a round-robin fashion. The
pseudo-random algorithm avoids the situation where the striping effect places the first extent
for many volumes on the same MDisk. This effect is inherent in a round-robin algorithm.
Placing the first extent of several volumes on the same MDisk can lead to poor performance
for workloads that place a large I/O load on the first extent of each volume, or that create
multiple sequential streams.
Note: Having cache-disabled volumes makes it possible to use the native copy services
in the underlying RAID array controller for MDisks (LUNs) that are used as IBM
Spectrum Virtualize image mode volumes. Consult with IBM Support before turning off
the cache for volumes in your production environment to avoid any performance
degradation.
The two copies of the volume often are allocated from separate storage pools or by using
image-mode copies. The volume can participate in FlashCopy and remote copy relationships.
It is serviced by an I/O Group, and has a preferred node.
Each copy is not a separate object and cannot be created or manipulated except in the
context of the volume. Copies are identified through the configuration interface with a copy ID
of their parent volume. This copy ID can be 0 or 1.
This feature provides a point-in-time copy function that is achieved by “splitting” a copy from
the volume. However, the mirrored volume feature does not address other forms of mirroring
that are based on remote copy, which is sometimes called IBM HyperSwap, that mirrors
volumes across I/O Groups or clustered systems. It is also not intended to manage mirroring
or remote copy functions in back-end controllers.
Figure 7-4 provides an overview of volume mirroring.
A second copy can be added to a volume with a single copy or removed from a volume with
two copies. Checks prevent the accidental removal of the only remaining copy of a volume. A
newly created, unformatted volume with two copies initially has the two copies in an
out-of-synchronization state. The primary copy is defined as “fresh” and the secondary copy
is defined as “stale”.
The synchronization process updates the secondary copy until it is fully synchronized. This
update is done at the default synchronization rate or at a rate that is defined when the volume
is created or modified. The synchronization status for mirrored volumes is recorded on the
quorum disk.
If a two-copy mirrored volume is created with the format parameter, both copies are formatted
in parallel. The volume comes online when both operations are complete with the copies in
sync.
If mirrored volumes are expanded or shrunk, all of their copies are also expanded or shrunk.
If it is known that MDisk space (which is used for creating copies) is already formatted or if the
user does not require read stability, a no synchronization option can be selected that
declares the copies as synchronized even when they are not.
To minimize the time that is required to resynchronize a copy that is out of sync, only the
256 kibibyte (KiB) grains that were written to since the synchronization was lost are copied.
This approach is known as an incremental synchronization. Only the changed grains must be
copied to restore synchronization.
Important: An unmirrored volume can be migrated from one location to another by adding
a second copy to the wanted destination, waiting for the two copies to synchronize, and
then removing the original copy 0. This operation can be stopped at any time. The two
copies can be in separate storage pools with separate extent sizes.
Placing the primary copy on a high-performance controller maximizes the read performance
of the volume.
Figure 7-5 Data flow for write I/O processing in a mirrored volume
As shown in Figure 7-5, all the writes are sent by the host to the preferred node for each
volume (1). Then, the data is mirrored to the cache of the partner node in the I/O Group (2),
and acknowledgment of the write operation is sent to the host (3). The preferred node then
destages the written data to the two volume copies (4).
With version 7.3, the cache architecture changed from an upper-cache design to a two-layer
cache design. With this change, the data is only written once, and is then directly destaged
from the controller to the locally attached disk system.
Figure 7-6 shows the data flow in a stretched environment.
Figure 7-6 Write data with location: preferred node at Site1 and non-preferred node at Site2 form an I/O group node pair, with an upper cache (UCA) at each site
A volume with copies can be checked to see whether all of the copies are identical or
consistent. If a medium error is encountered while it is reading from one copy, it is repaired by
using data from the other copy. This consistency check is performed asynchronously with
host I/O.
Important: Mirrored volumes can be taken offline if no quorum disk is available. This
behavior occurs because the synchronization status for mirrored volumes is recorded on
the quorum disk.
Mirrored volumes use bitmap space at a rate of 1 bit per 256 KiB grain, which provides 1 MiB
of bitmap space supporting 2 TiB of mirrored volumes. The default allocation of bitmap space
is 20 MiB, which supports 40 TiB of mirrored volumes. If all 512 MiB of variable bitmap space
is allocated to mirrored volumes, 1 PiB of mirrored volumes can be supported.
The sum of all bitmap memory allocation for all functions except FlashCopy must not exceed
552 MiB.
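If the default allocation is not sufficient, the bitmap memory for volume mirroring can be increased per I/O group with the chiogrp command. A hedged sketch, assuming io_grp0 and a new allocation of 40 MiB (enough for 80 TiB of mirrored volumes at the default grain size):
chiogrp -feature mirror -size 40 io_grp0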
In a fully allocated volume, these two values are the same. Therefore, the real capacity
determines the quantity of MDisk extents that is initially allocated to the volume. The virtual
capacity is the capacity of the volume that is reported to all other IBM Spectrum Virtualize
components (for example, FlashCopy, cache, and remote copy), and to the host servers.
The real capacity is used to store the user data and the metadata for the thin-provisioned
volume. The real capacity can be specified as an absolute value, or as a percentage of the
virtual capacity.
Thin-provisioned volumes can be used as volumes that are assigned to the host, by
FlashCopy to implement thin-provisioned FlashCopy targets, and with the mirrored volumes
feature.
When a thin-provisioned volume is initially created, a small amount of the real capacity is
used for initial metadata. I/Os are written to grains of the thin volume that were not previously
written, which causes grains of the real capacity to be used to store metadata and the actual
user data. I/Os are written to grains that were previously written, which updates the grain
where data was previously written.
The grain size is defined when the volume is created. The grain size can be 32 KiB, 64 KiB,
128 KiB, or 256 KiB. The default grain size is 256 KiB, which is the preferred option. If you
select 32 KiB for the grain size, the volume size cannot exceed 260 TiB. The grain size cannot
be changed after the thin-provisioned volume is created. Generally, smaller grain sizes save
space, but they require more metadata access, which can adversely affect performance.
When you are not using a thin-provisioned volume as a FlashCopy source or target volume, use 256 KiB to maximize performance. When you are using a thin-provisioned volume as a FlashCopy source or target volume, specify the same grain size for the volume and for the FlashCopy function.
Thin-provisioned volumes store user data and metadata. Each grain of data requires
metadata to be stored. Therefore, the I/O rates that are obtained from thin-provisioned
volumes are less than the I/O rates that are obtained from fully allocated volumes.
The metadata storage used is never greater than 0.1% of the user data. The resource usage
is independent of the virtual capacity of the volume. If you are using the thin-provisioned
volume directly with a host system, use a small grain size.
The real capacity of a thin volume can be changed if the volume is not in image mode.
Increasing the real capacity enables a larger amount of data and metadata to be stored on
the volume. Thin-provisioned volumes use the real capacity that is provided in ascending
order as new data is written to the volume. If the user initially assigns too much real capacity
to the volume, the real capacity can be reduced to free storage for other uses.
The contingency capacity is initially set to the real capacity that is assigned when the volume
is created. If the user modifies the real capacity, the contingency capacity is reset to be the
difference between the used capacity and real capacity.
A volume that is created without the autoexpand feature, and therefore has a zero
contingency capacity, goes offline when the real capacity is used and it must expand.
Autoexpand does not cause the real capacity to grow much beyond the virtual capacity. The
real capacity can be manually expanded to more than the maximum that is required by the
current virtual capacity, and the contingency capacity is recalculated.
To support the auto expansion of thin-provisioned volumes, the storage pools from which they
are allocated have a configurable capacity warning. When the used capacity of the pool
exceeds the warning capacity, a warning event is logged. For example, if a warning of 80% is specified, the event is logged when only 20% of the pool capacity remains free.
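The warning threshold can also be set from the CLI with the chmdiskgrp command. A minimal sketch, assuming the pool Pool0 and an 80% warning threshold:
chmdiskgrp -warning 80% Pool0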
With HyperSwap topology, which is a three-site HA configuration, you can create a basic
volume or a HyperSwap volume.
HyperSwap volumes create copies on separate sites for systems that are configured with
HyperSwap topology. Data that is written to a HyperSwap volume is automatically sent to
both copies so that either site can provide access to the volume if the other site becomes
unavailable.
With Stretched topology, which is a three-site disaster resilient configuration, you can create a basic volume or a Stretched volume.
Virtual Volumes: The IBM Spectrum Virtualize v7.6 release also introduces Virtual
Volumes. These volumes are available in a system configuration that supports VMware
vSphere VVols. These allow VMware vCenter to manage system objects, such as
volumes and pools. The IBM Spectrum Virtualize system administrators can create
volume objects of this class, and assign ownership to VMware administrators to simplify
management.
For more information about configuring VVol with IBM Spectrum Virtualize, see Configuring
VMware Virtual Volumes for Systems Powered by IBM Spectrum Virtualize, SG24-8328.
Note: With V7.4 and later, it is possible to prevent accidental deletion of volumes if they
have recently performed any I/O operations. This feature is called Volume protection, and it prevents active volumes or host mappings from being deleted inadvertently. This process
is done by using a global system setting. For more information, see the “Enabling volume
protection” topic in IBM Knowledge Center:
https://ibm.biz/BdsKgn
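A hedged sketch of enabling volume protection from the CLI, assuming a protection window of 60 minutes (verify the parameter names in IBM Knowledge Center for your code level):
chsystem -vdiskprotectionenabled yes -vdiskprotectiontime 60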
To start the process of creating a volume, navigate to the Volumes menu and click the
Volumes option of the IBM Spectrum Virtualize graphical user interface as shown in
Figure 7-8.
To define a new basic volume, click Create Volumes as shown in Figure 7-9.
The Create Volumes tab opens the Create Volumes menu, which displays three potential
creation methods: Basic, Mirrored, and Custom.
Note: The volume classes that are displayed on the Create Volumes menu depend on the
topology of the system.
In the previous example, the Create Volumes menu showed submenus that allow the
creation of Basic, Mirrored, and Custom volumes (in standard topology):
For a HyperSwap topology, the Create Volumes menu is displayed as shown in
Figure 7-11.
Independent of the topology of the system, the Create Volume menu displays a Basic
volume menu and a Custom volume menu that can be used to customize parameters of
volumes. Custom volumes are described in more detail later in this section.
Notes:
A Basic volume is a volume whose data is striped across all available managed disks
(MDisks) in one storage pool.
A Mirrored volume is a volume with two physical copies, where each volume copy can
belong to a different storage pool.
A Custom volume, in the context of this menu, is either a Basic or Mirrored volume with
customization from the default parameters.
The Create Volumes menu also provides, using the Capacity Savings parameter, the
ability to change the default provisioning of a Basic or Mirrored Volume to
Thin-provisioned or Compressed. For more information, see “Quick Volume Creation
with Capacity Saving options” on page 273.
7.3 Creating volumes
This section focuses on using the Create Volumes menu to create Basic and Mirrored
volumes in a system with standard topology. It also covers creating host-to-volume mappings.
As previously stated, volume creation is available on four different volume classes:
Basic
Mirrored
HyperSwap
Custom
Note: The ability to create HyperSwap volumes by using the GUI simplifies creation and configuration. The GUI achieves this simplification by using the mkvolume command.
Create a Basic volume by clicking Basic as shown in Figure 7-10 on page 264. This action
opens Basic volume menu where you can define the following information:
Pool: The Pool in which the volume is created (drop-down)
Quantity: Number of volumes to be created (numeric up/down)
Capacity: Size of the volume in Units (drop-down)
Capacity Savings (drop-down):
– None
– Thin-provisioned
– Compressed
Name: Name of the Volume (cannot start with a numeric)
I/O group
Use an appropriate naming convention for volumes for easy identification of their association
with the host or host cluster. At a minimum, it should contain the name of the pool or some tag
that identifies the underlying storage subsystem. It can also contain the host name that the
volume is mapped to, or perhaps the content of this volume, such as the name of the
applications to be installed.
When all of the characteristics of the Basic volume have been defined, it can be created by
selecting one of the following options:
Create
Create and Map
Note: The Plus sign (+) icon, highlighted in green in Figure 7-13, can be used to create
more volumes in the same instance of the volume creation wizard.
In this example, the Create option has been selected. The volume-to-host mapping can be
performed later. At the end of the volume creation, a confirmation window appears, as shown
in Figure 7-14.
Success is also indicated by the state of the Basic volume being reported as formatting in the
Volumes pane as shown in Figure 7-15.
Notes:
Fully allocated volumes are automatically formatted through the quick initialization
process after the volume is created. This process makes fully allocated volumes
available for use immediately.
Quick initialization requires a small amount of I/O to complete, and limits the number of
volumes that can be initialized at the same time. Some volume actions, such as moving,
expanding, shrinking, or adding a volume copy, are disabled when the specified volume
is initializing. Those actions are available after the initialization process completes.
The quick initialization process can be disabled in circumstances where it is not
necessary. For example, if the volume is the target of a Copy Services function, the
Copy Services operation formats the volume. The quick initialization process can also
be disabled for performance testing so that the measurements of the raw system
capabilities can take place without waiting for the process to complete.
For more information, see the Fully allocated volumes topic in IBM Knowledge Center:
https://ibm.biz/BdsKht
Normally, this is the primary copy (as indicated in the management GUI by an asterisk (*)). If
one of the mirrored volume copies is temporarily unavailable (for example, because the
storage system that provides the pool is unavailable), the volume remains accessible to
servers. The system remembers which areas of the volume are written and resynchronizes
these areas when both copies are available.
Note: Volume mirroring is not a true disaster recovery (DR) solution because both copies
are accessed by the same node pair and addressable by only a single cluster. However, it
can improve availability.
Generally, keep mirrored volumes on a separate set of physical disks (Pools). Leave the
I/O group option at its default setting of Automatic (see Figure 7-16).
Note: When creating a Mirrored volume using this menu, you are not required to
specify the Mirrored Sync rate. It defaults to 2 MBps. Customization of the
synchronization rate can be done by using the Custom menu.
Quick Volume Creation with Capacity Saving options
The Create Volumes menu also provides, using the Capacity Savings option, the ability to
alter the provisioning of a Basic or Mirrored volume into Thin-provisioned or Compressed.
Select either Thin-provisioned or Compressed from the drop-down menu as shown in
Figure 7-18.
Tip: An alternative way of opening the Actions menu is to highlight (select) a volume
and use the right mouse button.
3. This action opens a Create Mapping window. In this window, use the radio buttons to
select whether to create a mapping to Hosts or Host Clusters. Then, select which
volumes to create the mapping for. You can also select whether to Self Assign SCSI LUN
IDs or let the System Assign them. In this example, we map a single volume to an iSCSI
Host and have the system assign the SCSI LUN IDs, as shown in Figure 7-20. Click Next.
4. A summary of the proposed volume mappings is displayed. To confirm the mappings, click
Map Volumes, as shown in Figure 7-21.
5. The Modify Mappings window displays the command details and then a Task completed
message as shown in Figure 7-22.
Work through these submenus to customize your Custom volume as wanted, and then
commit these changes by using Create as shown in Figure 7-23.
7.5.1 Creating a custom thin-provisioned volume
A thin-provisioned volume can be defined and created by using the Custom menu.
Regarding application reads and writes, thin-provisioned volumes behave as though they
were fully allocated. When creating a thin-provisioned volume, you can specify two capacities:
The real physical capacity allocated to the volume from the storage pool. The real capacity
determines the quantity of extents that are initially allocated to the volume.
Its virtual capacity available to the host. The virtual capacity is the capacity of the volume
that is reported to all other components (for example, FlashCopy, cache, and remote copy)
and to the hosts.
The Thin Provisioning options are as follows (defaults are displayed in parentheses):
– Real capacity: (2%) Specify the size of the real capacity space used during creation.
– Automatically Expand: (Enabled) This option enables the automatic expansion of real
capacity, if more capacity is to be allocated.
– Warning threshold: (Enabled) Enter a threshold for receiving capacity alerts.
– Grain Size: (256 kibibytes (KiB)) Specify the grain size for real capacity. This option
describes the size of chunk of storage to be added to used capacity. For example,
when the host writes 1 megabyte (MB) of new data, the capacity is increased by adding
four chunks of 256 KiB each.
Important: If you do not use the autoexpand feature, the volume will go offline after
reaching real capacity. The default grain size is 256 KiB. The optimum choice of
grain size is dependent upon volume use type. Consider these points:
If you are not going to use the thin-provisioned volume as a FlashCopy source or
target volume, use 256 KiB to maximize performance.
If you are going to use the thin-provisioned volume as a FlashCopy source or
target volume, specify the same grain size for the volume and for the FlashCopy
function.
For more information, see the “Performance Problem When Using EasyTier With
Thin Provisioned Volumes” topic at:
http://www.ibm.com/support/docview.wss?uid=ssg1S1003982
3. Confirm the settings and click Create to define the volume. A Task completed message is
displayed as shown in Figure 7-26.
4. Alternatively, you can create and immediately map this volume to a host by using the
Create and Map option instead.
Figure 7-27 Defining a volume as compressed using the Capacity savings option
2. Open the Compression subsection and verify that Real Capacity is set to a minimum of the default value of 2%. Leave all other parameters at their defaults. See Figure 7-28.
3. Confirm the settings and click Create to define the volume (Figure 7-29).
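For reference, a compressed volume can also be created directly from the CLI by adding the -compressed parameter to mkvdisk. A sketch, assuming a 10 GB volume in Pool0 with the default 2% real capacity (the volume name is a placeholder):
mkvdisk -mdiskgrp Pool0 -iogrp io_grp0 -size 10 -unit gb -rsize 2% -autoexpand -compressed -name <volume_name>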
The progress of formatting and synchronization of a newly created Mirrored Volume can be
checked from the Running Tasks menu. This menu reports the progress of all currently
running tasks, including Volume Format and Volume Synchronization (Figure 7-31).
Creating a Custom Thin-provisioned Mirrored volume
The Custom option in the Create Volumes window is used to customize volume creation. Using this feature, the default options can be overridden and volume creation can be tailored to the specifics of the client’s environment.
The Mirror sync rate can be changed from the default setting by using the Custom option,
changing the Volume copy type to Mirrored in the Volume Location subsection of the
Create Volumes window. This option sets the priority of copy synchronization progress,
enabling a preferential rate to be set for more important volumes.
The summary shows you the capacity information and the allocated space. You can click
Custom and customize the thin-provision settings or the mirror synchronization rate. After
you create the volume, the confirmation window opens as shown in Figure 7-32.
The initial synchronization of thin-mirrored volumes is fast when a small amount of real and
virtual capacity is used.
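For reference, the equivalent CLI creation of a mirrored volume with a non-default synchronization rate uses the -copies and -syncrate parameters of the mkvdisk command. A sketch, assuming one copy in Pool0, one copy in Pool1, and a sync rate attribute of 80 (the size and name are placeholders):
mkvdisk -mdiskgrp Pool0:Pool1 -copies 2 -syncrate 80 -size 10 -unit gb -name <volume_name>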
2. Assign Site Names as shown in Figure 7-34. All three fields must be assigned before
proceeding by clicking Next.
4. Next, Hosts and Storage must be assigned sites. Each host must be assigned to a site as
shown in Figure 7-36 for an example host.
6. A summary of new topology configuration is displayed before the change is committed, as
shown in Figure 7-38. Click Finish to commit the changes.
When the HyperSwap topology is configured, the GUI uses the mkvolume command to create
volumes instead of the traditional mkvdisk command. This section describes the mkvolume
command that is used in HyperSwap topology. The GUI continues to use mkvdisk when all
other classes of volumes are created.
In this section, the new mkvolume command and how the GUI uses this command when the
HyperSwap topology has been configured are described, rather than the “traditional” mkvdisk
command.
Note: It is still possible to create HyperSwap volumes as in the V7.5 release, as described
in the following white paper:
http://www.ibm.com/support/docview.wss?uid=tss1wp102538
You can also get more information in IBM Storwize V7000, Spectrum Virtualize,
HyperSwap, and VMware Implementation, SG24-8317.
HyperSwap volumes are a new type of HA volume that is supported by IBM Spectrum
Virtualize. They are built from two existing IBM Spectrum Virtualize technologies:
Metro Mirror
(VDisk) Volume Mirroring
The GUI simplifies the complexity of HyperSwap volume creation by only presenting the
volume class of HyperSwap as a volume creation option after HyperSwap topology has been
configured.
The capacity and name characteristics are defined as for a Basic volume (in the Volume
Details section) and the mirroring characteristics are defined by the HyperSwap site
parameters (in the Hyperswap Details section).
A summary (lower left of the creation window) indicates the actions that are carried out when
the Create option is selected. As shown in Figure 7-41, a single volume is created, with
volume copies in site1 and site2. This volume is in an active-active (Metro Mirror)
relationship with extra resilience provided by two change volumes.
The command that is issued to create this volume is shown in Figure 7-42, and can be
summarized as follows:
svctask mkvolume -name <name_of_volume> -pool <X:Y> -size <Size_of_volume> -unit
<units>
With a single mkvolume command, a HyperSwap volume is created. Previously (using IBM
Spectrum Virtualize release V7.5), this result was only possible with careful planning and
issuing the following commands:
mkvdisk master_vdisk
mkvdisk aux_vdisk
mkvdisk master_change_volume
mkvdisk aux_change_volume
mkrcrelationship -activeactive
chrcrelationship -masterchange
chrcrelationship -auxchange
addvdiskaccess
In addition, the lsvdisk and GUI functionality are available. The lsvdisk command now includes volume_id, volume_name, and function fields to easily identify the individual VDisks that make up a HyperSwap volume. These views are “rolled-up” in the GUI to provide views
that reflect the client’s view of the HyperSwap volume and its site-dependent copies, as
opposed to the “low-level” VDisks and VDisk Change Volumes.
Likewise, the status of the HyperSwap volume is reported at a “parent” level. If one of the
copies is syncing or offline, the parent HyperSwap volume reflects this state as shown in
Figure 7-44.
rmvolumecopy
Remove a copy of a volume. This command leaves the volume intact. It also converts a
Mirrored, Stretched, or HyperSwap volume into a basic volume. For a HyperSwap volume,
this command includes deleting the active-active relationship and the change volumes.
This command enables a copy to be identified simply by its site.
The -force parameter with rmvdiskcopy is replaced by individual override parameters,
making it clearer to the user exactly what protection they are bypassing.
See IBM Knowledge Center for more details:
https://ibm.biz/BdsKgy
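A minimal sketch of removing the copy of a HyperSwap volume by site, assuming the site name site2 and a placeholder volume name:
rmvolumecopy -site site2 <volume_name>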
7.8.1 Mapping newly created volumes to the host using the wizard
This section explains how to map a volume that was created in 7.3, “Creating volumes” on
page 267. It is assumed that you followed that procedure and the Volumes menu is displayed
showing a list of volumes, as shown in Figure 7-45.
2. Select the host or host cluster to which the new volume should be mapped, as shown in
Figure 7-47. Click Next.
3. A summary window is displayed showing the volume to be mapped along with existing
volumes already mapped to the host or host cluster, as shown in Figure 7-48. Click Map
Volumes.
4. The confirmation window shows the result of the volume mapping task, as shown in
Figure 7-49.
6. Right-click the host or host cluster to which the volume was mapped and select Modify
Volume Mappings, or Modify Shared Volume Mappings for host clusters, as shown in
Figure 7-51.
7. A window is displayed showing a list of volumes already mapped to the host or host
cluster, as shown in Figure 7-52.
The host is now able to access the volumes and store data on them. For host clusters, all
hosts in the cluster are able to access the shared volumes. See 7.9, “Migrating a volume to
another storage pool” on page 299 for information about discovering the volumes on the host
and making additional host settings, if required.
Multiple volumes can also be created in preparation for discovering them and customizing
their mappings later.
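For reference, the same mapping can be created from the CLI with the mkvdiskhostmap command. A minimal sketch, assuming placeholder host and volume names and letting the system assign the SCSI ID by omitting the -scsi parameter:
mkvdiskhostmap -host <host_name> <volume_name>
The resulting mappings can be listed with the lshostvdiskmap command.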
The migration process itself is a low priority process that does not affect the performance of
the IBM Spectrum Virtualize system. However, it moves one extent after another to the new
storage pool, so the performance of the volume is affected by the performance of the new
storage pool after the migration process.
2. The Migrate Volume Copy window opens. If your volume consists of more than one copy,
select the copy that you want to migrate to another storage pool, as shown in Figure 7-54.
If the selected volume consists of one copy, this selection menu is not available.
3. Select the new target storage pool and click Migrate, as shown in Figure 7-55.
4. The volume copy migration starts as shown in Figure 7-56. Click Close to return to the
Volumes window.
After the migration task completes, the Background Tasks menu displays a Recently
Completed Task. Figure 7-58 shows that the volume was migrated to Pool0.
In the Pools → Volumes By Pool menu, the volume is now displayed in the target storage pool, as shown in Figure 7-59.
The volume copy has now been migrated without any host or application downtime to the new
storage pool. It is also possible to migrate both volume copies to other online pools.
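The same migration can also be started from the CLI with the migratevdisk command. A minimal sketch, assuming copy 0 of a placeholder volume is migrated to Pool0:
migratevdisk -vdisk <volume_name> -copy 0 -mdiskgrp Pool0
The progress of the migration can be monitored with the lsmigrate command.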
Another way to migrate volumes to another pool is by performing the migration by using the
volume copy feature, as described in 7.10, “Migrating volumes using the volume copy feature”
on page 303.
Note: Migrating a volume between storage pools with different extent sizes is not
supported. If you need to migrate a volume to a storage pool with a different extent size,
use the volume copy feature instead.
To migrate a volume using the volume copy feature, complete the following steps:
1. Select the volume to create a copy of, and in the Actions menu select Add Volume Copy,
as shown in Figure 7-60.
3. Change the roles of the volume copies by making the new copy the primary copy as
shown in Figure 7-63. The current primary copy is displayed with an asterisk next to its
name.
Figure 7-63 Making the new copy in a different storage pool the primary
4. Split or delete the old copy from the volume as shown in Figure 7-64.
5. Ensure that the new copy is in the target storage pool by double-clicking the volume or
highlighting the volume and selecting Properties from the Actions menu. The properties
of the volume in the target storage pool are shown in Figure 7-65.
Figure 7-65 Verifying the new copy in the target storage pool
The migration of volumes using the volume copy feature requires more user interaction, but
provides some benefits for particular use cases. One such example is migrating a volume
from a tier 1 storage pool to a lower performance tier 2 storage pool. First, the volume copy
feature can be used to create a copy on the tier 2 pool (steps 1 and 2). All reads are still
performed in the tier 1 pool to the primary copy. After the volume copies have synchronized
(step 3), all writes are destaged to both pools, but the reads are still only done from the
primary copy.
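For reference, the same sequence can be performed from the CLI. A sketch, assuming a placeholder volume with existing copy 0 and a hypothetical target pool named Pool_T2:
addvdiskcopy -mdiskgrp Pool_T2 <volume_name>
lsvdisksyncprogress <volume_name>
chvdisk -primary 1 <volume_name>
rmvdiskcopy -copy 0 <volume_name>
Wait for lsvdisksyncprogress to report 100% before you change the primary copy and remove the original copy.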
Appendix B, “CLI setup” on page 769 gives details about how to set up SAN boot.
Creating an image mode disk: If you do not specify the -size parameter when you create
an image mode disk, the entire MDisk capacity is used.
You must know the following information before you start to create the volume:
In which storage pool the volume has its extents
From which I/O Group the volume is accessed
Which IBM Spectrum Virtualize node is the preferred node for the volume
Size of the volume
Name of the volume
Type of the volume
Whether this volume is managed by IBM Easy Tier to optimize its performance
When you are ready to create your striped volume, use the mkvdisk command. In
Example 7-1, this command creates a 10 gigabyte (GB) striped volume with volume ID 8
within the storage pool Pool0 and assigns it to the io_grp0 I/O Group. Its preferred node is
node 1.
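The command in Example 7-1 might look like the following sketch, with the name Tiger taken from the lsvdisk output that follows (the exact parameters in the original example might differ):
mkvdisk -mdiskgrp Pool0 -iogrp io_grp0 -node 1 -size 10 -unit gb -name Tiger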
To verify the results, use the lsvdisk command, as shown in Example 7-2.
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name Pool0
capacity 10.00GB
type striped
formatted no
formatting yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076400F580049800000000000010
preferred_node_id 2
fast_write_state not_empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0
filesystem
mirror_write_priority latency
RC_change no
compressed_copy_count 0
access_IO_group_count 1
last_access_time
parent_mdisk_grp_id 0
parent_mdisk_grp_name Pool0
owner_type none
owner_id
owner_name
encrypt yes
volume_id 8
volume_name Tiger
function
throttle_id
throttle_name
IOPs_limit
bandwidth_limit_MB
volume_group_id
volume_group_name
cloud_backup_enabled no
cloud_account_id
cloud_account_name
backup_status off
last_backup_time
restore_status none
backup_grain_size
deduplicated_copy_count 0
copy_id 0
status online
sync yes
auto_delete no
primary yes
mdisk_grp_id 0
mdisk_grp_name Pool0
6 MIRRORED_SYNC_RATE_16 0 io_grp0 online many many 10.00GB many
6005076400F580049800000000000008 0 2 empty 0 no 0 many many no yes 6 MIRRORED_SYNC_RATE_16
7 THIN_PROVISION_MIRRORED_VOL 0 io_grp0 online many many 10.00GB many
6005076400F580049800000000000009 0 2 empty 2 no 0 many many no yes 7
THIN_PROVISION_MIRRORED_VOL
8 Tiger 0 io_grp0 online 0 Pool0 10.00GB striped 6005076400F580049800000000000010 0 1
not_empty 0 no 0 0 Pool0 yes yes 8 Tiger
12 vdisk0_restore 0 io_grp0 online 0 Pool0 10.00GB striped
6005076400F58004980000000000000E 0 1 empty 0 no 0 0 Pool0 no yes 12 vdisk0_restore
13 vdisk0_restore1 0 io_grp0 online 0 Pool0 10.00GB striped
6005076400F58004980000000000000F 0 1 empty 0 no 0 0 Pool0 no yes 13 vdisk0_restore1
This command creates a thin-provisioned 10 GB volume. The volume belongs to the storage
pool that is named Site1_Pool and is owned by input/output (I/O) Group io_grp0. The real
capacity automatically expands until the volume size of 10 GB is reached. The grain size is
set to 256 KB, which is the default.
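Based on that description, the command might have looked like the following sketch (the pool, I/O group, size, and grain size are taken from the text; the -rsize value and the volume name are assumptions):
mkvdisk -mdiskgrp Site1_Pool -iogrp io_grp0 -size 10 -unit gb -rsize 2% -autoexpand -grainsize 256 -name <volume_name>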
Disk size: When the -rsize parameter is used, you have the following options: disk_size,
disk_size_percentage, and auto.
Specify the units for a disk_size integer by using the -unit parameter. The default is MB.
The -rsize value can be greater than, equal to, or less than the size of the volume.
The auto option creates a volume copy that uses the entire size of the MDisk. If you specify
the -rsize auto option, you must also specify the -vtype image option.
You can use this command to bring a non-virtualized disk under the control of the clustered
system. After it is under the control of the clustered system, you can migrate the volume from
the single managed disk.
When the first MDisk extent is migrated, the volume is no longer an image mode volume. You
can add an image mode volume to an already populated storage pool with other types of
volumes, such as striped or sequential volumes.
Size: An image mode volume must be at least 512 bytes (the capacity cannot be 0). That
is, the minimum size that can be specified for an image mode volume must be the same as
the storage pool extent size to which it is added, with a minimum of 16 MiB.
You must use the -mdisk parameter to specify an MDisk that has a mode of unmanaged. The
-fmtdisk parameter cannot be used to create an image mode volume.
Capacity: If you create a mirrored volume from two image mode MDisks without specifying
a -capacity value, the capacity of the resulting volume is the smaller of the two MDisks.
The remaining space on the larger MDisk is inaccessible.
If you do not specify the -size parameter when you create an image mode disk, the entire
MDisk capacity is used.
Use the mkvdisk command to create an image mode volume, as shown in Example 7-5.
This command creates an image mode volume that is called Image_Volume_A that uses the
mdisk25 MDisk. The volume belongs to the storage pool ITSO_Pool1, and the volume is
owned by the io_grp0 I/O Group.
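Based on that description, the command might look like the following sketch (the parameter order in the original Example 7-5 might differ):
mkvdisk -mdiskgrp ITSO_Pool1 -iogrp io_grp0 -vtype image -mdisk mdisk25 -name Image_Volume_A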
If you run the lsvdisk command again, the volume that is named Image_Volume_A is shown with a type of image, as shown in Example 7-6.
empty 0 no 0 5
ITSO_Pool1 no no 6 Image_Volume_A
In addition, you can use volume mirroring as an alternative method of migrating volumes
between storage pools. For example, if you have a non-mirrored volume in one storage pool
and want to migrate that volume to another storage pool, you can add a copy of the volume
and specify the second storage pool. After the copies are synchronized, you can delete the
copy on the first storage pool. The volume is copied to the second storage pool while
remaining online during the copy.
To create a mirrored copy of a volume, use the addvdiskcopy command. This command adds
a copy of the chosen volume to the selected storage pool, which changes a non-mirrored
volume into a mirrored volume.
The following scenario shows the creation of a mirrored volume copy from one storage pool to
another storage pool. As you can see in Example 7-7, the volume has a single copy with
copy_id 0 in pool Pool0.
copy_id 0
status online
sync yes
auto_delete no
primary yes
mdisk_grp_id 0
mdisk_grp_name Pool0
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status balanced
tier tier0_flash
tier_capacity 0.00MB
tier tier1_flash
tier_capacity 0.00MB
tier tier_enterprise
tier_capacity 0.00MB
tier tier_nearline
tier_capacity 10.00GB
compressed_copy no
uncompressed_used_capacity 10.00GB
parent_mdisk_grp_id 0
parent_mdisk_grp_name Pool0
encrypt yes
deduplicated_copy no
used_capacity_before_reduction 0.00MB
Example 7-8 shows adding the volume copy mirror by using the addvdiskcopy command.
During the synchronization process, you can see the status by using the
lsvdisksyncprogress command. As shown in Example 7-9, the first time that the status is
checked, the synchronization progress is at 48%, and the estimated completion time is
161026203918. The estimated completion time is displayed in the YYMMDDHHMMSS format.
In our example, it is 2016, Oct-26 20:39:18. The second time that the command is run, the
progress status is at 100%, and the synchronization is complete.
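As a sketch of the general syntax behind these steps (the volume name VOLUME_NO_MIRROR is a placeholder, not necessarily the name used in Examples 7-8 and 7-9), the copy is added and its synchronization is then monitored as follows:
addvdiskcopy -mdiskgrp Pool1 VOLUME_NO_MIRROR
lsvdisksyncprogress VOLUME_NO_MIRROR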
As you can see in Example 7-10, the new mirrored volume copy (copy_id 1) was added and
can be seen by using the lsvdisk command.
copy_id 0
status online
sync yes
auto_delete no
primary yes
mdisk_grp_id 0
mdisk_grp_name Pool0
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status balanced
tier tier0_flash
tier_capacity 0.00MB
tier tier1_flash
tier_capacity 0.00MB
tier tier_enterprise
tier_capacity 0.00MB
tier tier_nearline
tier_capacity 10.00GB
compressed_copy no
uncompressed_used_capacity 10.00GB
parent_mdisk_grp_id 0
parent_mdisk_grp_name Pool0
encrypt yes
deduplicated_copy no
used_capacity_before_reduction 0.00MB
copy_id 1
status online
sync yes
auto_delete no
primary no
mdisk_grp_id 1
mdisk_grp_name Pool1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status balanced
tier tier0_flash
tier_capacity 0.00MB
tier tier1_flash
tier_capacity 0.00MB
tier tier_enterprise
tier_capacity 0.00MB
tier tier_nearline
tier_capacity 10.00GB
compressed_copy no
uncompressed_used_capacity 10.00GB
parent_mdisk_grp_id 1
parent_mdisk_grp_name Pool1
encrypt yes
deduplicated_copy no
used_capacity_before_reduction 0.00MB
While you are adding a volume copy mirror, you can define a mirror with different parameters from the original volume copy. For example, you can define a thin-provisioned volume copy of a fully allocated volume copy to migrate a thick-provisioned volume to a thin-provisioned volume, and vice versa.
Volume copy mirror parameters: To change the parameters of a volume copy mirror, you
must delete the volume copy and redefine it with the new values.
This section shows the usage of addvdiskcopy with the -autodelete flag set. The
-autodelete flag specifies that the primary copy is deleted after the secondary copy is
synchronized.
copy_id 0
status online
sync yes
auto_delete no
primary yes
...
compressed_copy no
...
Example 7-13 adds a compressed copy with the -autodelete flag set.
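As a hedged sketch of the general syntax (the volume name is a placeholder, and the -rsize and -autoexpand thin-provisioning values are illustrative assumptions rather than the values used in Example 7-13):
addvdiskcopy -mdiskgrp Pool1 -rsize 2% -autoexpand -compressed -autodelete VOLUME_NAME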
Example 7-14 shows the lsvdisk output with an additional compressed volume (copy 1) and
volume copy 0 being set to auto_delete yes.
copy_id 0
status online
sync yes
auto_delete yes
primary yes
...
copy_id 1
status online
sync no
auto_delete no
primary no
...
When copy 1 is synchronized, copy 0 is deleted. You can monitor the progress of volume copy
synchronization by using lsvdisksyncprogress.
Note: Consider the compression guidelines in 10.5, “Real-time Compression” on page 445
before adding the first compressed copy to a system.
Example 7-15 shows the splitvdiskcopy command, which is used to split a mirrored volume.
It creates a volume that is named SPLIT_VOL from the volume that is named
VOLUME_WITH_MIRRORED_COPY.
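As a sketch of the general syntax (the copy ID of 1 is an assumption for illustration; verify the actual copy ID with the lsvdiskcopy command first):
splitvdiskcopy -copy 1 -name SPLIT_VOL VOLUME_WITH_MIRRORED_COPY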
As you can see in Example 7-16, the new volume that is named SPLIT_VOL was created as an
independent volume.
copy_id 0
status online
sync yes
auto_delete no
primary yes
mdisk_grp_id 1
mdisk_grp_name Pool1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status balanced
tier tier0_flash
tier_capacity 0.00MB
tier tier1_flash
tier_capacity 0.00MB
tier tier_enterprise
tier_capacity 0.00MB
tier tier_nearline
tier_capacity 10.00GB
compressed_copy no
uncompressed_used_capacity 10.00GB
parent_mdisk_grp_id 1
parent_mdisk_grp_name Pool1
encrypt yes
deduplicated_copy no
used_capacity_before_reduction 0.00MB
After you issue the command that is shown in Example 7-15 on page 317, the mirrored copy is split from the source volume and a new, independent volume is created from that copy automatically.
You can specify a new name or label. The new name can be used to reference the volume.
The I/O Group with which this volume is associated can be changed. Changing the I/O Group
with which this volume is associated requires a flush of the cache within the nodes in the
current I/O Group to ensure that all data is written to disk. I/O must be suspended at the host
level before you perform this operation.
Tips: If the volume has a mapping to any hosts, it is impossible to move the volume to an
I/O Group that does not include any of those hosts.
This operation fails if insufficient space exists to allocate bitmaps for a mirrored volume in
the target I/O Group.
If the -force parameter is used and the system is unable to destage all write data from the
cache, the contents of the volume are corrupted by the loss of the cached data.
If the -force parameter is used to move a volume that has out-of-sync copies, a full
resynchronization is required.
If any remote copy, IBM FlashCopy, or host mappings still exist for this volume, the delete fails
unless the -force flag is specified. This flag ensures the deletion of the volume and any
volume to host mappings and copy mappings.
If the volume is the subject of a “migrate to image mode” process, the delete fails unless the
-force flag is specified. This flag halts the migration and then deletes the volume.
If the command succeeds (without the -force flag) for an image mode volume, the underlying
back-end controller logical unit is consistent with the data that a host might previously read
from the image mode volume. That is, all fast write data was flushed to the underlying LUN. If
the -force flag is used, consistency is not guaranteed.
If any non-destaged data exists in the fast write cache for this volume, the deletion of the volume fails unless the -force flag is specified. In that case, any non-destaged data in the fast write cache is discarded.
Use the rmvdisk command to delete a volume from your IBM Spectrum Virtualize
configuration, as shown in Example 7-17.
This command deletes the volume_A volume from the IBM Spectrum Virtualize configuration.
If the volume is assigned to a host, you must use the -force flag to delete the volume, as
shown in Example 7-18.
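As a sketch of the two forms of the rmvdisk command that are described here:
rmvdisk volume_A
rmvdisk -force volume_A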
Use the chsystem command to set an interval for which a volume must be idle before it can be deleted from the system (see the sketch after the following list). Commands that are affected by this setting fail if the volume has not been idle for the specified interval, even when the -force parameter is used. Among others, the following commands are affected by this setting:
rmhostiogrp
rmhost
rmhostport
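A minimal sketch of the volume protection setting follows. It assumes the chsystem parameters -vdiskprotectionenabled and -vdiskprotectiontime (interval in minutes); verify the parameter names and values for your code level:
chsystem -vdiskprotectionenabled yes -vdiskprotectiontime 60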
Assuming that your operating systems support expansion, you can use the expandvdisksize
command to increase the capacity of a volume, as shown in Example 7-19.
This command expands the volume_C volume (which was 35 GB) by another 5 GB to give it a
total size of 40 GB.
To expand a thin-provisioned volume, you can use the -rsize option, as shown in
Example 7-20. This command changes the real size of the volume_B volume to a real capacity
of 55 GB. The capacity of the volume is unchanged.
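As a sketch of the two forms (the amounts shown are illustrative; the -rsize form adjusts the real capacity of a thin-provisioned volume):
expandvdisksize -size 5 -unit gb volume_C
expandvdisksize -rsize 5 -unit gb volume_B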
Important: If a volume is expanded, its type becomes striped even if it was previously
sequential or in image mode. If there are not enough extents to expand your volume to the
specified size, the following error message is displayed:
CMMVC5860E The action failed because there were not enough extents in the
storage pool.
When the HBA on the host scans for devices that are attached to it, the HBA discovers all of
the volumes that are mapped to its FC ports. When the devices are found, each one is
allocated an identifier (SCSI LUN ID).
For example, the first disk that is found is generally SCSI LUN 1. You can control the order in
which the HBA discovers volumes by assigning the SCSI LUN ID as required. If you do not specify a SCSI LUN ID, the system automatically assigns the next available SCSI LUN ID, based on any mappings that exist with that host.
By using the volume and host definitions that were created in the previous sections, you can assign volumes to the hosts that are ready to use them. Use the mkvdiskhostmap command, as shown in Example 7-21.
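As a sketch of the mapping commands for this scenario, using the host and volume names described in the text:
mkvdiskhostmap -host Almaden volume_B
mkvdiskhostmap -host Almaden volume_C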
Example 7-22 shows that volume_B and volume_C are now assigned to the host Almaden.
id:name:SCSI_id:vdisk_id:vdisk_name:vdisk_UID
2:Almaden:0:26:volume_B:6005076801AF813F1000000000000020
2:Almaden:1:27:volume_C:6005076801AF813F1000000000000021
Assigning a specific LUN ID to a volume: The optional -scsi scsi_num parameter can
help assign a specific LUN ID to a volume that is to be associated with a host. The default
(if nothing is specified) is to increment based on what is already assigned to the host.
Certain HBA device drivers stop when they find a gap in the SCSI LUN IDs, as shown in the
following examples:
Volume 1 is mapped to Host 1 with SCSI LUN ID 1.
Volume 2 is mapped to Host 1 with SCSI LUN ID 2.
Volume 3 is mapped to Host 1 with SCSI LUN ID 4.
When the device driver scans the HBA, it might stop after discovering volumes 1 and 2
because no SCSI LUN is mapped with ID 3.
It is not possible to map a volume to a host more than once as different LUNs (Example 7-23).
This command maps the volume that is called volume_A to the host that is called Siam. All
tasks that are required to assign a volume to an attached host are complete.
If you are using host clusters, use the mkvolumehostclustermap command to map a volume to
a host cluster instead (Example 7-24).
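A sketch of the general command form follows; the host cluster and volume names are placeholders for illustration:
mkvolumehostclustermap -hostcluster vmware_cluster volume_A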
You can check the volumes that are assigned to a host by using the lshostvdiskmap command. In this case, the output shows that the host Siam has only one assigned volume, which is called volume_A. The SCSI LUN ID is also shown, which is the ID by which the volume is presented to the host. If no host is specified, all defined host-to-volume mappings are returned.
You can also use the lshostclustervolumemap command to show the volumes that are
assigned to a specific host cluster, as shown in Example 7-26.
To unmap a volume from a host, use the rmvdiskhostmap command. This command unmaps the volume that is called volume_D from the host that is called Tiger.
You can also use the rmvolumehostclustermap command to delete a volume mapping from a
host cluster, as shown in Example 7-28.
This command unmaps the volume called UNCOMPRESSED_VOL from the host cluster called
vmware_cluster.
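As a sketch of the general command form, using the names from this scenario:
rmvolumehostclustermap -hostcluster vmware_cluster UNCOMPRESSED_VOL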
As you can see from the parameters that are shown in Example 7-29, before you can migrate
your volume, you must know the name of the volume that you want to migrate and the name
of the storage pool to which you want to migrate it. To discover the names, run the lsvdisk
and lsmdiskgrp commands.
After you know these details, you can run the migratevdisk command, as shown in
Example 7-29.
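As a sketch of the general command form (the volume and target pool names are placeholders, and the -threads parameter is optional):
migratevdisk -vdisk VOLUME_NAME -mdiskgrp TARGET_POOL -threads 4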
Tips: If insufficient extents are available within your target storage pool, you receive an
error message. Ensure that the source MDisk group and target MDisk group have the
same extent size.
By using the optional threads parameter, you can assign a priority to the migration process.
The default is 4, which is the highest priority setting. However, if you want the process to
take a lower priority over other types of I/O, you can specify 3, 2, or 1.
You can run the lsmigrate command at any time to see the status of the migration process,
as shown in Example 7-30.
IBM_2145:ITSO_CLUSTER:superuser>lsmigrate
migrate_type MDisk_Group_Migration
progress 76
migrate_source_vdisk_index 27
migrate_target_mdisk_grp 2
max_thread_count 4
migrate_source_vdisk_copy_id 0
To migrate a fully managed volume to an image mode volume, use the migratetoimage command (a sketch follows). In this example, you migrate the data from volume_A onto mdisk10, and the MDisk is put into the STGPool_IMAGE storage pool.
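As a sketch of the general command form, using the names from this example:
migratetoimage -vdisk volume_A -mdisk mdisk10 -mdiskgrp STGPool_IMAGE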
You can use the shrinkvdisksize command to shrink the physical capacity that is allocated to a particular volume by the specified amount. You can also use this command to shrink the virtual capacity of a thin-provisioned volume without altering the physical capacity that is assigned to the volume. Use the following parameters:
For a non-thin-provisioned volume, use the -size parameter.
For a thin-provisioned volume’s real capacity, use the -rsize parameter.
For a thin-provisioned volume’s virtual capacity, use the -size parameter.
When the virtual capacity of a thin-provisioned volume is changed, the warning threshold is
automatically scaled to match. The new threshold is stored as a percentage.
The system arbitrarily reduces the capacity of the volume by removing a partial extent, one
extent, or multiple extents from those extents that are allocated to the volume. You cannot
control which extents are removed. Therefore, you cannot assume that it is unused space that
is removed.
Image mode volumes cannot be reduced in size. Instead, they must first be migrated to fully
managed mode. To run the shrinkvdisksize command on a mirrored volume, all copies of
the volume must be synchronized.
Important: Consider the following guidelines when you are shrinking a disk:
If the volume contains data, do not shrink the disk.
Certain operating systems or file systems use the outer edge of the disk for
performance reasons. This command can shrink a FlashCopy target volume to the
same capacity as the source.
Before you shrink a volume, validate that the volume is not mapped to any host objects.
If the volume is mapped, data is displayed. You can determine the exact capacity of the
source or master volume by issuing the svcinfo lsvdisk -bytes vdiskname command.
Shrink the volume by the required amount by issuing the shrinkvdisksize -size
disk_size -unit b | kb | mb | gb | tb | pb vdisk_name | vdisk_id command.
Assuming that your operating system supports it, you can use the shrinkvdisksize command
to decrease the capacity of a volume, as shown in Example 7-32.
This command shrinks a volume that is called volume_D from a total size of 80 GB by 44 GB,
to a new total size of 36 GB.
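Following the syntax that is given in the Important box above, a sketch of the command for this example is:
shrinkvdisksize -size 44 -unit gb volume_D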
The lsmdiskmember command displays a list of all of the volume IDs that correspond to the volume copies that use mdisk8. To correlate the IDs that are displayed in this output to volume names, issue the lsvdisk command.
If you want to know more about these MDisks, you can run the lsmdisk command using the
ID that is displayed in Example 7-35 rather than the name.
7.11.21 Showing from which storage pool a volume has its extents
Use the lsvdisk command to show to which storage pool a specific volume belongs, as
shown in Example 7-36.
vdisk_UID 6005076400F580049800000000000002
preferred_node_id 2
fast_write_state empty
cache readwrite
udid 4660
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0
filesystem
mirror_write_priority latency
RC_change no
compressed_copy_count 0
access_IO_group_count 1
last_access_time
parent_mdisk_grp_id 0
parent_mdisk_grp_name Pool0
owner_type none
owner_id
owner_name
encrypt yes
volume_id 0
volume_name A_MIRRORED_VOL_1
function
throttle_id 1
throttle_name throttle1
IOPs_limit 233
bandwidth_limit_MB 122
volume_group_id
volume_group_name
cloud_backup_enabled no
cloud_account_id
cloud_account_name
backup_status off
last_backup_time
restore_status none
backup_grain_size
deduplicated_copy_count 0
copy_id 0
status online
sync yes
auto_delete no
primary yes
mdisk_grp_id 0
mdisk_grp_name Pool0
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status measured
To learn more about these storage pools, issue the lsmdiskgrp command, as described in
Chapter 6, “Storage pools” on page 197.
The lsvdiskhostmap command shows the host or hosts to which the volume_B volume is mapped. Duplicate entries are normal because multiple paths exist between the clustered system and the host. To be sure that the operating system on the host sees the disk only one time, you must install and configure a multipath software application, such as IBM Subsystem Device Driver (SDD).
Specifying the -delim flag: Although the optional -delim flag normally comes at the end
of the command string, you must specify this flag before the volume name in this case.
Otherwise, the command does not return any data.
The lshostvdiskmap command shows which volumes are mapped to the host called Almaden.
Specifying the -delim flag: Although the optional -delim flag normally comes at the end
of the command string, you must specify this flag before the volume name in this case.
Otherwise, the command does not return any data.
State: In Example 7-39, the state of each path is OPEN. Sometimes, the state is
CLOSED. This state does not necessarily indicate a problem because it might be a
result of the path’s processing stage.
2. Run the lshostvdiskmap command to return a list of all assigned volumes, as shown in
Example 7-40.
4. Query the MDisks with the lsmdisk mdiskID command to discover their controller and
LUN information, as shown in Example 7-42. The output displays the controller name and
the controller LUN ID to help you to track back to a LUN within the disk subsystem (if you
gave your controller a unique name, such as a serial number).
7.12 I/O throttling
You can set a limit on the number of I/O operations that are accepted for a volume. The limit is
set in terms of I/O operations per second (IOPS) or bandwidth (MBps, GBps, or TBps). By
default, no I/O throttling rate is set when a volume is created. I/O throttling is also referred to
as I/O governing. Each volume can have up to two throttles defined: one for bandwidth and
one for IOPS.
When deciding between using IOPS or bandwidth as the I/O governing throttle, consider the
disk access profile of the application. Database applications generally issue large amounts of
I/O, but transfer only a relatively small amount of data. In this case, setting an I/O governing
throttle that is based on bandwidth might not achieve much. A throttle based on IOPS would
be better suited to the volume.
Alternately, a video streaming application generally issues a small amount of I/O, but transfers
large amounts of data. In contrast to the database example, setting an I/O governing throttle
that is based on IOPS might not achieve much, so it is better to use a bandwidth throttle for
the volume.
An I/O governing rate of 0 does not mean that zero IOPS or bandwidth can be achieved for
this volume, but rather that no throttle is set for this volume.
Note:
I/O governing does not affect FlashCopy and data migration I/O rates
I/O governing on Metro Mirror or Global Mirror secondary volumes does not affect the
rate of data copy from the primary volume
3. After the Edit Throttle task completes successfully, you are redirected to the Edit Throttle
menu. It is possible to set another throttle for the same volume based on bandwidth. The
Edit Throttle menu can be closed by clicking Close.
2. The View All Throttles menu shows all defined system throttles, filtered to show only
Volume Throttles, as shown in Figure 7-69.
You can view other throttles by selecting a different throttle type in the drop-down menu, as
shown in Figure 7-70.
2. In the Edit Throttle window, click Remove for the throttle you want to remove. In our
example, we remove the IOPS throttle of 10,000 as shown in Figure 7-72.
After the Edit Throttle task completes successfully, you are redirected to the Edit Throttle
menu. It is possible to remove another throttle for the same volume based on bandwidth if one
exists. The Edit Throttle menu can be closed by clicking Close.
Chapter 8. Hosts
This chapter describes the host configuration procedures that are required to attach
supported hosts to the IBM Spectrum Virtualize system. The chapter also introduces new
concepts about Host Clusters, and N-Port Virtualization ID (NPIV) support from a host’s
perspective.
The ability to consolidate storage for attached open systems hosts provides the following
benefits:
Easier storage management
Increased utilization rate of the installed storage capacity
Advanced Copy Services functions offered across storage systems from separate vendors
Only one multipath driver is required for attached hosts
Hosts can be connected to the IBM Spectrum Virtualize system by using any of the following
protocols:
Fibre Channel (FC)
Fibre Channel over Ethernet (FCoE)
Internet Small Computer System Interface (iSCSI)
Hosts that connect to the IBM Spectrum Virtualize system by using the fabric switches that
use FC or FCoE protocol must be zoned appropriately, as indicated in Chapter 3, “Planning”
on page 51.
Hosts that connect to the IBM Spectrum Virtualize system with iSCSI protocol must be
configured appropriately, as indicated in Chapter 3, “Planning” on page 51.
Note: Certain host operating systems can be directly connected to the IBM Spectrum
Virtualize system without the need for FC fabric switches. For more information, go to the
IBM System Storage Interoperation Center (SSIC):
https://www.ibm.com/systems/support/storage/ssic/interoperability.wss
For load balancing and access redundancy on the host side, the use of a host multipathing
driver is required. A host multipathing I/O driver is required in the following situations:
Protection from fabric link failures, including port failures on the IBM Spectrum Virtualize
system nodes
Protection from a host HBA failure (if two HBAs are in use)
Protection from fabric failures if the host is connected through two HBAs to two separate
fabrics
Load balancing across the host HBAs
To learn about various host operating systems and versions that are supported by SVC, go to
the IBM System Storage Interoperation Center (SSIC):
https://www.ibm.com/systems/support/storage/ssic/interoperability.wss
To learn about how to attach various supported host operating systems to SVC, see:
https://ibm.biz/BdjKvw
If your host operating system is not in SSIC, you can request an IBM representative to submit
a special request for support with the Storage Customer Opportunity REquest (SCORE) tool
for evaluation:
https://www.ibm.com/systems/support/storage/scorerpq/int/logon.do
Volumes that are mapped to a host cluster are assigned to all members of the host cluster
with the same SCSI ID.
A typical use case is to define a host cluster that contains all of the WWPNs belonging to the hosts that are participating in a host operating system-based cluster, such as IBM PowerHA®, Microsoft Cluster Server (MSCS), and so on.
The following new commands have been added to deal with host clusters:
lshostcluster
lshostclustermember
lshostclustervolumemap
mkhost (modified to put host in a host cluster on creation)
rmhostclustermember
rmhostcluster
rmvolumehostclustermap
Note: Host clusters enable you to create individual hosts and add them to a host cluster.
Care must be taken to make sure that no loss of access occurs when changing to host
clusters.
Traditionally, if one node fails or is removed for some reason, the paths that are presented for volumes from that node go offline. In this case, it is up to the native OS multipathing software to fail over from using both sets of WWPNs to using only those that remain online. While this process is exactly what multipathing software is designed to do, occasionally it can be problematic, particularly if paths are not seen as coming back online for some reason.
Starting with IBM Spectrum Virtualize V7.7, NPIV mode can be enabled on the system. When NPIV mode is enabled on the IBM Spectrum Virtualize system, ports do not come online until they are ready to service I/O, which improves host behavior around node unpends. In addition, path failures due to an offline node are masked from hosts, and their multipathing driver does not need to perform any path recovery.
Primary NPIV Port: This is the WWPN that communicates with backend storage, and can be used for node to node traffic (local or remote).
Primary Host Attach Port: This is the WWPN that communicates with hosts. It is a target port only. This is the primary port, so it is based on this local node's WWNN.
Failover Host Attach Port: This is a standby WWPN that communicates with hosts and is only brought online if the partner node within the I/O Group goes offline. This is the same as the Primary Host Attach WWPN of the partner node.
Figure 8-1 depicts the three WWPNs associated with an SVC port when NPIV is enabled.
Figure 8-1 Allocation of NPIV virtual WWPN ports per physical port
The failover host attach port is not currently active. Figure 8-2 shows what happens when the
partner node fails. After the node failure, the failover host attach ports on the remaining node
become active and take on the WWPN of the failed node’s primary host attach port.
Note: Figure 8-2 shows only two ports per node in detail, but the same applies for all
physical ports.
Figure 8-2 Allocation of NPIV virtual WWPN ports per physical port after a node failure
From V7.7 onward, this process happens automatically when NPIV is enabled at a system level in SVC. This failover happens only between the two nodes in the same I/O Group. Similar NPIV capabilities were introduced with V8.1, allowing a Hot Spare Node to swap into an I/O Group.
A transitional mode allows hosts to be migrated from systems that did not have NPIV enabled to NPIV-enabled systems, providing a transition period while hosts are rezoned to the primary host attach WWPNs.
The process for enabling NPIV on a new IBM Spectrum Virtualize system is slightly different
than on an existing system. For more information, see IBM Knowledge Center:
https://ibm.biz/BdjKbi
Note: NPIV is only supported for Fibre Channel protocol. It is not supported for FCoE or
iSCSI protocols.
2. Run the lsiogrp <id> | grep fctargetportmode command for the specific I/O group ID to display the fctargetportmode setting. If the setting is enabled, as shown in Example 8-2, NPIV host target port mode is active.
3. The virtual WWPNs can be listed using the lstargetportfc command, as shown in
Example 8-3. Look for the host_io_permitted and virtualized columns to be yes,
meaning the WWPN in those lines is a primary host attach port and should be used when
zoning the hosts to the SVC.
39 500507680C24041D 500507680C00041D 4 2 2 011000 no no
40 500507680C28041D 500507680C00041D 4 2 2 011001 yes yes
IBM_2145:DH8SVC_B:superuser>
4. At this point, you can zone the hosts using the primary host attach ports (virtual WWPNs)
of the SAN Volume Controller ports, as shown in bold in the output of Example 8-3 on
page 342.
5. If the status of fctargetportmode is disabled and this is a new installation, run the chiogrp
command to set enabled NPIV mode, as shown in Example 8-4.
You can now configure zones for hosts by using the primary host attach ports (virtual
WWPNs) of the IBM Spectrum Virtualize ports, as shown in bold in the output of Example 8-3
on page 342.
If your system is not running with NPIV enabled for host attachment, enable NPIV by
completing the following steps after ensuring that you meet the prerequisites:
1. Audit your SAN fabric layout and zoning rules because NPIV has stricter requirements.
Ensure that equivalent ports are on the same fabric and in the same zone.
2. Check the path count between your hosts and the IBM Spectrum Virtualize system to
ensure that the number of paths is half of the usual supported maximum.
For more information, see the topic about zoning considerations for N_Port ID
Virtualization in IBM Knowledge Center:
https://ibm.biz/BdjK7U
3. Run the lstargetportfc command to discover the primary host attach WWPNs (virtual WWPNs), as shown in bold in Example 8-7. You can identify these virtual WWPNs because they do not yet permit host I/O and have no nportid assigned, because NPIV has not yet been enabled.
Example 8-7 Using the lstargetportfc command to get primary host WWPNs (virtual WWPNs)
IBM_2145:ITSO_DH8:superuser>lstargetportfc
id WWPN WWNN port_id owning_node_id current_node_id nportid host_io_permitted
virtualized
1 500507680C110000 500507680C000000 1 1 1 010813 yes no
2 500507680C150000 500507680C000000 1 1 000000 no yes
3 500507680C120000 500507680C000000 2 1 1 020800 yes no
4 500507680C160000 500507680C000000 2 1 000000 no yes
5 500507680C130000 500507680C000000 3 1 1 010913 yes no
6 500507680C170000 500507680C000000 3 1 000000 no yes
7 500507680C140000 500507680C000000 4 1 1 020900 yes no
8 500507680C180000 500507680C000000 4 1 000000 no yes
33 500507680C110508 500507680C000508 1 2 2 010C13 yes no
34 500507680C150508 500507680C000508 1 2 000000 no yes
35 500507680C120508 500507680C000508 2 2 2 020C00 yes no
36 500507680C160508 500507680C000508 2 2 000000 no yes
37 500507680C130508 500507680C000508 3 2 2 010D13 yes no
38 500507680C170508 500507680C000508 3 2 000000 no yes
39 500507680C140508 500507680C000508 4 2 2 020D00 yes no
40 500507680C180508 500507680C000508 4 2 000000 no yes
4. Enable transitional mode for NPIV on IBM Spectrum Virtualize system (Example 8-8).
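As a sketch of the command, assuming I/O group io_grp0 (repeat for each I/O group that serves hosts):
chiogrp -fctargetportmode transitional io_grp0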
5. Ensure that the primary host attach WWPNs (virtual WWPNs) now allow host traffic, as
shown in bold in Example 8-9.
Example 8-9 Host attach WWPNs (virtual WWPNs) permitting host traffic
IBM_2145:ITSO_DH8:superuser>lstargetportfc
id WWPN WWNN port_id owning_node_id current_node_id nportid host_io_permitted
virtualized
1 500507680C110000 500507680C000000 1 1 1 010813 yes no
2 500507680C150000 500507680C000000 1 1 1 010814 yes yes
3 500507680C120000 500507680C000000 2 1 1 020800 yes no
4 500507680C160000 500507680C000000 2 1 1 020801 yes yes
5 500507680C130000 500507680C000000 3 1 1 010913 yes no
6 500507680C170000 500507680C000000 3 1 1 010914 yes yes
7 500507680C140000 500507680C000000 4 1 1 020900 yes no
8 500507680C180000 500507680C000000 4 1 1 020901 yes yes
33 500507680C110508 500507680C000508 1 2 2 010C13 yes no
34 500507680C150508 500507680C000508 1 2 2 010C14 yes yes
35 500507680C120508 500507680C000508 2 2 2 020C00 yes no
36 500507680C160508 500507680C000508 2 2 2 020C01 yes yes
37 500507680C130508 500507680C000508 3 2 2 010D13 yes no
38 500507680C170508 500507680C000508 3 2 2 010D14 yes yes
39 500507680C140508 500507680C000508 4 2 2 020D00 yes no
40 500507680C180508 500507680C000508 4 2 2 020D01 yes yes
6. Add the primary host attach ports (virtual WWPNs) to your existing host zones, but do
not remove the current SVC WWPNs already in the zones. Example 8-10 shows an
existing host zone to the Primary Port WWPNs of the SVC nodes.
Example 8-11 shows that we have added the primary host attach ports (virtual
WWPNs) to our example host zone to allow us to change the host without disrupting its
availability.
7. With the transitional zoning active in your fabrics, ensure that the host is using the new
NPIV ports for host I/O. Example 8-12 shows the before and after pathing for our host.
Notice that the select count now increases on the new paths and has stopped on the old
paths.
Total Devices : 1
Total Devices : 1
8. After all hosts have been rezoned and the pathing validated, change the system NPIV to
enabled mode by entering the command shown in Example 8-13.
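As a sketch of the command, again assuming I/O group io_grp0:
chiogrp -fctargetportmode enabled io_grp0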
Now NPIV has been enabled on the IBM Spectrum Virtualize system, and you have
confirmed the hosts are using the virtualized WWPNs for I/O. To complete the NPIV
implementation, the host zones can be amended to remove the old primary attach port
WWPNs. Example 8-14 shows our final zone with the host HBA and the SVC virtual WWPNs.
To create a host, click Add Host. If you want to create a Fibre Channel host, continue with
“Creating Fibre Channel hosts” on page 349. To create an iSCSI host, go to “Creating iSCSI
hosts” on page 351.
Creating Fibre Channel hosts
To create Fibre Channel hosts, complete the following steps:
1. Select Fibre Channel. The Fibre Channel host configuration window opens (Figure 8-4).
2. Enter a name for your host and click the Host Port (WWPN) menu to get a list of all
discovered WWPNs (Figure 8-5).
3. Select one or more WWPNs for your host. The IBM Spectrum Virtualize system should have the host port WWPNs available if the host is prepared as described in IBM Knowledge Center for host attachment. If they do not appear in the list, scan for new disks as required on the respective operating system and click the Rescan icon in the WWPN box. If they still do not appear, check the SAN zoning and repeat the scanning.
4. If you want to add more ports to your Host, click the Plus sign (+) to add all ports that
belong to the specific host.
5. If you are creating a Hewlett-Packard UNIX (HP-UX) or Target Port Group Support (TPGS)
host, click the Host Type menu (Figure 8-6). Select your host type. If your specific host
type is not listed, select generic.
After you add Fibre Channel hosts, go to Chapter 7, “Volumes” on page 251 to create
volumes and map them to the created hosts.
Creating iSCSI hosts
When creating an iSCSI attached host, consider the following points:
iSCSI IP addresses can fail over to the partner node in the I/O Group if a node fails. This
design reduces the need for multipathing support in the iSCSI host.
The IQN of the host is added to an IBM Spectrum Virtualize host object in the same way
that you add FC WWPNs.
Host objects can have both WWPNs and iSCSI qualified names (IQNs).
Standard iSCSI host connection procedures can be used to discover and configure IBM
Spectrum Virtualize as an iSCSI target.
IBM Spectrum Virtualize supports the Challenge Handshake Authentication Protocol
(CHAP) authentication methods for iSCSI.
Note that iqn.1986-03.com.ibm:2145.<cluster_name>.<node_name> is the IQN for an IBM Spectrum Virtualize node. Because the IQN contains the clustered system name and the node name, it is important not to change these names after iSCSI is deployed.
Each node can be given an iSCSI alias, as an alternative to the IQN.
2. Enter a host name, and the iSCSI initiator name into the iSCSI host IQN box. Click the
plus sign (+) if you want to add more initiator names to one host.
3. If you are connecting an HP-UX or TPGS host, click the Host type menu and then select
the correct host type. For our ESX host, we selected VVOL. However, generic is good if you
are not using VVOLs.
4. Click Add and then click Close to complete the host object definition.
5. Repeat these steps for every iSCSI host that you want to create. Figure 8-9 shows the
Hosts window after creating three Fibre Channel hosts and one iSCSI host.
Although the iSCSI host is now configured, to provide connectivity, the iSCSI Ethernet ports
must also be configured.
3. The window displays an Apply Changes prompt to apply any changes you have made
before continuing.
4. Lower on the configuration window, you can also configure Internet Storage Name Service
(iSNS) addresses and CHAP if you need these in your environment.
Note: The authentication of hosts is optional. By default, it is disabled. The user can choose to enable CHAP authentication, which involves sharing a CHAP secret between the cluster and the host. If the correct key is not provided by the host, the system does not allow it to perform I/O to volumes. Also, you can assign a CHAP secret to the cluster.
5. Click the Ethernet Ports tab to set the iSCSI IP address for each node (Figure 8-12).
Repeat this step for each port on each node that you want to use for iSCSI traffic.
6. After entering the IP address for a port, click Modify to enable the configuration. After the
changes are successfully applied, click Close.
7. You can see that iSCSI is enabled for host I/O on the required interfaces with the yes
under the Host Attach column. See Figure 8-13.
8. By default, iSCSI host connection is enabled after the IP address is set. To disable any interfaces that you do not want to be used for host connections, click Actions and select Modify iSCSI Hosts (Figure 8-14).
The IBM SAN Volume Controller is now configured and ready for iSCSI host use. Note the
initiator IQN names of your SVC nodes (Figure 8-11 on page 354) because you need them
when adding storage on your host. To create volumes and map them to a host, go to
Chapter 7, “Volumes” on page 251.
When a volume or a set of volumes is mapped to a host cluster, it is mapped to each individual host object that is part of the host cluster. Importantly, each of the volumes is mapped with the same SCSI ID to each host that is part of the host cluster with a single command.
Even though a host is part of a host cluster, volumes can still be assigned to an individual
host in a non-shared manner. A policy can be devised that can pre-assign a standard set of
SCSI IDs for volumes to be assigned to the host cluster, and another set of SCSI IDs to be
used for individual assignments to hosts.
Note: For example, SCSI IDs 0 - 100 can be used for individual host assignments, and SCSI IDs above 100 can be used for host cluster assignments. By employing such a policy, the volumes that you want to keep private are not shared, and the others can be. For example, the boot volume of each host can be kept private, while data and application volumes can be shared.
This section describes how to create a host cluster. It is assumed that individual hosts have
already been created as described in the previous section.
1. From the menu on the left, select Hosts → Host Clusters (Figure 8-15).
2. Click Create Host Cluster to open the wizard shown in Figure 8-16.
3. Enter a cluster name, select the individual hosts that you want in the host cluster object, and click Next.
5. After the task completes, click Close to return to the Host Cluster view, where you can see
the cluster that you just created, (Figure 8-18).
Note: Our cluster shows as Degraded because we have added an offline host. This status
will display as Online when all hosts in the cluster are available.
From the Host Clusters view, you have many options to manage and configure the host
cluster. These options are accessed by selecting a cluster and clicking Actions (Figure 8-19).
In the Hosts → Hosts view, three hosts have been created and volumes are already mapped to them in our example. If needed, you can now modify these hosts by using the following options.
1. Select a host and click Actions or right-click the host to see the available tasks
(Figure 8-21).
Modifying Mappings menu
To modify what volumes are mapped to a specific host, complete the following steps:
1. From the Actions menu, select Modify Volume Mappings (Figure 8-21). The window
shown in Figure 8-22 opens. At the upper left, you can confirm that the correct host is
targeted. The list shows all volumes currently mapped to the selected host. In our
example, one volume with SCSI ID 0 is mapped to the host ESX_62.
2. By selecting a listed volume, you can remove that volume mapping from the host. However, in our case we want to add an additional volume to our host. Continue by clicking Add Volume Mapping (Figure 8-23).
If you select a SCSI ID already in use for the host, you cannot proceed. In Figure 8-24, we
have selected SCSI ID 0. However, in the right column you can see SCSI ID 0 is already
allocated. By changing to SCSI ID 1, we are able to click Next.
4. A summary window opens showing the new mapping details (Figure 8-25). After
confirming that this is what you planned, click Map Volumes and then click Close.
Note: The SCSI ID of the volume can be changed only before it is mapped to a host.
Changing it after is not possible unless the volume is unmapped again.
2. You are prompted to confirm the number of mappings to be removed. To confirm your
action, enter the number of volumes to be removed and click Unmap (Figure 8-27). In this
example, we remove two volume mappings.
Unmapping: If you click Unmap, all access for this host to volumes that are controlled by the SVC system is removed. Ensure that you run the required procedures on your host operating system (OS), such as unmounting the file system, taking the disk offline, or disabling the volume group, before removing the volume mappings from your host object in the IBM Spectrum Virtualize GUI.
3. The changes are applied to the system. Click Close. Figure 8-28 shows that the selected
host no longer has any host mappings.
Figure 8-28 All mappings for host ESX_62 have been removed
2. The Duplicate Mappings window opens. Select a listed target host object to which you
want to map all the existing source host volumes and click Duplicate (Figure 8-30).
Note: You can duplicate mappings only to a host that has no volumes mapped.
3. After the task completion is displayed, verify the new mappings on the new host object.
From the Hosts menu (Figure 8-29 on page 366), right-click the target host and select
Properties.
4. Click the Mapped Volumes tab and verify that the required volumes have been mapped to
the new host (Figure 8-31).
2. The import mappings window appears. Select the appropriate source host that you want
to import the volume mappings from. In Figure 8-33, we select the host ESX_62 and click
Import.
3. After the task completes, verify that the mappings are as expected from the Hosts menu
(Figure 8-29 on page 366), right-click the target host, and select Properties. Then, click
the Mapped Volumes tab and verify that the required volumes have been mapped to the
new host (Figure 8-31 on page 367).
Renaming a host
To rename a host, complete the following steps:
1. Select the host, and then right-click and select Rename (Figure 8-34).
2. Enter a new name and click Rename (Figure 8-35). If you click Reset, the changes are
reset to the original host name.
2. Confirm that the window displays the correct list of hosts that you want to remove by
entering the number of hosts to remove, and click Delete (Figure 8-37).
3. If the host that you are removing has volumes mapped to it, force the removal by selecting
the check box in the lower part of the window. By selecting this check box, the host is
removed and it no longer has access to any volumes on this system.
4. After the task is completed, click Close.
Host properties
To view a host object’s properties, complete the following steps:
1. From the IBM Spectrum Virtualize GUI Hosts pane, select a host and right-click it or click
Actions → Properties (Figure 8-38).
3. Click Edit to change the host properties (Figure 8-41).
In the window shown in Figure 8-41, you can modify the following properties:
– Host Name. Change the host name.
– Host Type. If you are going to attach HP-UX, OpenVMS, or TPGS hosts, change this
setting.
– I/O Group. Host has access to volumes mapped from selected I/O Groups.
– iSCSI CHAP Secret. Enter or change the iSCSI CHAP secret if this host is using
iSCSI.
4. When finished making changes, click Save to apply them. The editing window closes.
The Mapped Volumes tab shows a summary of which volumes are currently mapped with
which SCSI ID and UID to this host (Figure 8-42). The Show Details slider does not show
any additional information for this list.
This window offers the option to Add or Delete Port on the host, as described in 8.4.4,
“Adding and deleting host ports” on page 374.
5. Click Close to close the Host Details window.
2. A list of all the hosts is displayed. The function icons indicate whether the host is Fibre
Channel, iSCSI, or SAS attached. The port details of the selected host are shown to the
right. You can add a new host object by clicking Add Host. If you click Actions
(Figure 8-45), the tasks that are described in “Modifying Mappings menu” on page 361
can be selected.
2. Click the drop-down menu to display a list of all discovered Fibre Channel WWPNs. If the
WWPN of your host is not available in the menu, enter it manually or check the SAN
zoning to ensure that connectivity has been configured, then rescan storage from the host.
3. Select the WWPN that you want to add and click Add Port to List (Figure 8-48).
The port is unverified (Figure 8-49) because it is not logged on to the SVC. The first time
that it logs on, its state is automatically changed to online, and the mapping is applied to
this port.
5. To remove a port from the adding list, click the red X next to the port. In this example, we
delete the manually added FC port so only the detected port remains.
6. Click Add Ports to Host to apply the changes and click Close.
3. Click Add Ports to Host to apply the changes to the system and click Close.
You can also press the Ctrl key while you select several host ports to delete (Figure 8-53).
2. Click Delete and confirm the number of host ports that you want to remove by entering
that number in the Verify field (Figure 8-54).
This window lists all hosts and volumes. This example shows that the host Win2012_FC has
two mapped volumes, and their associated SCSI ID, Volume Name, and Volume Unique
Identifier (UID). If you have more than one caching I/O group, you also see which volume is
handled by which I/O group.
If you select one line and click Actions (Figure 8-56), the following tasks are available:
Unmap Volumes
Properties (Host)
Properties (Volume)
Unmapping a volume
This action removes the mappings for all selected entries. From the Actions menu shown in Figure 8-56, select one or more lines (while holding the Ctrl key) and click Unmap Volumes. Confirm how many volumes are to be unmapped by entering that number in the Verify field (Figure 8-57), and then click Unmap.
Properties (Host)
Selecting an entry and clicking Properties (Host), as shown in Figure 8-56 on page 380,
opens the Host Properties window. The contents of this window are described in “Host
properties” on page 371.
Properties (Volume)
Selecting an entry and clicking Properties (Volume), as shown in Figure 8-56 on page 380,
opens the Volume Properties view. The contents of this window are described in Chapter 7,
“Volumes” on page 251.
3. Run the mkhost command with the required parameters, as shown in Example 8-17.
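As a hedged sketch of the general mkhost syntax for FC and iSCSI hosts (the WWPN and IQN values are taken from later examples in this chapter and are used here only for illustration):
mkhost -name FC_RHEL_HOST -fcwwpn 210000E08B054CAA
mkhost -name iSCSI_RHEL_HOST -iscsiname iqn.1994-05.com.redhat:e6dd277b58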
2. The iSCSI host can be verified by using the lshost command, as shown in Example 8-19.
Note: When the host is initially configured, the default authentication method is set to no
authentication and no CHAP secret is set. To set a CHAP secret for authenticating the
iSCSI host with the SVC system, use the chhost command with the chapsecret parameter.
If you must display a CHAP secret for a defined server, use the lsiscsiauth command.
The lsiscsiauth command lists the CHAP secret that is configured for authenticating an
entity to the SVC system.
Note: FC hosts and iSCSI hosts are handled in the same way operationally after they are
created.
2. The volume mapping can then be checked by issuing the lshostvdiskmap command
against that particular host, as shown in Example 8-21.
Note: Before unmapping a volume from a host on SVC, ensure that the host-side action is
completed on that volume by using the respective host operating system platform
commands, such as unmounting the file system or removing the volume or volume group.
Otherwise, it could potentially result in data corruption.
Renaming a host
To rename an existing host definition, issue the chhost command with the -name parameter, as shown in Example 8-25.
In Example 8-25, the host RHEL_HOST has now been renamed to FC_RHEL_HOST.
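As a sketch of the command form for this rename:
chhost -name FC_RHEL_HOST RHEL_HOST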
Removing a host
To remove a host from the SVC, use the rmhost command, as shown in Example 8-26.
Note: Before removing a host from SVC, ensure that all of the volumes are unmapped
from that host, as described in Example 8-24.
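As a sketch of the command form (the host name is illustrative):
rmhost FC_RHEL_HOST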
Host properties
To get more details about a particular host, the lshost command can be used with the
hostname or host id as a parameter, as shown in Example 8-27.
If the host is connected through SAN with FC, and if the WWPN is zoned to the SVC system,
issue the lshbaportcandidate command to compare with the information that is available
from the server administrator, as shown in Example 8-28.
Use host or SAN switch utilities to verify whether the WWPN matches the information for the
new WWPN. If the WWPN matches, use the addhostport command to add the port to the
host, as shown in Example 8-29.
Example 8-29 Adding the newly discovered WWPN to the host definition
IBM_2145:ITSO:superuser>addhostport -hbawwpn 210000E08B054CAA FC_RHEL_HOST
If the new HBA is not connected or zoned, the lshbaportcandidate command does not display your WWPN. In this case, you can manually enter the WWPN of your HBA or HBAs, and use the -force flag to add the port to the host, as shown in Example 8-30.
Example 8-30 Adding a WWPN to the host definition using the -force option
IBM_2145:ITSO:superuser>addhostport -hbawwpn 210000E08B054CAA -force FC_RHEL_HOST
This command forces the addition of the WWPN that is named 210000E08B054CAA to the host
called FC_RHEL_HOST.
The host port count can be verified by running the lshost command again. The host
FC_RHEL_HOST has an updated port count of 3, as shown in Example 8-31.
If the host uses iSCSI as a connection method, the new iSCSI IQN ID should be used to add the port. Unlike FC-attached hosts, with iSCSI, available candidate ports cannot be checked. After getting the other iSCSI IQN, issue the addhostport command, as shown in Example 8-32.
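As a sketch of the command form, using the IQN and host name that appear later in this section:
addhostport -iscsiname iqn.1994-05.com.redhat:e6dd277b58 iSCSI_RHEL_HOST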
Before removing the WWPN, ensure that it is the correct WWPN by issuing the lshost
command, as shown in Example 8-33.
When the WWPN or iSCSI IQN that needs to be deleted is known, use the rmhostport
command to delete a host port, as shown in Example 8-34.
Use the command to remove the iSCSI IQN, as shown in Example 8-35.
This command removes the WWPN of 210000E08B054CAA from the FC_RHEL_HOST host and the
iSCSI IQN iqn.1994-05.com.redhat:e6dd277b58 from the host iSCSI_RHEL_HOST.
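As a sketch of the rmhostport forms described here:
rmhostport -hbawwpn 210000E08B054CAA FC_RHEL_HOST
rmhostport -iscsiname iqn.1994-05.com.redhat:e6dd277b58 iSCSI_RHEL_HOST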
Note: Multiple ports can be removed at one time by using the colon (:) as a separator between the port names, as shown in the following example:
rmhostport -hbawwpn 210000E08B054CAA:210000E08B892BCD Angola
Note: While creating the host cluster, if you want it to inherit the volumes that are mapped to a particular host, use the -seedfromhost flag option. Any volume mapping that does not need to be shared can be kept private by using the -ignoreseedvolume flag option.
In Example 8-37, the hosts ITSO_HOST_1 and ITSO_HOST_2 are added as part of host cluster
ITSO_HOST_CLUSTER.
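A hedged sketch of how the host cluster might be created and populated follows (the -seedfromhost option that is mentioned in the note above is omitted here):
mkhostcluster -name ITSO_HOST_CLUSTER
addhostclustermember -host ITSO_HOST_1 ITSO_HOST_CLUSTER
addhostclustermember -host ITSO_HOST_2 ITSO_HOST_CLUSTER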
Note: When a volume is mapped to a host cluster, that volume gets mapped to all the
members of the host cluster with the same SCSI_ID.
Listing the volumes that are mapped to a host cluster
To list the volumes that are mapped to a host cluster, the lshostclustervolumemap command
can be used, as shown in Example 8-40.
Example 8-40 Listing volumes that are mapped to a host cluster by using lshostclustervolumemap
IBM_2145:ITSO:superuser>lshostclustervolumemap ITSO_HOST_CLUSTER
id name SCSI_id volume_id volume_name volume_UID IO_group_id
IO_group_name
0 ITSO_HOST_CLUSTER 0 86 ITSO_VD_1 60050768018786C188000000000001E1 0 io_grp0
0 ITSO_HOST_CLUSTER 1 87 ITSO_VD_2 60050768018786C188000000000001E2 0 io_grp0
0 ITSO_HOST_CLUSTER 2 88 ITSO_VD_3 60050768018786C188000000000001E3 0 io_grp0
Note: The lshostvdiskmap command can be issued against each host that is part of the host cluster to verify that the mapping type is shared for the shared volumes and private for the non-shared volumes.
In Example 8-41, volume ITSO_VD_1 has been unmapped from the host cluster
ITSO_HOST_CLUSTER. The current volume mapping can be checked to ensure that it is
unmapped, as shown in Example 8-40.
Note: To specify the host or hosts that acquire private mappings from the volume that is
being removed from the host cluster, specify the -makeprivate flag.
In Example 8-42, the host ITSO_HOST_2 has been removed as a member from the host
cluster ITSO_HOST_CLUSTER, along with the associated volume mappings due to the
-removemappings flag being specified.
The -removemappings flag also causes the system to remove any host mappings to volumes
that are shared. The mappings are deleted before the host cluster is deleted.
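As a sketch of these two commands:
rmhostclustermember -host ITSO_HOST_2 -removemappings ITSO_HOST_CLUSTER
rmhostcluster -removemappings ITSO_HOST_CLUSTER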
Chapter 9. Storage migration
Storage migration uses the volume mirroring functionality to allow reads and writes during the migration, minimizing disruption and downtime. After the migration is complete, the existing system can be retired. SVC supports migration through Fibre Channel and Internet Small Computer Systems Interface (iSCSI) connections. Storage migration can be used to migrate data from other storage systems to the IBM SVC.
Note: This chapter does not cover migration outside of the storage migration wizard. To
migrate data outside of the wizard, you must use Import. For information about the Import
action, see Chapter 6, “Storage pools” on page 197.
Attention: The system does not require a license for its own control and expansion
enclosures. However, a license is required for each enclosure of any external systems that
are being virtualized. Data can be migrated from existing storage systems to your system
by using the external virtualization function within 45 days of purchase of the system
without purchase of a license. After 45 days, any ongoing use of the external virtualization
function requires a license for each enclosure in each external system.
Set the license temporarily during the migration process to prevent messages that indicate
that you are in violation of the license agreement from being sent. When the migration is
complete, or after 45 days, either reset the license to its original limit or purchase a new
license.
9.1.1 Interoperability and compatibility
Interoperability is an important consideration when a new storage system is set up in an
environment that contains existing storage infrastructure. Before attaching any external
storage systems to the SVC, see the IBM System Storage Interoperation Center (SSIC):
http://www.ibm.com/systems/support/storage/ssic/interoperability.wss
Select IBM System Storage SAN Volume Controller in Storage Family, then SVC Storage
Controller Support in Storage Model. You can then refine your search by selecting the
external storage controller that you want to use in the Storage Controller menu.
The matrix results give you indications about the external storage that you want to attach to
the SVC, such as minimum firmware level or support for disks greater than 2 TB.
9.1.2 Prerequisites
Before the storage migration wizard can be started, the external storage system must be
visible to the SVC. You also need to confirm that the restrictions and prerequisites are met.
Administrators can migrate data from the external storage system to the system by using either iSCSI connections, Fibre Channel, or Fibre Channel over Ethernet connections. For more details about how to manage external storage, see Chapter 6, “Storage pools” on page 197.
Note: Test the setting changes on a non-production server. The LUN has a different unique
identifier after it is imported. It appears as a mirrored volume to the VMware server.
If the external storage system is not detected, the warning message shown in Figure 9-1 is
displayed when you attempt to start the migration wizard. Click Close and correct the problem
before you try to start the migration wizard again.
Attention: The risk of losing data when the storage migration wizard is used correctly is
low. However, it is prudent to avoid potential data loss by creating a backup of all the data
that is stored on the hosts, the existing storage systems, and the SVC before the wizard is
used.
Complete the following steps to complete the migration by using the storage migration wizard:
1. Navigate to Pools → System Migration, as shown in Figure 9-2. The System Migration
pane provides access to the storage migration wizard and displays information about the
migration progress.
2. Click Start New Migration to begin the storage migration wizard, as shown in Figure 9-3.
Note: Starting a new migration adds a volume to be migrated to the list that is displayed in the pane. After a volume is migrated, it remains in the list until you “finalize” the migration.
3. If both Fibre Channel and iSCSI external systems are detected, a dialog is shown asking
you which protocol should be used. Select the type of attachment between the SVC and
the external system from which you want to migrate volumes and click Next. If only one
type of attachment is detected, this dialog is not displayed.
5. Prepare the environment for migration by following the on-screen instructions that are shown in Figure 9-5.
Before you migrate storage, record the hosts and their WWPNs or IQNs for each volume
that is being migrated, and the SCSI LUN when mapped to the SVC.
Table 9-1 shows an example of a table that is used to capture information that relates to
the external storage system LUs.
Note: Make sure to record the SCSI ID of the LUs to which the host is originally
mapped. Some operating systems do not support changing the SCSI ID during the
migration.
Click Next and wait for the system to discover external devices.
7. The next window shows all of the MDisks that were found. If the MDisks to be migrated are
not in the list, check your zoning or IP configuration, as applicable, and your LUN
mappings. Repeat the previous step to trigger the discovery procedure again.
Select the MDisks that you want to migrate, as shown in Figure 9-7. In this example, only
mdisk7 and mdisk1 are being migrated. Detailed information about an MDisk is visible by
double-clicking it. To select multiple elements from the table, use the standard
Shift+left-click or Ctrl+left-click actions. Optionally, you can export the discovered MDisks
list to a CSV file, for further use, by clicking Export to CSV.
8. Click Next and wait for the MDisks to be imported. During this task, the system creates a
new storage pool called MigrationPool_XXXX and adds the imported MDisks to the
storage pool as image-mode volumes.
9. The next window lists all of the hosts that are configured on the system and enables you to
configure new hosts. This step is optional and can be bypassed by clicking Next. In this
example, the host ITSO_Host is already configured, as shown in Figure 9-8. If no host is
selected, you will be able to create a host after the migration completes and map the
imported volumes to it.
10.If the host that needs access to the migrated data is not configured, select Add Host to
begin the Add Host wizard. Enter the host connection type, name, and connection details.
Optionally, click Advanced to modify the host type and I/O group assignment. Figure 9-9
shows the Add Host wizard with the details completed.
For more information about the Add Host wizard, see Chapter 8, “Hosts” on page 337.
Figure 9-9 If not already defined, you can create a host during the migration process
11.Click Add. The host is created and is now listed in the Configure Hosts window, as shown
in Figure 9-8 on page 400. Click Next to proceed.
12.The next window lists the new volumes and enables you to map them to hosts. The
volumes are listed with names that were automatically assigned by the system. The
names can be changed to reflect something more meaningful to the user by selecting the
volume and clicking Rename in the Actions menu.
You can manually assign a SCSI ID to the LUNs you are mapping. This technique is
particularly useful when the host needs to have the same LUN ID for a LUN before and
after it is migrated. To assign the SCSI ID manually, select the Self Assign option and
follow the instructions as shown in Figure 9-11.
When your LUN mapping is ready, click Next. A new window is displayed with a summary
of the new and existing mappings, as shown in Figure 9-12.
The migration starts. This task continues running in the background and uses the volume
mirroring function to place a generic copy of the image-mode volumes in the selected
storage pool. For more information about Volume Mirroring, see Chapter 7, “Volumes” on
page 251.
Note: With volume mirroring, the system creates two copies (Copy0 and Copy1) of a
volume. Typically, Copy0 is located in the Migration Pool, and Copy1 is created in the
target pool of the migration. When the host generates a write I/O on the volume, data is
written at the same time on both copies. Read I/Os are performed on the preferred
copy. In the background, a mirror synchronization of the two copies is performed and
runs until the two copies are synchronized. The speed of this background
synchronization can be changed in the volume properties.
See Chapter 7, “Volumes” on page 251 for more information about volume mirroring
synchronization rate.
15.Click Finish to end the storage migration wizard, as shown in Figure 9-14.
16.The end of the wizard is not the end of the migration task. You can find the progress of the
migration in the Storage Migration window, as shown in Figure 9-15. The target storage
pool and the progress of the volume copy synchronization is also displayed there.
Figure 9-15 Ongoing Migrations are listed in the Storage Migration window
17.When the migration completes, select all of the migrations that you want to finalize,
right-click the selection, and click Finalize, as shown in Figure 9-16.
18.When finalized, the image-mode copies of the volumes are deleted and the associated
MDisks are removed from the migration pool. The status of those MDisks returns to
unmanaged. You can verify the status of the MDisks by navigating to Pools → External
Storage, as shown in Figure 9-18. In the example, mdisk7 has been migrated and finalized, so it appears as unmanaged in the External Storage window. Mdisk1 is still being migrated and has not been finalized, so it appears as image and still belongs to the migration pool.
All of the steps that are described in the storage migration wizard can be performed manually, but it is generally best to use the wizard as a guide.
Note: For a “real-life” demonstration of the storage migration capabilities offered with IBM
Spectrum Virtualize, see the following page:
http://ibm.biz/VirtualizeDataMigration
The demonstration includes three different step-by-step scenarios showing the integration
of an SVC cluster into an existing environment with one Microsoft Windows Server (image
mode), one IBM AIX server (LVM mirroring), and one VMware ESXi server (storage
vMotion).
Chapter 10. Advanced features for storage efficiency
All of these issues deal with data placement and relocation capabilities or data volume
reduction. Most of these challenges can be managed by having spare resources available,
moving data, and by using data mobility tools or operating systems features (such as host
level mirroring) to optimize storage configurations.
However, all of these corrective actions are expensive in terms of hardware resources, labor,
and service availability. The ability to relocate data among the physical storage resources dynamically, or to effectively reduce the amount of data, transparently to the attached host systems, is becoming increasingly important.
10.2 EasyTier
In today’s storage market, Flash drives and Flash arrays are emerging as an attractive
alternative to hard disk drives (HDDs). Because of their low response times, high throughput,
and IOPS-energy-efficient characteristics, Flash drives and Flash arrays have the potential to
allow your storage infrastructure to achieve significant savings in operational costs.
However, the current acquisition cost per gibibyte (GiB) for Flash drives or Flash arrays is higher than for HDDs. Flash drive and Flash array performance depends on workload characteristics. Therefore, they should be used together with HDDs for an optimal cost-to-performance ratio.
Choosing the correct mix of drives and the correct data placement is critical to achieve
optimal performance at low cost. Maximum value can be derived by placing “hot” data with
high input/output (I/O) density and low response time requirements on Flash drives or Flash
arrays, and targeting HDDs for “cooler” data that is accessed more sequentially and at lower
rates.
EasyTier automates the placement of data among different storage tiers and it can be
enabled for internal and external storage. This IBM Spectrum Virtualize feature boosts your
storage infrastructure performance to achieve optimal performance through a software,
server, and storage solution.
Additionally, the no-charge feature called storage pool balancing, introduced in the IBM
Spectrum Virtualize V7.3, automatically moves extents within the same storage tier, from
overloaded to less loaded managed disks (MDisks). Storage pool balancing ensures that your
data is optimally placed among all MDisks within a storage pool.
This function includes the ability to automatically and non-disruptively relocate data (at the
extent level) from one tier to another tier (or even within the same tier), in either direction. This
feature helps achieve the best available storage performance for your workload in your
environment. EasyTier reduces the I/O latency for hot spots, but it does not replace storage
cache.
Both EasyTier and storage cache solve a similar access latency workload problem. However,
these two methods weigh differently in the algorithmic construction that is based on locality of
reference, recency, and frequency. Because EasyTier monitors I/O performance from the
device end (after cache), it can pick up the performance issues that cache cannot solve, and
complement the overall storage system performance.
In general, a storage environment’s I/O is monitored at the volume level, and the entire volume is always placed inside one appropriate storage tier.
moving part of the underlying volume to an appropriate storage tier, and reacting to workload
changes are too complex for manual operation. This area is where the EasyTier feature can
be used.
Figure 10-2 shows the basic EasyTier principle of operation.
You can enable EasyTier on a volume basis. It monitors the I/O activity and latency of the
extents on all EasyTier enabled volumes over a 24-hour period. Based on the performance
log, EasyTier creates an extent migration plan and dynamically moves (promotes) high
activity or hot extents to a higher disk tier within the same storage pool.
It also moves (demotes) extents whose activity dropped off, or cooled, from a higher disk tier
MDisk back to a lower tier MDisk. When EasyTier runs in a storage pool rebalance mode, it
moves extents from busy MDisks to less busy MDisks of the same type.
As is the case for HDDs, the Flash drive RAID array format helps to protect against individual
Flash drive failures. Depending on your requirements, you can achieve more high availability
(HA) protection, beyond the RAID level, by using volume mirroring.
The internal storage configuration of flash arrays can differ depending on an array vendor.
Regardless of the methods that are used to configure flash-based storage, the flash system
maps a volume to a host, in this case to the SVC. From the SVC perspective, a volume that is
presented from flash storage is also seen as a normal managed disk.
Starting with SVC 2145-DH8 nodes and IBM Storwize V7.3, up to two expansion drawers can
be connected to one SVC I/O group. Each drawer can have up to 24 SSDs, and only SSD drives are supported. The SSDs are then gathered together to form RAID arrays in the same way that RAID arrays are formed in the IBM Storwize systems.
After a Flash drive array is created, it appears as an MDisk with a tier of tier0_flash (a read-intensive (RI) Flash array appears with a tier of tier1_flash), which differs from MDisks that are presented from external storage systems. Because IBM Spectrum Virtualize cannot know what type of physical drives the presented MDisk is formed from, the default MDisk tier that the SVC assigns to each external MDisk is tier_enterprise. It is up to the user or administrator to change the tier of each external MDisk to tier0_flash, tier1_flash, tier_enterprise, or tier_nearline as appropriate.
To change a tier of an MDisk in the CLI, use the chmdisk command, as shown in
Example 10-1.
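As a hedged sketch (the content of Example 10-1 is not reproduced here, and the MDisk name is hypothetical), assigning the read-intensive flash tier to an external MDisk might look like this:
IBM_2145:ITSO_SVC_DH8:superuser>chmdisk -tier tier1_flash mdisk2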
It is also possible to change the MDisk tier from the graphical user interface (GUI) but this
technique can only be used for external MDisks. To change the tier, complete these steps:
1. Click Pools → External Storage and click the expand sign (>) next to the controller that
owns the MDisks for which you want to change the tier.
2. Right-click the target MDisk and select Modify Tier (Figure 10-3).
3. A new window opens with options to change the tier (Figure 10-4).
The tier change happens online and has no effect on host or volume availability.
The SVC does not automatically detect the type of MDisks, except for MDisks that are formed
of Flash drives from integrated expansion drawers. Instead, all external MDisks are initially
put into the enterprise tier by default. Then, the administrator must manually change the tier
of MDisks and add them to storage pools. Depending on what type of disks are gathered to
form a storage pool, the following types of storage pools are distinguished:
Single-tier
Multitier
Single-tier storage pools
Figure 10-6 shows a scenario in which a single storage pool is populated with MDisks that are
presented by an external storage controller. In this solution, the striped volumes can be
measured by EasyTier, and can benefit from storage pool balancing mode, which moves
extents between MDisks of the same type.
MDisks that are used in a single-tier storage pool should have the same hardware
characteristics, for example, the same RAID type, RAID array size, disk type, disk RPM, and
controller performance characteristics.
Although this example shows RAID 5 arrays, other RAID types can be used as well.
Adding Flash drives to the pool also means that more space is now available for new volumes
or volume expansion.
Note: Image mode and sequential volumes are not candidates for EasyTier automatic data
placement because all extents for those types of volumes must be on one specific MDisk
and cannot be moved.
The EasyTier setting can be changed on a storage pool and volume level. Depending on the
EasyTier setting and the number of tiers in the storage pool, EasyTier services might function
in a different way.
Table 10-1 shows possible combinations of EasyTier settings.
Table notes:
1. If the volume copy is in image or sequential mode or is being migrated, the volume copy
EasyTier status is measured rather than active.
2. When the volume copy status is inactive, no EasyTier functions are enabled for that
volume copy.
3. When the volume copy status is measured, the EasyTier function collects usage
statistics for the volume, but automatic data placement is not active.
4. When the volume copy status is balanced, the EasyTier function enables
performance-based pool balancing for that volume copy.
5. When the volume copy status is active, the EasyTier function operates in automatic
data placement mode for that volume.
6. The default EasyTier setting for a storage pool is Auto, and the default EasyTier setting
for a volume copy is On. Therefore, EasyTier functions, except pool performance
balancing, are disabled for storage pools with a single tier. Automatic data placement
mode is enabled by default for all striped volume copies in a storage pool with two or
more tiers.
The storage pool tier configurations that these settings apply to include the single-tier pools (tier0_flash, tier1_flash, tier_enterprise, or tier_nearline) and the two-tier combinations: tier0_flash + tier1_flash, tier0_flash + tier_enterprise, tier0_flash + tier_nearline, tier1_flash + tier_enterprise, tier1_flash + tier_nearline, and tier_enterprise + tier_nearline.
Read Intensive (RI) flash drive support for IBM Spectrum Virtualize/Storwize systems was introduced with V7.7 and then enhanced with V7.8, which added, among other things, EasyTier support for RI MDisks.
Even though EasyTier remains a three-tier storage architecture, V7.8 added a new tier specifically for RI MDisks. From a user perspective, there are now four tiers:
T0 or tier0_flash that represents the enterprise flash technology
T1 or tier1_flash that represents the RI flash drive technology
T2 or tier_enterprise that represents the enterprise HDD technology
T3 or tier_nearline that represents the nearline HDD technology
These user tiers are mapped to EasyTier tiers depending on the pool configuration. Figure 10-8 shows the possible combinations for the pool configuration with four user tiers (the configurations that contain the RI user tier are highlighted in orange).
The table columns represent all of the possible pool configurations, and the rows report to which EasyTier tier each user tier is mapped. For example, consider a pool with all of the possible tiers configured, which corresponds to the T0+T1+T2+T3 configuration in the table. With this configuration, T1 and T2 are mapped to the same EasyTier tier (tier 2). Note that the tier1_flash user tier is only ever mapped to EasyTier tier 1 or tier 2.
For more information about planning and configuration considerations or best practices, see
IBM System Storage SAN Volume Controller and Storwize V7000 Best Practices and
Performance Guidelines, SG24-7521, and Implementing IBM Easy Tier with IBM Real-time
Compression, TIPS1072.
For more information about RI Flash drives, see Read Intensive Flash Drives, REDP-5380.
When enabled, EasyTier performs the following actions between the three tiers presented in
Figure 10-9 on page 421:
Promote
Moves the relevant hot extents to a higher performing tier.
Swap
Exchanges a cold extent in an upper tier with a hot extent in a lower tier.
Warm demote
– Prevents performance overload of a tier by demoting a warm extent to the lower tier.
– Triggered when bandwidth or IOPS exceeds predefined threshold.
Demote or cold demote
Coldest data is moved to a lower tier. Only supported between HDD tiers.
Expanded cold demote
Demotes appropriate sequential workloads to the lowest tier to better use nearline tier
bandwidth.
Storage pool balancing
– Redistribute extents within a tier to balance usage across MDisks for maximum
performance.
– Moves hot extents from high usage MDisks to low usage MDisks.
– Exchanges extents between high usage MDisks and low usage MDisks.
EasyTier attempts to migrate the most active volume extents up to the flash tier first. When a new migration plan is generated, the previous migration plan and any queued extents that are not yet relocated are abandoned.
Note: Extent promotion / demotion only occurs between adjacent tiers. In a three-tier
storage pool, EasyTier does not move extents from a flash tier directly to nearline tier or
vice versa without moving to the enterprise tier first.
EasyTier extent migration types are presented in Figure 10-9.
Dynamic data movement is not apparent to the host server and application users of the data,
other than providing improved performance. Extents are automatically migrated, as explained
in “Implementation rules” on page 423. The statistic summary file is also created in this mode.
This file can be offloaded for input to the advisor tool. The tool produces a report on the
extents that are moved to a higher tier and a prediction of performance improvement that can
be gained if more higher tier disks are available.
Options: The EasyTier function can be turned on or off at the storage pool level and at the
volume level.
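As a hedged sketch (the pool and volume names are hypothetical), the setting can be changed with the chmdiskgrp and chvdisk commands:
IBM_2145:ITSO_SVC_DH8:superuser>chmdiskgrp -easytier auto Pool0_Site1
IBM_2145:ITSO_SVC_DH8:superuser>chvdisk -easytier on volume01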
The process automatically balances existing data when new MDisks are added into an existing pool, even if the pool contains only a single type of drive. However, the process does not migrate extents from existing MDisks to achieve an even extent distribution among all MDisks, old and new, in the storage pool. The EasyTier rebalancing process within a tier migration plan is based on performance, not on the capacity of the underlying MDisks.
Note: Storage pool balancing can be used to balance extents when mixing different size
disks of the same performance tier. For example, when adding larger capacity drives to a
pool with smaller capacity drives of the same class, storage pool balancing redistributes
the extents to take advantage of the additional performance of the new MDisks.
Implementation rules
Remember the following implementation and operational rules when you use the IBM System
Storage EasyTier function on the SVC:
EasyTier automatic data placement is not supported on image mode or sequential
volumes. I/O monitoring for such volumes is supported, but you cannot migrate extents on
these volumes unless you convert image or sequential volume copies to striped volumes.
Automatic data placement and extent I/O activity monitors are supported on each copy of
a mirrored volume. EasyTier works with each copy independently of the other copy.
If possible, the SVC creates volumes or expands volumes by using extents from MDisks
from the tier_enterprise tier. However, it uses extents from MDisks from the tier0_flash
or tier1_flash tiers, if necessary.
When a volume is migrated out of a storage pool that is managed with EasyTier, EasyTier
automatic data placement mode is no longer active on that volume. Automatic data
placement is also turned off while a volume is being migrated, even when it is between pools
that both have EasyTier automatic data placement enabled. Automatic data placement for the
volume is reenabled when the migration is complete.
Limitations
When you use EasyTier on the SVC, keep in mind the following limitations:
Removing an MDisk by using the -force parameter
When an MDisk is deleted from a storage pool with the -force parameter, extents in use
are migrated to MDisks in the same tier as the MDisk that is being removed, if possible. If
insufficient extents exist in that tier, extents from the other tier are used.
Migrating extents
When EasyTier automatic data placement is enabled for a volume, you cannot use the
svctask migrateexts CLI command on that volume.
Migrating a volume to another storage pool
When the SVC migrates a volume to a new storage pool, EasyTier automatic data
placement between the two tiers is temporarily suspended. After the volume is migrated to
its new storage pool, EasyTier automatic data placement between the tier0_flash and the
tier_enterprise resumes for the moved volume, if appropriate.
When the SVC migrates a volume from one storage pool to another, it attempts to migrate
each extent to an extent in the new storage pool from the same tier as the original extent.
In several cases, such as where a target tier is unavailable, another tier is used. For
example, the tier0_flash tier might be unavailable in the new storage pool.
Migrating a volume to an image mode
EasyTier automatic data placement does not support image mode. When a volume with
active EasyTier automatic data placement mode is migrated to an image mode, EasyTier
automatic data placement mode is no longer active on that volume.
Image mode and sequential volumes cannot be candidates for automatic data placement.
However, EasyTier supports evaluation mode for image mode volumes.
copy_id 0
status online
sync yes
auto_delete no
primary yes
mdisk_grp_id 0
mdisk_grp_name Pool0_Site1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier off
easy_tier_status measured
tier tier0_flash
tier_capacity 0.00MB
tier tier1_flash
tier_capacity 0.00MB
tier tier_enterprise
tier_capacity 1.00GB
tier tier_nearline
tier_capacity 0.00MB
compressed_copy no
uncompressed_used_capacity 1.00GB
parent_mdisk_grp_id 0
parent_mdisk_grp_name Pool0_Site1
encrypt no
copy_id 0
status online
sync yes
auto_delete no
primary yes
mdisk_grp_id 0
mdisk_grp_name Pool0_Site1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status balanced
tier tier0_flash
tier_capacity 0.00MB
tier tier1_flash
tier_capacity 0.00MB
tier tier_enterprise
tier_capacity 1.00GB
tier tier_nearline
tier_capacity 0.00MB
compressed_copy no
uncompressed_used_capacity 1.00GB
parent_mdisk_grp_id 0
parent_mdisk_grp_name Pool0_Site1
encrypt no
used_capacity_after_reduction 0.00MB
deduplication_capacity_saving 0.00MB
reclaimable_capacity 0.00MB
Tuning EasyTier
It is also possible to change more advanced parameters of EasyTier. Use these parameters
with caution because changing the default values can affect system performance.
EasyTier acceleration
EasyTier acceleration is a system-wide setting that is disabled by default. Turning on this
setting makes EasyTier move extents up to four times faster than the default setting. In
accelerate mode, EasyTier can move up to 48 GiB per 5 minutes, whereas in normal mode it
moves up to 12 GiB. Enabling EasyTier acceleration is advised only during periods of low
system activity. The following are the two most probable use cases for acceleration:
When adding new capacity to the pool, accelerating EasyTier can quickly spread existing
volumes onto the new MDisks.
Migrating the volumes between the storage pools when the target storage pool has more
tiers than the source storage pool, so EasyTier can quickly promote or demote extents in
the target pool.
This setting can be changed online, without any effect on host or data availability. To turn on
or off EasyTier acceleration mode, use the chsystem command, as shown in Example 10-3.
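As a hedged sketch (the content of Example 10-3 is not reproduced here, and the parameter name should be verified with the CLI help for your code level), the command might look like this:
IBM_2145:ITSO_SVC_DH8:superuser>chsystem -easytieracceleration on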
vdisk_protection_time 15
vdisk_protection_enabled no
product_name IBM SAN Volume Controller
odx off
max_replication_delay 0
partnership_exclusion_threshold 315
gen1_compatibility_mode_enabled
ibm_customer
ibm_component
ibm_country
tier0_flash_compressed_data_used 0.00MB
tier1_flash_compressed_data_used 0.00MB
tier_enterprise_compressed_data_used 0.00MB
tier_nearline_compressed_data_used 0.00MB
total_reclaimable_capacity 0.00MB
unmap on
used_capacity_before_reduction 0.00MB
used_capacity_after_reduction 0.00MB
deduplication_capacity_saving 0.00MB
max_replication_delay 0
partnership_exclusion_threshold 315
gen1_compatibility_mode_enabled
ibm_customer
ibm_component
ibm_country
tier0_flash_compressed_data_used 0.00MB
tier1_flash_compressed_data_used 0.00MB
tier_enterprise_compressed_data_used 0.00MB
tier_nearline_compressed_data_used 0.00MB
total_reclaimable_capacity 0.00MB
unmap on
used_capacity_before_reduction 0.00MB
used_capacity_after_reduction 0.00MB
deduplication_capacity_saving 0.00MB
The system uses a default setting that is based on the discovered storage system from which the MDisk is presented. Change the default setting to any other value only when you are certain that a particular MDisk is underutilized and can handle more load, or that the MDisk is overutilized and the load should be lowered. Change this setting to very high only for SSD and flash MDisks.
This setting can be changed online, without any effect on the hosts or data availability. To
change this setting, use the chmdisk command, as shown in Example 10-4.
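As a hedged sketch (the content of Example 10-4 is not reproduced here, and the MDisk name is hypothetical), the command might look like this:
IBM_2145:ITSO_SVC_DH8:superuser>chmdisk -easytierload high mdisk3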
strip_size
spare_goal
spare_protection_min
balanced
tier nearline
slow_write_priority
fabric_type fc
site_id 1
site_name ITSO_DC1
easy_tier_load medium
encrypt no
distributed no
drive_class_id
drive_count 0
stripe_width 0
rebuild_areas_total
rebuild_areas_available
rebuild_areas_goal
dedupe no
preferred_iscsi_port_id
active_iscsi_port_id
replacement_date
Heat data files are produced approximately once a day (that is, every 24 hours) when
EasyTier is active on one or more storage pools. They summarize the activity per volume
since the prior heat data file was produced. On the SVC, the heat data file is in the
/dumps/easytier directory on the configuration node and named
dpa_heat.<node_name>.<time_stamp>.data. Existing heat data files are erased after seven days.
From the Download New Support Package or Log File window, click Download Existing
Package, as shown in Figure 10-11.
Figure 10-11 Downloading EasyTier heat data file: Download Existing Package
You can filter for heat files and select the most recent heat data file from the window shown in
Figure 10-12. Click Download and save the file wherever you want. In our example, we save
the file on our workstation to the STAT\input_files directory.
Figure 10-12 Downloading EasyTier heat data file: Filter, select, and download
STAT must be started from a Windows command prompt with the file specified as a
parameter, as shown in Example 10-5.
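A hedged sketch of the invocation follows (the content of Example 10-5 is not reproduced here; the installation directory is hypothetical and the heat data file name follows the convention described earlier):
C:\Program Files\IBM\STAT>STAT.exe input_files\dpa_heat.<node_name>.<time_stamp>.data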
You can also specify the output directory if you want. STAT creates a set of Hypertext Markup
Language (HTML) files, and the user can then open the STAT\index.html file in a browser to
view the results. Additionally, three comma-separated values (CSV) files are created and
placed in the Data_files directory.
In addition to the STAT tool, another utility is available that is a Microsoft Excel file for creating
additional graphical reports of the workload that EasyTier performs. The IBM STAT Charting
Utility takes the previous three CSV output files and turns them into graphs for simple
reporting.
The STAT Charting Utility can be downloaded from the IBM Support website:
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS5251
The graphs display the following information:
Workload Categorization report
Workload visuals help you compare activity across tiers within and across pools to help
determine the optimal drive mix for the current workloads. The output is illustrated in
Figure 10-14.
10.2.10 More information
For more information about planning and configuration considerations, best practices, and
monitoring and measurement tools, see IBM System Storage SAN Volume Controller and
Storwize V7000 Best Practices and Performance Guidelines, SG24-7521, and Implementing
IBM Easy Tier with IBM Real-time Compression, TIPS1072.
10.3 Thin provisioning
Thin provisioning presents more storage space to the hosts or servers that are connected to
the storage system than is available on the storage system. The IBM SVC has this capability
for FC and Internet Small Computer System Interface (iSCSI) provisioned volumes.
An example of thin provisioning is when a storage system contains 5000 GiB of usable
storage capacity, but the storage administrator mapped volumes of 500 GiB each to 15 hosts.
In this example, the storage administrator makes 7500 GiB of storage space visible to the
hosts, even though the storage system has only 5000 GiB of usable space, as shown in
Figure 10-17.
In this case, all 15 hosts cannot immediately use all 500 GiB that are provisioned to them.
The storage administrator must monitor the system and add storage, as needed.
Real capacity defines how much disk space is allocated to a volume. Virtual capacity is the
capacity of the volume that is reported to other SVC components (such as FlashCopy or
remote copy) and to the hosts. For example, you can create a volume with a real capacity of
only 100 GiB but a virtual capacity of 1 tebibyte (TiB). The actual space that is used by the
volume on the SVC is 100 GiB, but hosts see a 1 TiB volume.
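As a hedged sketch of creating such a volume (the pool and volume names are hypothetical, and the parameters should be verified with the CLI help for your code level), a 1 TiB thin-provisioned volume with approximately 100 GiB of initially allocated real capacity might be created as follows:
IBM_2145:ITSO_SVC_DH8:superuser>mkvdisk -mdiskgrp Pool0_Site1 -iogrp 0 -size 1 -unit tb -rsize 10% -autoexpand -grainsize 256 -warning 80% -name thin_volume01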
A directory maps the virtual address space to the real address space. The directory and the
user data share the real capacity.
The contingency capacity is initially set to the real capacity that is assigned when the volume
is created. If the user modifies the real capacity, the contingency capacity is reset to be the
difference between the used capacity and real capacity.
A volume that is created without the autoexpand feature, and therefore has a zero
contingency capacity, goes offline when the real capacity is used up (reached) and the
volume must expand.
Warning threshold: Enable the warning threshold by using email, such as Simple Mail
Transfer Protocol (SMTP), or a Simple Network Management Protocol (SNMP) trap, when
you work with thin-provisioned volumes. You can enable the warning threshold on the
volume, and on the storage pool side, especially when you do not use the autoexpand
mode. Otherwise, the thin volume goes offline if it runs out of space.
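As a hedged sketch (the volume and pool names are hypothetical), the warning threshold can be set on an existing thin-provisioned volume and on a storage pool as follows:
IBM_2145:ITSO_SVC_DH8:superuser>chvdisk -warning 80% thin_volume01
IBM_2145:ITSO_SVC_DH8:superuser>chmdiskgrp -warning 80% Pool0_Site1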
Autoexpand mode does not cause real capacity to grow much beyond the virtual capacity.
The real capacity can be manually expanded to more than the maximum that is required by
the current virtual capacity, and the contingency capacity is recalculated.
Tip: Consider the use of thin-provisioned volumes as targets in FlashCopy mappings.
Space allocation
When a thin-provisioned volume is created, a small amount of the real capacity is used for
initial metadata. Write I/Os to the grains of the thin volume (that were not previously written to)
cause grains of the real capacity to be used to store metadata and user data. Write I/Os to the
grains (that were previously written to) update the grain where data was previously written.
The grain is defined when the volume is created, and can be 32 KiB, 64 KiB, 128 KiB, or
256 KiB.
Smaller granularities can save more space, but they have larger directories. When you use
thin-provisioning with FlashCopy, specify the same grain size for the thin-provisioned volume
and FlashCopy.
Some file systems, such as New Technology File System (NTFS), write to the whole volume
before overwriting deleted files. Other file systems reuse space in preference to allocating
new space. File system problems can be moderated by tools, such as defrag, or by managing
storage by using host Logical Volume Managers (LVMs).
The thin-provisioned volume also depends on how applications use the file system. For
example, some applications delete log files only when the file system is nearly full.
Starting with V7.3, the cache subsystem architecture was redesigned. Now, thin-provisioned
volumes can benefit from lower cache functions (such as coalescing writes or prefetching),
which greatly improve performance.
Table 10-3 Maximum thin-provisioned volume virtual capacities for an extent size
Extent size (MiB)    Maximum volume real capacity (GiB)    Maximum thin-provisioned volume virtual capacity (GiB)
16                   2,048                                 2,000
32                   4,096                                 4,000
64                   8,192                                 8,000
Table 10-4 shows the maximum thin-provisioned volume virtual capacities for a grain size.
Table 10-4 Maximum thin-provisioned volume virtual capacities for a grain size
Grain size (KiB)    Maximum thin-provisioned volume virtual capacity (GiB)
32                  260,000
64                  520,000
128                 1,040,000
256                 2,080,000
For more information and detailed performance considerations for configuring thin
provisioning, see IBM System Storage SAN Volume Controller and Storwize V7000 Best
Practices and Performance Guidelines, SG24-7521. You can also go to IBM Knowledge
Center:
https://ibm.biz/BdjGMT
10.4 Unmap
There is an industry trend toward host operating systems having more control over the storage that they are using. VMware’s VAAI/VASA/VVOLs and Microsoft’s ODX are examples of
such technologies. These technologies allow host operating systems to manage data (for
example, provision, move data around) on the controller without a storage administrator
needing to do anything on the storage controller. This change also means that a user on the
host operating system does not need to know anything about the underlying technologies.
When a host allocates storage, the data is placed in a volume. To free the allocated space
back to the storage pools, human intervention is needed on the storage controller. The SCSI
Unmap feature is used to allow host operating systems to unprovision storage on the storage
controller, which means that the resources can automatically be freed up in the storage pools
and used for other purposes.
A SCSI unmappable volume is a volume for which storage unprovisioning and space reclamation can be triggered by the host operating system. With the release of V8.1 code, the
SCSI unmap command is passed through to back-end storage controllers that support the
function.
Note: Some host types will respond to this by issuing WRITE SAME UNMAP commands,
generating large amounts of I/O. Offload throttling must be enabled before upgrading to
V8.1 to prevent this extra workload from overloading MDisks. For more information, see:
https://www.ibm.com/support/docview.wss?uid=ssg1S1010697
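A hedged sketch of creating an offload throttle follows (the bandwidth value in MBps is only an illustration; see the referenced support document for the value that is appropriate for your configuration):
IBM_2145:ITSO_SVC_DH8:superuser>mkthrottle -type offload -bandwidth 100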
Tip: Implementing compression in IBM Spectrum Virtualize provides the same benefits
to internal Flash drives and externally virtualized storage systems.
Demonstration: The IBM Client Demonstration Center shows how easy it is to reduce your data footprint in the demo “Data Reduction: reduce easily data footprint on your existing IBM Spectrum Virtualize based storage with IBM Real-Time Compression”, which is available at:
https://ibm.biz/Bdjhzx
General-purpose volumes
Most general-purpose volumes are used for highly compressible data types, such as home
directories, CAD/CAM, oil and gas geo-seismic data, and log data. Storing such types of data
in compressed volumes provides immediate capacity reduction to the overall used space.
More space can be provided to users without any change to the environment.
Many file types can be stored in general-purpose servers. However, for practical information,
the estimated compression ratios are based on actual field experience. Expected
compression ratios are 50% - 60%.
File systems that contain audio, video files, and compressed files are not good candidates for
compression. The overall capacity savings on these file types are minimal.
Databases
Database information is stored in table space files. High compression ratios are common in
database volumes. Examples of databases that can greatly benefit from RtC are IBM DB2,
Oracle, and Microsoft SQL Server. Expected compression ratios are 50% - 80%.
Virtualized infrastructures
The proliferation of open systems virtualization in the market has increased the use of storage
space, with more virtual server images and backups kept online. The use of compression
reduces the storage requirements at the source.
Examples of virtualization solutions that can greatly benefit from RtC are VMware,
Microsoft Hyper-V, and kernel-based virtual machine (KVM). Expected compression ratios
are 45% - 75%.
Tip: Virtual machines (VMs) with file systems that contain compressed files are not good
compression candidates, as described in “Databases”.
At a high level, the IBM RACE component compresses data that is written into the storage
system dynamically. This compression occurs transparently, so Fibre Channel and iSCSI
connected hosts are not aware of the compression. RACE is an inline compression
technology, which means that each host write is compressed as it passes through IBM
Spectrum Virtualize to the disks. This technology has a clear benefit over other compression
technologies that are post-processing based.
These technologies do not provide immediate capacity savings. Therefore, they are not a
good fit for primary storage workloads, such as databases and active data set applications.
Capacity is saved when the data is written by the host because the host writes are smaller
when they are written to the storage pool. Like the SVC system itself, IBM RtC is a self-tuning solution: it adapts to the workload that runs on the system at any particular moment.
Compression utilities
Compression is probably most known to users because of the widespread use of
compression utilities. At a high level, these utilities take a file as their input, and parse the
data by using a sliding window technique. Repetitions of data are detected within the sliding
window history, most often 32 KiB. Repetitions outside of the window cannot be referenced.
Therefore, the file cannot be reduced in size unless data is repeated when the window
“slides” to the next 32 KiB slot.
Figure 10-18 shows compression that uses a sliding window, where the first two repetitions of
the string ABCD fall within the same compression window, and can therefore be compressed
by using the same dictionary. The third repetition of the string falls outside of this window, and
therefore cannot be compressed by using the same compression dictionary as the first two
repetitions, reducing the overall achieved compression ratio.
Traditional data compression in storage systems
The traditional approach that is taken to implement data compression in storage systems is
an extension of how compression works in the compression utilities previously mentioned.
Similar to compression utilities, the incoming data is broken into fixed chunks, and then each
chunk is compressed and extracted independently.
However, drawbacks exist to this approach. An update to a chunk requires a read of the
chunk followed by a recompression of the chunk to include the update. The larger the chunk
size chosen, the heavier the I/O penalty to recompress the chunk. If a small chunk size is
chosen, the compression ratio is reduced because the repetition detection potential is
reduced.
Figure 10-19 shows an example of how the data is broken into fixed-size chunks (in the
upper-left corner of the figure). It also shows how each chunk gets compressed
independently into variable length compressed chunks (in the upper-right side of the figure).
The resulting compressed chunks are stored sequentially in the compressed output.
Location-based compression
Both compression utilities and traditional storage systems compression compress data by
finding repetitions of bytes within the chunk that is being compressed. The compression ratio
of this chunk depends on how many repetitions can be detected within the chunk. The
number of repetitions is affected by how much the bytes stored in the chunk are related to
each other. Furthermore, the relationship between bytes is driven by the format of the object.
For example, an office document might contain textual information, and an embedded
drawing, such as this page.
Because the chunking of the file is arbitrary, it has no notion of how the data is laid out within
the document. Therefore, a compressed chunk can be a mixture of the textual information
and part of the drawing. This process yields a lower compression ratio because the different
data types mixed together cause a suboptimal dictionary of repetitions. That is, fewer
repetitions can be detected because a repetition of bytes in a text object is unlikely to be
found in a drawing.
This traditional approach to data compression is also called location-based compression. The
data repetition detection is based on the location of data within the same chunk.
This challenge was addressed with the predecide mechanism introduced from V7.1.
Predecide mechanism
Certain data chunks have a higher compression ratio than others. Compressing some of the
chunks saves little space but still requires resources, such as processor (CPU) and memory.
To avoid spending resources on uncompressible data and to provide the ability to use a
different, more effective (in this particular case) compression algorithm, IBM invented a
predecide mechanism that was introduced in V7.1.
The chunks that are below a certain compression ratio are skipped by the compression
engine, which saves CPU time and memory processing. These chunks are not compressed with the main compression algorithm, but they can still be compressed with another algorithm. They are marked and processed accordingly. The results can vary because predecide does not check the
entire block, but only a sample of it.
Temporal compression
RACE offers a technology leap beyond location-based compression, called temporal
compression. When host writes arrive at RACE, they are compressed and fill fixed-size
chunks that are also called compressed blocks. Multiple compressed writes can be
aggregated into a single compressed block. A dictionary of the detected repetitions is stored
within the compressed block.
When applications write new data or update existing data, the data is typically sent from the
host to the storage system as a series of writes. Because these writes are likely to originate
from the same application and be from the same data type, more repetitions are usually
detected by the compression algorithm. This type of data compression is called temporal
compression because the data repetition detection is based on the time that the data was
written into the same compressed block.
Temporal compression adds the time dimension that is not available to other compression
algorithms. It offers a higher compression ratio because the compressed data in a block
represents a more homogeneous set of input data.
When the same three writes are sent through RACE, as shown on Figure 10-23, the writes
are compressed together by using a single dictionary. This approach yields a higher
compression ratio than location-based compression.
10.5.4 Dual RACE instances
In V7.4, the compression code was enhanced by the addition of a second RACE instance per
SVC node. This feature takes advantage of multi-core processor architecture, and uses the
compression accelerator cards more effectively. The second RACE instance works in parallel
with the first instance, as shown in Figure 10-24.
With dual RACE enhancement, the compression performance can be boosted up to two times
for compressed workloads when compared to previous versions.
To take advantage of dual RACE, several software and hardware requirements must be met:
The software must be at or above V7.4.
Only 2145-DH8 and 2145-SV1 nodes are supported.
A second eight-core processor must be installed per SVC 2145-DH8 node.
An additional 32 gigabytes (GB) of memory must be installed per SVC 2145-DH8 node.
At least one compression accelerator card must be installed per SVC 2145-DH8 node.
The second acceleration card is not required.
For 2145-SV1 configurations, the optional cache upgrade feature expands the total system memory to 256 GB. Compression workloads can also benefit from the hardware-assisted acceleration that is offered by the addition of up to two compression accelerator cards.
Tip: Use two compression accelerator cards for the best performance.
When using the dual RACE feature, the acceleration cards are shared between RACE
instances, which means that the acceleration cards are used simultaneously by both RACE
instances. The rest of the resources, such as CPU cores and random access memory (RAM),
are evenly divided between the RACE components.
RACE technology is implemented into the IBM Spectrum Virtualize thin provisioning layer,
and is an organic part of the stack. The IBM Spectrum Virtualize software stack is shown in
Figure 10-25. Compression is transparently integrated with existing system management
design. All of the IBM Spectrum Virtualize advanced features are supported on compressed
volumes. You can create, delete, migrate, map (assign), and unmap (unassign) a compressed
volume as though it were a fully allocated volume.
In addition, you can use Real-time Compression with EasyTier on the same volumes. This
compression method provides nondisruptive conversion between compressed and
decompressed volumes. This conversion provides a uniform user experience, and eliminates
the need for special procedures when dealing with compressed volumes.
10.5.6 Data write flow
When a host sends a write request to the SVC, it reaches the upper-cache layer. The host is
immediately sent an acknowledgment of its I/Os. When the upper cache layer destages to the
RACE, the I/Os are sent to the thin-provisioning layer. They are then sent to RACE where, if necessary, the original host write or writes are compressed. The metadata that holds the index of the compressed volume is updated, if needed, and compressed as well.
After the copies are fully synchronized, the original volume copy is deleted automatically.
As a result, you have compressed data on the existing volume. This process is nondisruptive,
so the data remains online and accessible by applications and users.
This capability enables clients to regain space from the storage pool, which can then be
reused for other applications.
With the virtualization of external storage systems, the ability to compress already stored data
significantly enhances and accelerates the benefit to users. This capability enables them to
see a tremendous return on their SVC investment. On the initial purchase of an SVC with
Real-time Compression, clients can defer their purchase of new storage. When new storage
needs to be acquired, IT can purchase a smaller amount of storage than would have been required before compression.
Using IBM FlashSystem A9000 or IBM FlashSystem A9000R with the SVC
The SVC can use the IBM FlashSystem A9000 or IBM FlashSystem A9000R as external storage. This is accomplished by the steps that are described in the following sections.
SVC tasks
Perform these tasks:
1. Discover storage on the SVC. The volumes appear as managed disks.
2. Assign these managed disks to a pool containing only storage from this IBM FlashSystem
A9000 device.
3. Allocate volumes from this pool to SVC hosts that are good deduplication targets.
With SVC, using IBM FlashSystem A9000 deduplication technology is simple. Figure 10-28
shows that the Deduplication attribute of the managed disk is Active.
Deduplication status is important because it also allows SVC to recognize and enforce
restrictions:
Storage pools with deduplication MDisks should only contain MDisks from the same
IBM FlashSystem A9000 or IBM FlashSystem A9000R storage controller.
Deduplication MDisks cannot be mixed in an Easy Tier enabled storage pool.
For information about creating a compressed volume, see Chapter 7, “Volumes” on page 251.
10.5.11 Comprestimator
The Comprestimator utility to estimate expected compression ratios on existing volumes has
been built in to IBM Spectrum Virtualize since V7.6.
The built-in Comprestimator is a command-line function that analyzes an existing volume and
provides output showing an estimate of expected compression rate.
Example 10-6 An example of the command that ran over one volume ID 0
IBM_2145:ITSO_SVC_DH8:superuser>lsvdiskanalysis 0
id 0
name SQL_Data0
state estimated
started_time 151012104343
analysis_time 151012104353
capacity 300.00GB
thin_size 290.85GB
thin_savings 9.15GB
thin_savings_ratio 3.05
compressed_size 141.58GB
compression_savings 149.26GB
compression_savings_ratio 51.32
total_savings 158.42GB
total_savings_ratio 52.80
accuracy 4.97
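As a hedged sketch (the commands should be verified with the CLI help for your code level), the analysis is typically triggered before the results are listed, either for a single volume or for all volumes on the system, and its progress can be queried:
IBM_2145:ITSO_SVC_DH8:superuser>analyzevdisk 0
IBM_2145:ITSO_SVC_DH8:superuser>analyzevdiskbysystem
IBM_2145:ITSO_SVC_DH8:superuser>lsvdiskanalysisprogress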
You can use FlashCopy to help you solve critical and challenging business needs that require duplication of the data on your source Volume. Volumes can remain online and active while you
create consistent copies of the data sets. Because the copy is performed at the block level, it
operates below the host operating system and its cache. Therefore, the copy is not apparent
to the host unless it is mapped.
While the FlashCopy operation is performed, the source Volume is frozen briefly to initialize
the FlashCopy bitmap, after which I/O can resume. Although several FlashCopy options
require the data to be copied from the source to the target in the background, which can take
time to complete, the resulting data on the target Volume is presented so that the copy
appears to complete immediately. This feature means that the copy can immediately be
mapped to a host and is directly accessible for Read and Write operations.
The business applications for FlashCopy are wide-ranging. Common use cases for
FlashCopy include, but are not limited to, the following examples:
Rapidly creating consistent backups of dynamically changing data
Rapidly creating consistent copies of production data to facilitate data movement or
migration between hosts
Rapidly creating copies of production data sets for application development and testing
Rapidly creating copies of production data sets for auditing purposes and data mining
Rapidly creating copies of production data sets for quality assurance
Regardless of your business needs, FlashCopy within IBM Spectrum Virtualize is flexible
and offers a broad feature set, which makes it applicable to many scenarios.
After the FlashCopy is performed, the resulting image of the data can be backed up to tape,
as though it were the source system. After the copy to tape is complete, the image data is
redundant and the target Volumes can be discarded. For time-limited applications, such as
these examples, “no copy” or incremental FlashCopy is used most often. The use of these
methods puts less load on your server infrastructure.
When FlashCopy is used for backup purposes, the target data usually is managed as
read-only at the operating system level. This approach provides extra security by ensuring
that your target data was not modified and remains true to the source.
This approach can be used for various applications, such as recovering your production
database application after an errant batch process that caused extensive damage.
In addition to the restore option, which copies the original blocks from the target Volume to
modified blocks on the source Volume, the target can be used to perform a restore of
individual files. To do that, you need to make the target available on a host. We suggest that
you do not make the target available to the source host because seeing duplicates of disks
causes problems for most host operating systems. Copy the files to the source by using
normal host data copy methods for your environment.
For more details about how to use reverse FlashCopy, see 11.1.12, “Reverse FlashCopy” on
page 484.
This method differs from the other migration methods, which are described later in this
chapter. Common uses for this capability are host and back-end storage hardware refreshes.
You can also use the FlashCopy feature to create restart points for long running batch jobs.
This option means that if a batch job fails several days into its run, it might be possible to
restart the job from a saved copy of its data rather than rerunning the entire multiday job.
When a FlashCopy operation starts, a checkpoint creates a bitmap table that indicates that
no part of the source Volume has been copied. Each bit in the bitmap table represents one
region of the source Volume and its corresponding region on the target Volume. Each region
is called a grain.
The relationship between the two Volumes defines the way data is copied and is called a
FlashCopy mapping.
Figure 11-1 describes the basic terms used with FlashCopy. All elements are explained in
detail later in this chapter.
Clone: Sometimes referred to as full copy. A point in time copy of a Volume with
background copy of the data from the source Volume to the target. All blocks from the
source Volume are copied to the target Volume. The target copy becomes a usable
independent Volume.
Backup: Sometimes referred to as incremental. A backup FlashCopy mapping consists of
a point in time full copy of a source Volume, plus periodic increments or “deltas” of data
that has changed between two points in time.
The FlashCopy mapping has four property attributes (clean rate, copy rate, autodelete,
incremental) and seven different states that are described later in this chapter. Users can
perform these actions on a FlashCopy mapping:
Create: Define a source and target, and set the properties of the mapping.
Prepare: The system must be prepared before a FlashCopy copy starts. Preparing flushes
the cache and briefly puts it into write-through mode, so no data is lost.
Start: The FlashCopy mapping is started and the copy begins immediately. The target
Volume is immediately accessible.
Stop: The FlashCopy mapping is stopped (either by the system or by the user).
Depending on the state of the mapping, the target Volume is usable or not.
Modify: Some properties of the FlashCopy mapping can be modified after creation.
Delete: Delete the FlashCopy mapping. This action does not delete any of the Volumes
(source or target) from the mapping.
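From the CLI, each of these actions corresponds to a command. The following is a minimal sketch that uses hypothetical Volume and mapping names (VOL_SRC, VOL_TGT, FCMAP_1); verify the options against the CLI reference for your code level.
Create the mapping and set its properties:
IBM_2145:ITSO_SVC_DH8:superuser>mkfcmap -source VOL_SRC -target VOL_TGT -name FCMAP_1 -copyrate 50
Prepare, then start the mapping:
IBM_2145:ITSO_SVC_DH8:superuser>prestartfcmap FCMAP_1
IBM_2145:ITSO_SVC_DH8:superuser>startfcmap FCMAP_1
Stop, modify, or delete the mapping:
IBM_2145:ITSO_SVC_DH8:superuser>stopfcmap FCMAP_1
IBM_2145:ITSO_SVC_DH8:superuser>chfcmap -copyrate 80 FCMAP_1
IBM_2145:ITSO_SVC_DH8:superuser>rmfcmap FCMAP_1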
The source and target Volumes must be the same size. The minimum granularity that IBM
Spectrum Virtualize supports for FlashCopy is an entire Volume. It is not possible to use
FlashCopy to copy only part of a Volume.
Important: As with any point-in-time copy technology, you are bound by operating system
and application requirements for interdependent data and the restriction to an entire
Volume.
The source and target Volumes must belong to the same IBM SVC system, but they do not
have to be in the same I/O Group or storage pool.
Volumes that are members of a FlashCopy mapping cannot have their size increased or
decreased while they are members of the FlashCopy mapping.
All FlashCopy operations occur on FlashCopy mappings. FlashCopy does not alter the
Volumes. However, multiple operations can occur at the same time on multiple FlashCopy
mappings, thanks to the use of Consistency Groups.
Dependent writes
It is crucial to use Consistency Groups when a data set spans multiple Volumes. Consider the
following typical sequence of writes for a database update transaction:
1. A write is run to update the database log, which indicates that a database update is about
to be performed.
2. A second write is run to perform the actual update to the database.
3. A third write is run to update the database log, which indicates that the database update
completed successfully.
The database ensures the correct ordering of these writes by waiting for each step to
complete before the next step is started. However, if the database log (updates 1 and 3) and
the database (update 2) are on separate Volumes, it is possible for the FlashCopy of the
database Volume to occur before the FlashCopy of the database log. This sequence can
result in the target Volumes seeing writes 1 and 3 but not 2 because the FlashCopy of the
database Volume occurred before the write was completed.
In this case, if the database was restarted by using the backup that was made from the
FlashCopy target Volumes, the database log indicates that the transaction completed
successfully. In fact, it did not complete successfully because the FlashCopy of the Volume
with the database file was started (the bitmap was created) before the write completed to the
Volume. Therefore, the transaction is lost and the integrity of the database is in question.
Most of the actions that the user can perform on a FlashCopy mapping are the same for
Consistency Groups.
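For a database whose log and data files are on separate Volumes, as in the previous example, a Consistency Group ensures that all mappings are started at the same point in time. A minimal CLI sketch, with hypothetical Volume and Consistency Group names:
IBM_2145:ITSO_SVC_DH8:superuser>mkfcconsistgrp -name FCCG_DB
IBM_2145:ITSO_SVC_DH8:superuser>mkfcmap -source DB_Data -target DB_Data_copy -consistgrp FCCG_DB
IBM_2145:ITSO_SVC_DH8:superuser>mkfcmap -source DB_Log -target DB_Log_copy -consistgrp FCCG_DB
IBM_2145:ITSO_SVC_DH8:superuser>startfcconsistgrp -prep FCCG_DB
Starting the Consistency Group starts both mappings simultaneously, so the copies of the log and data Volumes represent the same point in time.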
Both of these layers have various levels and methods of caching data to provide better speed.
Because the IBM SAN Volume Controller and, therefore, FlashCopy sit below these layers,
they are unaware of the cache at the application or operating system layers.
To ensure the integrity of the copy that is made, it is necessary to flush the host operating
system and application cache for any outstanding reads or writes before the FlashCopy
operation is performed. Failing to flush the host operating system and application cache
produces what is referred to as a crash consistent copy.
The resulting copy requires the same type of recovery procedure, such as log replay and file
system checks, that is required following a host crash. FlashCopies that are crash consistent
often can be used after file system and application recovery procedures.
Various operating systems and applications provide facilities to stop I/O operations and
ensure that all data is flushed from host cache. If these facilities are available, they can be
used to prepare for a FlashCopy operation. When this type of facility is unavailable, the host
cache must be flushed manually by quiescing the application and unmounting the file system
or drives.
The target Volumes are overwritten with a complete image of the source Volumes. Before the
FlashCopy mappings are started, any data that is held on the host operating system (or
application) caches for the target Volumes must be discarded. The easiest way to ensure that
no data is held in these caches is to unmount the target Volumes before the FlashCopy
operation starts.
Preferred practice: From a practical standpoint, when you have an application that is
backed by a database and you want to make a FlashCopy of that application’s data, it is
sufficient in most cases to use the write-suspend method that is available in most modern
databases. This is possible because the database maintains strict control over I/O.
This approach contrasts with flushing data from both the application and the backing
database, which remains the safer and generally suggested method. However, the
write-suspend method can be used when such facilities do not exist or when your
environment is time sensitive.
IBM Spectrum Protect Snapshot protects data with integrated, application-aware snapshot
backup and restore capabilities that use FlashCopy technologies in the IBM Spectrum
Virtualize.
You can protect data that is stored by IBM DB2, SAP, Oracle, Microsoft Exchange, and
Microsoft SQL Server applications. You can create and manage Volume-level snapshots for
file systems and custom applications.
For more information about IBM Spectrum Protect Snapshot, see IBM Knowledge Center:
https://www.ibm.com/support/knowledgecenter/en/SSERFV_8.1.0
The grain size can be either 64 KB or 256 KB. The default is 256 KB. The grain size cannot be
selected by the user when creating a FlashCopy mapping from the GUI. The FlashCopy
bitmap contains 1 bit for each grain. The bit records whether the associated grain is split by
copying the grain from the source to the target.
After a FlashCopy mapping is created, the grain size for that FlashCopy mapping cannot be
changed. When a FlashCopy mapping is created, if the grain size parameter is not specified
and one of the Volumes is already part of a FlashCopy mapping, the grain size of that
mapping is used.
If neither Volume in the new mapping is already part of another FlashCopy mapping, and at
least one of the Volumes in the mapping is a compressed Volume, the default grain size is
64 KB for performance reasons. Otherwise, the default grain size is 256 KB.
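Although the grain size cannot be selected in the GUI, it can be set when the mapping is created from the CLI. A minimal sketch with hypothetical names (valid values are 64 and 256, and the grain size cannot be changed after the mapping is created):
IBM_2145:ITSO_SVC_DH8:superuser>mkfcmap -source VOL_COMP_SRC -target VOL_COMP_TGT -grainsize 64 -copyrate 0 -name FCMAP_64K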
As shown in Figure 11-3, when data is written on a source Volume, the grain where the
to-be-changed blocks reside is first copied to the target Volume and then modified on the
source Volume. The bitmap is updated to track the copy.
With IBM FlashCopy, the target Volume is immediately accessible for both Read and Write
operations. Therefore, a target Volume can be modified even if it is part of a FlashCopy
mapping. As shown in Figure 11-4, when a Write operation is performed on the target
Volume, the grain that contains the blocks to be changed is first copied from the source (Copy
on Demand). It is then modified with the new value. The bitmap is modified so the grain from
the source will not be copied again even if it is changed or if there is a background copy
enabled.
Note: If all the blocks of the grain to be modified are changed, there is no need to copy the
source grain first. No copy on demand occurs, and the grain is directly modified on the target.
The indirection layer intercepts any I/O coming from a host (read or write operation) and
addressed to a FlashCopy Volume (source or target). It determines whether the addressed
Volume is a source or a target, the direction of the I/O (read or write), and the state of the
bitmap table for the FlashCopy mapping that the addressed Volume is in. It then decides what
operation to perform. The different I/O indirections are described below.
If the bitmap indicates that the grain has not been copied yet, then the grain is first copied
on the target (copy on write), the bitmap table is updated, and the grain is modified on the
source, as shown in Figure 11-6.
If the bitmap indicates the grain to be modified on the target has already been copied, then
it is directly changed. The bitmap is not updated, and the grain is modified on the target
with the new value as shown in Figure 11-8.
Note: The bitmap is not updated in that case. Otherwise, it might be copied from the
source later, if a background copy is ongoing or if write operations are made on the source.
That process would over-write the changed grain on the target.
Read from a target volume
Performing a read operation on the target volume returns the value in the grain on the source
or on the target, depending on the bitmap:
If the bitmap indicates that the grain has already been copied from the source or that the
grain has already been modified on the target, then the grain on the target is read as
shown in Figure 11-9.
If the bitmap indicates that the grain has not been copied yet from the source or was not
modified on the target, then the grain on the source is read as shown in Figure 11-9.
If the source has multiple targets, the indirection layer algorithm behaves differently for
target I/Os. For more information about multi-target operations, see 11.1.11, “Multiple target
FlashCopy” on page 480.
This copy-on-write process introduces significant latency into write operations. To isolate the
active application from this additional latency, the FlashCopy indirection layer is placed
logically between upper and lower cache. Therefore, the additional latency that is introduced
by the copy-on-write process is encountered only by the internal cache operations and not by
the application.
The background copy rate property determines the speed at which grains are copied as a
background operation immediately after the FlashCopy mapping is started. That speed is
defined by the user when the FlashCopy mapping is created, and can be changed dynamically
for each individual mapping, whatever its state. Mapping copy rate values can be 0 - 150, with
the corresponding speeds shown in Table 11-1.
When the background copy function is not performed (copy rate = 0), the target Volume
remains a valid copy of the source data only while the FlashCopy mapping remains in place.
Value      Data copied per second   Grains per second (256 KiB grain)   Grains per second (64 KiB grain)
11 - 20    256 KiB                  1                                   4
21 - 30    512 KiB                  2                                   8
31 - 40    1 MiB                    4                                   16
41 - 50    2 MiB                    8                                   32
51 - 60    4 MiB                    16                                  64
61 - 70    8 MiB                    32                                  128
71 - 80    16 MiB                   64                                  256
The grains per second numbers represent the maximum number of grains that the IBM SAN
Volume Controller copies per second. This amount assumes that the bandwidth to the
managed disks (MDisks) can accommodate this rate.
If the IBM SAN Volume Controller cannot achieve these copy rates because of insufficient
bandwidth from the nodes to the MDisks, the background copy I/O contends for resources on an
equal basis with the I/O that is arriving from the hosts. Background copy I/O and I/O that is
arriving from the hosts tend to see an increase in latency and a consequential reduction in
throughput.
Background copy and foreground I/O continue to make progress, and do not stop, hang, or
cause the node to fail.
The background copy is performed by one of the nodes that belong to the I/O group in which
the source Volume resides. This responsibility is moved to the other node in the I/O group if
the node that performs the background copy and stopping processes fails.
Using the -incremental option when creating the FlashCopy mapping allows the system to
keep the bitmap as it is when the mapping is stopped. Therefore, when the mapping is started
again (at another point-in-time), the bitmap is reused and only changes between the two
copies are applied to the target.
A system that provides Incremental FlashCopy capability allows the system administrator to
refresh a target Volume without having to wait for a full copy of the source Volume to be
complete. At the point of refreshing the target Volume, for a particular grain, if the data has
changed on the source or target Volumes, then the grain from the source Volume will be
copied to the target.
The advantages of Incremental FlashCopy are realized only if a previous full copy of the source
Volume has been obtained. Incremental FlashCopy helps only with subsequent recovery time
objectives (RTOs, the time needed to recover data from a previous state); it does not help with
the initial RTO.
For example, in Figure 11-11 on page 477, a FlashCopy mapping has been defined between
a source Volume and a target Volume with the -incremental option:
The mapping is started on Copy1 date. A full copy of the source Volume is made, and the
bitmap is updated every time that a grain is copied. At the end of Copy1, all grains have
been copied and the target Volume is an exact replica of the source Volume at the
beginning of Copy1. When the mapping is then stopped, the bitmap is maintained thanks to
the -incremental option.
Changes are made on the source Volume and the bitmap is updated, although the
FlashCopy mapping is not active. For example, grains E and C on the source are changed
to G and H, and their corresponding bits are updated in the bitmap. The target Volume is
untouched.
The mapping is started again on Copy2 date. The bitmap indicates that only grains E and
C were changed, so only G and H are copied on the target Volume. There is no need to
copy the other grains, as they were already copied the first time. The copy time is much
quicker than for the first copy as only a fraction of the source Volume is copied.
Figure 11-11 Incremental FlashCopy example
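A minimal CLI sketch of the sequence illustrated in Figure 11-11, using hypothetical Volume and mapping names; the -incremental property must be set when the mapping is created:
IBM_2145:ITSO_SVC_DH8:superuser>mkfcmap -source VOL_SRC -target VOL_TGT -incremental -copyrate 50 -name FCMAP_INC
IBM_2145:ITSO_SVC_DH8:superuser>startfcmap -prep FCMAP_INC
(Copy1 completes as a full copy; the mapping is then stopped and the bitmap is kept)
IBM_2145:ITSO_SVC_DH8:superuser>stopfcmap FCMAP_INC
IBM_2145:ITSO_SVC_DH8:superuser>startfcmap -prep FCMAP_INC
(Copy2 copies only the grains that changed since Copy1)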
When using the CLI to perform FlashCopy on Volumes, before you start a FlashCopy
(regardless of the type and options specified), issue a prestartfcmap or
prestartfcconsistgrp command. These commands put the cache into write-through mode
and flush the I/O that is currently bound for your Volume. After FlashCopy is started,
an effective copy of a source Volume to a target Volume is created.
The content of the source Volume is presented immediately on the target Volume and the
original content of the target Volume is lost.
FlashCopy commands can then be issued to the FlashCopy Consistency Group and,
therefore, simultaneously for all of the FlashCopy mappings that are defined in the
Consistency Group. For example, when a FlashCopy start command is issued to the
Consistency Group, all of the FlashCopy mappings in the Consistency Group are started at
the same time. This simultaneous start results in a point-in-time copy that is consistent across
all of the FlashCopy mappings that are contained in the Consistency Group.
Rather than using prestartfcmap or prestartfcconsistgrp, you can also use the -prep
parameter in the startfcmap or startfcconsistgrp command to prepare and start
FlashCopy in one step.
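Both approaches are shown in the following sketch for a Consistency Group with a hypothetical name.
Two-step prepare and start:
IBM_2145:ITSO_SVC_DH8:superuser>prestartfcconsistgrp FCCG_DB
IBM_2145:ITSO_SVC_DH8:superuser>startfcconsistgrp FCCG_DB
Single-step prepare and start:
IBM_2145:ITSO_SVC_DH8:superuser>startfcconsistgrp -prep FCCG_DB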
Suspended
The mapping started, but it did not complete. Access to the metadata is lost, which causes
both the source and target Volume to go offline. When access to the metadata is restored,
the mapping returns to the copying or stopping state and the source and target Volumes
return online. The background copy process resumes. If the data was not flushed and was
written to the source or target Volume before the suspension, it is in the cache until the
mapping leaves the suspended state.
Every single source-target relation is a FlashCopy mapping and is maintained with its own
bitmap table. There is no consistency group bitmap table.
If a particular Volume is the source Volume for multiple FlashCopy mappings, you might want
to create separate Consistency Groups to separate each mapping of the same source
Volume. Regardless of whether the source Volume with multiple target Volumes is in the
same consistency group or in separate consistency groups, the resulting FlashCopy
produces multiple identical copies of the source data.
Dependencies
When a source Volume has multiple target Volumes, a mapping is created for each
source-target relationship. When data is changed on the source Volume, it is first copied to
the target Volume, thanks to the copy-on-write mechanism used by FlashCopy. You can
create up to 256 targets for a single source volume. Therefore, a single write operation on the
source volume could potentially result in 256 write operations (one per target Volume). This
would generate a large workload that the system would not be able to handle and lead to a
heavy performance impact on front-end operations.
To avoid any significant impact on performance due to multiple targets, FlashCopy creates
dependencies between the targets. Dependencies can be considered as “hidden” FlashCopy
mappings that are not visible to and cannot be managed by the user. A dependency is
created between the most recent target and the previous one (in order of start time).
Figure 11-13 shows an example of a source volume with three targets.
Of the three targets, Target T0 was started first and is considered the “oldest.” Target T1 was
started next and is considered the “next oldest,” and Target T2 was started last and is
considered the “most recent” or “newest.” The “next oldest” target for T2 is T1, and the “next
oldest” target for T1 is T0. T2 is newer than T1, and T1 is newer than T0.
Table 11-3 Sequence example of write I/Os on a source with multiple targets (columns: Source Volume, Target T0, Target T1, Target T2)
An intermediate target disk (not the oldest or the newest) treats the set of newer target
Volumes and the true source Volume as a type of composite source. It treats all older
Volumes as a kind of target (and behaves like a source to them).
Target write with Multiple Target FlashCopy (Copy on Demand)
A write to an intermediate or the newest target Volume must consider the state of the grain
within its own mapping, and the state of the grain of the next oldest mapping:
If the grain in the target that is being written is already copied and if the grain of the next
oldest mapping is not yet copied, the grain must be copied before the write can proceed,
to preserve the contents of the next oldest mapping.
For example, in Figure 11-13 on page 481, if the grain “G” is changed on T2, then it needs
to be copied to T1 (next oldest not yet copied) first and then changed on T2.
If the grain in the target that is being written is not yet copied, the grain is copied from the
oldest copied grain in the mappings that are newer than the target, or from the source if
none is copied. For example, in Figure 11-13 on page 481, if the red grain on T0 is written,
then it will first be copied from T1 (data “E”). After this copy is done, the write can be
applied to the target.
Host read and write operations on a target Volume are handled as follows:
Target Volume, grain not yet copied:
Host read: If any newer targets exist for this source in which this grain was copied, read from the oldest of these targets. Otherwise, read from the source.
Host write: Hold the write. Check the dependency target Volumes to see whether the grain was copied. If the grain is not copied to the next oldest target for this source, copy the grain to the next oldest target. Then, write to the target.
Target Volume, grain already copied:
Host read: Read from the target Volume.
Host write: Write to the target Volume.
For example, if the mapping Source-T2 was stopped, the mapping enters the stopping state
while the cleaning process copies the data of T2 to T1 (next oldest). After all of the data is
copied, Source-T2 mapping enters the stopped state, and T1 is no longer dependent upon
T2. However, T0 remains dependent upon T1.
For example, with Table 11-3 on page 482, if you stop the Source-T2 mapping on “Time 7,”
then the grains that are not yet copied on T1 are copied from T2 to T1. Reading T1 would
then be like reading the Source at the time T1 was started (“Time 2”).
If you stop the Source-T1 mapping while Source-T0 mapping and Source-T2 are still in
copying mode, then the grains that are not yet copied on T0 are copied from T1 to T0 (next
oldest). T0 now depends upon T2.
Your target Volume is still accessible while the cleaning process is running. When the system
is operating in this mode, it is possible that host I/O operations can prevent the cleaning
process from reaching 100% if the I/O operations continue to copy new data to the target
Volumes.
Cleaning Rate
The data rate at which data is copied from the Target of the mapping being stopped to the
next oldest Target is determined by the cleaning rate. This is a property of the FlashCopy
mapping itself and can be changed dynamically. It is measured like the copyrate property, but
both properties are independent. Table 11-5 provides the relationship of the cleaning rate
values to the attempted number of grains to be split per second.
Value      Data copied per second   Grains per second (256 KiB grain)   Grains per second (64 KiB grain)
11 - 20    256 KiB                  1                                   4
21 - 30    512 KiB                  2                                   8
31 - 40    1 MiB                    4                                   16
41 - 50    2 MiB                    8                                   32
51 - 60    4 MiB                    16                                  64
61 - 70    8 MiB                    32                                  128
71 - 80    16 MiB                   64                                  256
A key advantage of the IBM Spectrum Virtualize Multiple Target Reverse FlashCopy function
is that the reverse FlashCopy does not destroy the original target. This feature enables
processes that are using the target, such as a tape backup or tests, to continue uninterrupted.
IBM Spectrum Virtualize also provides the ability to create an optional copy of the source
Volume to be made before the reverse copy operation starts. This ability to restore back to the
original source data can be useful for diagnostic purposes.
The production disk is instantly available with the backup data. Figure 11-14 shows an
example of Reverse FlashCopy with a simple FlashCopy mapping (single target).
This example assumes that a simple FlashCopy mapping has been created between the
“Source” Volume and “Target” Volume, and no background copy is set.
When the FlashCopy mapping starts (Date of Copy1), if Source Volume is changed (write
operations on grain “A”), then the modified grains are first copied to Target, the bitmap table is
updated, and the Source grain is modified (from “A” to “G”).
At a given time (“Corruption Date”), data is modified on another grain (grain “D” below), so the
original grain is first copied to the Target Volume and the bitmap table is updated. Unfortunately,
the new data written on the Source Volume is corrupted.
The storage administrator can then use the Reverse FlashCopy feature by completing these
steps:
1. Create a new mapping from Target to Source (if not already created). Because
FlashCopy recognizes that the target Volume of this new mapping is already a source in
another mapping, it will not create another bitmap table. It will use the existing bitmap table
instead, with its updated bits.
2. Start the new mapping. Thanks to the existing bitmap table, only the modified grains are
copied.
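A minimal CLI sketch of these two steps, with hypothetical Volume and mapping names; the -restore parameter, which allows a mapping to be started even though its target is the source of another mapping, is an assumption to verify against the CLI reference for your code level:
IBM_2145:ITSO_SVC_DH8:superuser>mkfcmap -source VOL_TGT -target VOL_SRC -name FCMAP_REVERSE
IBM_2145:ITSO_SVC_DH8:superuser>prestartfcmap FCMAP_REVERSE
IBM_2145:ITSO_SVC_DH8:superuser>startfcmap -restore FCMAP_REVERSE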
Consistency Groups are reversed by creating a set of new reverse FlashCopy mappings and
adding them to a new reverse Consistency Group. Consistency Groups cannot contain more
than one FlashCopy mapping with the same target Volume.
This method provides an exact number of bytes because image mode Volumes might not line
up one-to-one on other measurement unit boundaries. Example 11-1 lists the size of the
test_image_vol_1 Volume. The test_image_vol_copy_1 Volume is then created, which
specifies the same size.
Example 11-1 Listing the size of a Volume in bytes and creating a Volume of equal size
IBM_SVC:DH8:superuser>lsvdisk -bytes test_image_vol_1
id 12
name test_image_vol_1
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 3
mdisk_grp_name temp_migration_pool
capacity 21474836480
type image
formatted no
formatting no
mdisk_id 5
compressed_copy no
uncompressed_used_capacity 21474836480
parent_mdisk_grp_id 3
parent_mdisk_grp_name temp_migration_pool
encrypt no
......
13 test_image_vol_copy_1 0 io_grp0 online 0 test_pool_1 20.00GB striped
600507680283818B300000000000000F 0 1 not_empty 0 no 0 0 test_pool_1 yes no 13
test_image_vol_copy_1
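The equal-size Volume can be created by passing the capacity in bytes to the mkvdisk command. A minimal sketch that reuses the names and pool from Example 11-1:
IBM_SVC:DH8:superuser>mkvdisk -mdiskgrp test_pool_1 -iogrp io_grp0 -size 21474836480 -unit b -name test_image_vol_copy_1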
Tip: Alternatively, you can use the expandvdisksize and shrinkvdisksize commands to
modify the size of a Volume.
Overview of a FlashCopy sequence of events: The following tasks show the FlashCopy
sequence:
1. Associate the source data set with a target location (one or more source and target
Volumes).
2. Create a FlashCopy mapping for each source Volume to the corresponding target
Volume. The target Volume must be equal in size to the source Volume.
3. Discontinue access to the target (application dependent).
4. Prepare (pre-trigger) the FlashCopy:
a. Flush the cache for the source.
b. Discard the cache for the target.
5. Start (trigger) the FlashCopy:
a. Pause I/O (briefly) on the source.
b. Resume I/O on the source.
c. Start I/O on the target.
Flush done The FlashCopy mapping automatically moves from the preparing
state to the prepared state after all cached data for the source is
flushed and all cached data for the target is no longer valid.
Start When all of the FlashCopy mappings in a Consistency Group are in the
prepared state, the FlashCopy mappings can be started. To preserve
the cross-Volume Consistency Group, the start of all of the FlashCopy
mappings in the Consistency Group must be synchronized correctly
concerning I/Os that are directed at the Volumes by using the
startfcmap or startfcconsistgrp command.
Flush failed If the flush of data from the cache cannot be completed, the FlashCopy
mapping enters the stopped state.
Copy complete After all of the source data is copied to the target and there are no
dependent mappings, the state is set to copied. If the option to
automatically delete the mapping after the background copy completes
is specified, the FlashCopy mapping is deleted automatically. If this
option is not specified, the FlashCopy mapping is not deleted
automatically and can be reactivated by preparing and starting again.
11.1.15 Thin provisioned FlashCopy
FlashCopy source and target Volumes can be thin-provisioned.
With this configuration, use a copyrate of 0 only. In this state, the virtual capacity of the
target Volume is identical to the capacity of the source Volume, but the real capacity (the
capacity that is actually used on the storage system) is lower, as shown in Figure 11-15. The
real size of the target Volume increases with writes that are performed on the source Volume
on grains that are not already copied. Eventually, if the entire source Volume is written (which
is unlikely), the real capacity of the target Volume becomes identical to that of the source Volume.
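A minimal sketch of creating a thin-provisioned target of the same virtual size and a no-copy mapping, with hypothetical names; the -rsize and -autoexpand parameters follow the standard mkvdisk syntax for thin-provisioned Volumes:
IBM_2145:ITSO_SVC_DH8:superuser>mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 300 -unit gb -rsize 2% -autoexpand -name VOL_TGT_THIN
IBM_2145:ITSO_SVC_DH8:superuser>mkfcmap -source VOL_SRC -target VOL_TGT_THIN -copyrate 0 -name FCMAP_SNAP
The real capacity of VOL_TGT_THIN starts at 2% of its virtual size and grows only as grains are copied to it.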
Performance: The best performance is obtained when the grain size of the
thin-provisioned Volume is the same as the grain size of the FlashCopy mapping.
However, there is a lock for each grain. The lock can be in shared or exclusive mode. For
multiple targets, a common lock is shared, and the mappings are derived from a particular
source Volume. The lock is used in the following modes under the following conditions:
The lock is held in shared mode during a read from the target Volume, which touches a
grain that was not copied from the source.
The lock is held in exclusive mode while a grain is being copied from the source to the
target.
If the lock is held in shared mode and another process wants to use the lock in shared mode,
this request is granted unless a process is already waiting to use the lock in exclusive mode.
If the lock is held in shared mode and it is requested to be exclusive, the requesting process
must wait until all holders of the shared lock free it.
Similarly, if the lock is held in exclusive mode, a process wanting to use the lock in shared or
exclusive mode must wait for it to be freed.
Node failure
Normally, two copies of the FlashCopy bitmap are maintained. One copy of the FlashCopy
bitmap is on each of the two nodes that make up the I/O Group of the source Volume. When a
node fails, one copy of the bitmap for all FlashCopy mappings whose source Volume is a
member of the failing node’s I/O Group becomes inaccessible.
FlashCopy continues with a single copy of the FlashCopy bitmap that is stored as non-volatile
in the remaining node in the source I/O Group. The system metadata is updated to indicate
that the missing node no longer holds a current bitmap. When the failing node recovers or a
replacement node is added to the I/O Group, the bitmap redundancy is restored.
Because the storage area network (SAN) that links SVC nodes to each other and to the
MDisks is made up of many independent links, it is possible for a subset of the nodes to be
temporarily isolated from several of the MDisks. When this situation happens, the managed
disks are said to be Path Offline on certain nodes.
Other nodes: Other nodes might see the managed disks as Online because their
connection to the managed disks is still up.
Other configuration events complete synchronously, and no informational events are logged
as a result of those events. The following informational events are logged:
PREPARE_COMPLETED
This state transition is logged when the FlashCopy mapping or Consistency Group enters
the prepared state as a result of a user request to prepare. The user can now start (or
stop) the mapping or Consistency Group.
COPY_COMPLETED
This state transition is logged when the FlashCopy mapping or Consistency Group enters
the idle_or_copied state when it was in the copying or stopping state. This state
transition indicates that the target disk now contains a complete copy and no longer
depends on the source.
STOP_COMPLETED
This state transition is logged when the FlashCopy mapping or Consistency Group enters
the stopped state as a result of a user request to stop. It is logged after the automatic copy
process completes. This state transition includes mappings where no copying needed to
be performed. This state transition differs from the event that is logged when a mapping or
group enters the stopped state as a result of an I/O error.
For example, you can perform a Metro Mirror copy to duplicate data from Site_A to Site_B,
and then perform a daily FlashCopy to back up the data to another location.
Table 11-7 lists the supported combinations of FlashCopy and remote copy. In the table,
remote copy refers to Metro Mirror and Global Mirror.
Table 11-7 FlashCopy and remote copy interaction (columns: Component, Remote copy primary site, Remote copy secondary site)
Bitmaps that are governing I/O redirection (I/O indirection layer) are maintained in both
nodes of the IBM SAN Volume Controller I/O Group to prevent a single point of failure.
FlashCopy mapping and Consistency Groups can be automatically withdrawn after the
completion of the background copy.
Thin-provisioned FlashCopy (or Snapshot in the graphical user interface (GUI)) uses disk
space only when updates are made to the source or target data, and not for the entire
capacity of a Volume copy.
FlashCopy licensing is based on the virtual capacity of the source Volumes.
Incremental FlashCopy copies all of the data when you first start FlashCopy, and then only
the changes when you stop and start FlashCopy mapping again. Incremental FlashCopy
can substantially reduce the time that is required to re-create an independent image.
Reverse FlashCopy enables FlashCopy targets to become restore points for the source
without breaking the FlashCopy relationship, and without having to wait for the original
copy operation to complete.
The size of the source and target Volumes cannot be altered (increased or decreased)
while a FlashCopy mapping is defined.
IBM FlashCopy limitations for IBM Spectrum Virtualize V8.1 are listed in Table 11-8.
Although these presets meet most FlashCopy requirements, they do not support all possible
FlashCopy options. If more specialized options are required that are not supported by the
presets, the options must be performed by using CLI commands.
This section describes the preset options and their use cases.
Use case
The user wants to produce a copy of a Volume without affecting the availability of the Volume.
The user does not anticipate many changes to be made to the source or target Volume; a
significant proportion of the Volumes remains unchanged.
By ensuring that only changes require a copy of data to be made, the total amount of disk
space that is required for the copy is reduced. Therefore, many Snapshot copies can be used
in the environment.
Snapshots are useful for providing protection against corruption or similar issues with the
validity of the data, but they do not provide protection from physical controller failures.
Snapshots can also provide a vehicle for performing repeatable testing (including “what-if”
modeling that is based on production data) without requiring a full copy of the data to be
provisioned.
For example, in Figure 11-16, the source Volume user can still work on the original data
Volume (like a production Volume) and the target Volumes can be accessed instantly. Users
of target Volumes can modify the content and perform “what-if” tests for example (versioning).
Storage administrators do not need to perform full copies of a Volume for temporary tests.
However, the target Volumes need to remain linked to the source. Anytime the link is broken
(FlashCopy mapping stopped or deleted), the target Volumes become unusable.
Clone
The clone preset creates a replica of the Volume, which can be changed without affecting the
original Volume. After the copy completes, the mapping that was created by the preset is
automatically deleted.
Clone uses the following preset parameters:
Background copy rate: 50
Incremental: No
Delete after completion: Yes
Cleaning rate: 50
Primary copy source pool: Target pool
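A CLI equivalent of the clone preset can be approximated as follows (a sketch with hypothetical Volume and mapping names, using the preset parameters listed above):
IBM_2145:ITSO_SVC_DH8:superuser>mkfcmap -source VOL_SRC -target VOL_CLONE -copyrate 50 -cleanrate 50 -autodelete -name FCMAP_CLONE
IBM_2145:ITSO_SVC_DH8:superuser>startfcmap -prep FCMAP_CLONE
When the background copy completes, the mapping is deleted automatically because of the -autodelete option, and VOL_CLONE remains as an independent Volume.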
Use case
Users want a copy of the Volume that they can modify without affecting the original Volume.
After the clone is established, there is no expectation that it is refreshed or that there is any
further need to reference the original production data again. If the source is thin-provisioned,
the target is thin-provisioned for the auto-create target.
Backup
The backup preset creates an incremental point-in-time replica of the production data. After
the copy completes, the backup view can be refreshed from the production data, with minimal
copying of data from the production Volume to the backup Volume.
Use case
The user wants to create a copy of the Volume that can be used as a backup if the source
becomes unavailable, such as due to loss of the underlying physical controller. The user
plans to periodically update the secondary copy, and does not want to suffer from the
resource demands of creating a new copy each time. Incremental FlashCopy times are faster
than full copy, which helps to reduce the window where the new backup is not yet fully
effective. If the source is thin-provisioned, the target is also thin-provisioned in this option for
the auto-create target.
Another use case, which the preset name does not suggest, is to create and maintain
(periodically refresh) an independent image that can be subjected to intensive I/O (for
example, data mining) without affecting the source Volume’s performance.
Note: IBM Spectrum Virtualize in general and FlashCopy in particular are not backup
solutions on their own. FlashCopy backup preset, for example, will not schedule a regular
copy of your Volumes. It over-writes the mapping target and does not make a copy of it
before starting a new “backup” operation. It is the user’s responsibility to handle the target
Volumes (for example, saving them to tapes) and the scheduling of the FlashCopy
operations.
When using the IBM Spectrum Virtualize GUI, FlashCopy components can be seen in
different windows. Three windows are related to FlashCopy and are reachable through the
Copy Services menu as shown in Figure 11-17.
The FlashCopy window is accessible by clicking Copy Services → FlashCopy and
displays all the Volumes that are defined in the system. Volumes that are part of a FlashCopy
mapping appear as shown in Figure 11-18. By clicking a source Volume, you can display the
list of its target Volumes.
Figure 11-18 Source and target Volumes displayed in the FlashCopy window
Note that all Volumes are listed in this window, and target Volumes appear twice (as a regular
Volume and as a target Volume in a FlashCopy mapping).
The Consistency Groups window is accessible by clicking Copy Services → Consistency
Groups. Use this window, as shown in Figure 11-19, to list the FlashCopy mappings that are
part of Consistency Groups, as well as those that are not in any Consistency Group.
Open the FlashCopy window from the Copy Services menu and select the Volume that you
want to create the FlashCopy mapping for, as shown in Figure 11-21. Right-click the Volume or
click the Actions menu.
Depending on whether you created the target Volumes for your FlashCopy mappings or you
want the system to create the target Volumes for you, the following options are available:
If you created the target Volumes, see “Creating a FlashCopy mapping with existing target
Volumes” on page 498.
If you want the system to create the target Volumes for you, see “Creating a FlashCopy
mapping and creating target Volumes” on page 503.
Attention: When starting a FlashCopy mapping from a source Volume to a target Volume,
data that is on the target is over-written. The system will not prevent you from selecting a
target Volume that is mapped to a host and already contains data.
1. Right-click the Volume that you want to create a FlashCopy mapping for, and select
Advanced FlashCopy → Use Existing Target Volumes, as shown in Figure 11-22.
2. The Create FlashCopy Mapping window opens as shown in Figure 11-23. In this window,
you create the mapping between the selected source Volume (the field is pre-filled with the
name of the Volume you selected earlier) and the target Volume you want to create a
mapping with. Then, click Add.
Important: The source Volume and the target Volume must be of equal size. Therefore,
only targets of the same size are shown in the list for a source Volume.
Volumes that are already a target in an existing FlashCopy mapping cannot be a target
in a new mapping. Therefore, only Volumes that are not already targets can be
selected.
To remove a mapping that was created, click the remove icon, as shown in Figure 11-24 on page 500.
3. In the next window, select one FlashCopy preset. The GUI provides the following presets
to simplify common FlashCopy operations, as shown in Figure 11-25 on page 501. See
11.2.1, “FlashCopy presets” on page 493 for more details about the presets.
– Snapshot: Creates a point-in-time snapshot copy of the source Volume.
– Clone: Creates a point-in-time replica of the source Volume.
– Backup: Creates an incremental FlashCopy mapping that can be used to recover data
or objects if the system experiences data loss. These backups can be copied multiple
times from source and target Volumes.
Note: If you want to create a simple Snapshot of a Volume, then you probably want the
target Volume to be defined as thin-provisioned to save space on your system. If you
use an existing target, then make sure it is thin-provisioned first. Using the Snapshot
preset will not make the system check whether the target Volume is thin-provisioned.
Figure 11-25 FlashCopy mapping preset selection
When selecting a preset, some options such as Background Copy Rate, Incremental, and
Delete mapping after completion are automatically changed or selected. You can still
change the automatic settings, but this is not recommended for these reasons:
For example, if you select the Backup preset but then clear Incremental or select Delete
mapping after completion, you will lose the benefits of the incremental FlashCopy and
will need to copy the entire source Volume each time you start the mapping.
As another example, if you select the Snapshot preset but then change the Background
Copy Rate, you will end up with a full copy of your source Volume.
For more information about the Background Copy Rate and the Cleaning Rate, see
Table 11-1 on page 475 or see Table 11-5 on page 484.
Figure 11-26 Select or not a Consistency Group for the FlashCopy mapping
The FlashCopy mapping is now ready for use. It is visible in the three different windows:
FlashCopy, FlashCopy mappings, and Consistency Groups.
Note: Creating a FlashCopy mapping does not automatically start any copy. You need to
manually start the mapping.
Creating a FlashCopy mapping and creating target Volumes
Complete the following steps to create target Volumes for FlashCopy mapping:
1. Right-click the Volume that you want to create a FlashCopy mapping for, and select
Advanced FlashCopy → Create New Target Volumes, as shown in Figure 11-27.
2. In the next window, select one FlashCopy preset. The GUI provides the following presets
to simplify common FlashCopy operations, as shown in Figure 11-28 on page 504. See
11.2.1, “FlashCopy presets” on page 493 for more details about the presets.
– Snapshot: Creates a point-in-time snapshot copy of the source Volume.
– Clone: Creates a point-in-time replica of the source Volume.
– Backup: Creates an incremental FlashCopy mapping that can be used to recover data
or objects if the system experiences data loss. These backups can be copied multiple
times from source and target Volumes.
Note: If you want to create a simple Snapshot of a Volume, then you probably want the
target Volume to be defined as thin-provisioned to save space on your system. If you
use an existing target, then make sure it is thin-provisioned first. Using the Snapshot
preset will not make the system check whether the target Volume is thin-provisioned.
When selecting a preset, some options such as Background Copy Rate, Incremental,
and Delete mapping after completion are automatically changed or selected. You can still
change the automatic settings but this is not recommended for these reasons:
– For example, if you select the Backup preset but then clear Incremental or select
Delete mapping after completion, you will lose the benefits of the incremental
FlashCopy. You will need to copy the entire source Volume each time you start the
mapping.
– As another example, if you select the Snapshot preset but then change the
Background Copy Rate, you will end up with a full copy of your source Volume.
For more details about the Background Copy Rate and the Cleaning Rate, see
Table 11-1 on page 475 or see Table 11-5 on page 484.
When your FlashCopy mapping setup is ready, click Next.
3. You can choose whether to add the mappings to a Consistency Group as shown in
Figure 11-29.
If you want to include this FlashCopy mapping in a Consistency Group, select Yes, add
the mappings to a consistency group, and select the Consistency Group from the
drop-down menu.
Figure 11-30 Select the type of Volumes for the created targets
Note: If you selected multiple source Volumes to create new FlashCopy mappings, then
if you select Inherit properties from source Volume, it applies to each newly created
target Volume. For example, if you selected a compressed Volume and a generic
Volume as sources for the new FlashCopy mappings, then the system creates a
compressed target and a generic target.
6. The final step allows the user to select the pool where the new target Volumes should be
created, as shown in Figure 11-31. It can be the same pool as the source Volumes or
another existing pool. Click Finish.
Figure 11-31 Select the pool for the new target Volumes
The FlashCopy mapping is now ready for use. It is visible in the three different windows:
FlashCopy, FlashCopy mappings, and Consistency Groups.
3. You can select multiple Volumes at a time, which automatically creates a snapshot for each
of them. The system then automatically groups the FlashCopy mappings in a new consistency
group, as shown in Figure 11-33.
The newly created consistency group is automatically started.
To create and start a backup, complete these steps:
1. Open the FlashCopy window from the Copy Services → FlashCopy menu.
2. Select the Volume that you want to create a backup of, and right-click it or click
Actions → Create Backup, as shown in Figure 11-36.
3. You can select multiple Volumes at a time, which automatically creates a backup for each
of them. The system then automatically groups the FlashCopy mappings in a new consistency
group, as shown in Figure 11-37.
2. Enter the FlashCopy Consistency Group name that you want to use and click Create as
shown in Figure 11-39.
Consistency Group name: You can use the letters A - Z and a - z, the numbers 0 - 9, and
the underscore (_) character. The Consistency Group name can be 1 - 63 characters.
2. Select the Consistency Group that you want to create the FlashCopy mapping in. If you
prefer not to create a FlashCopy mapping in a Consistency Group, select Not in a Group.
Then, right-click the selected consistency group or click Actions → Create FlashCopy
Mapping, as shown in Figure 11-40.
3. Select a Volume in the Source Volume column by using the drop-down menu. Then, select
a Volume in the Target Volume column by using the drop-down menu. Click Add, as
shown in Figure 11-41.
Figure 11-41 Select source and target Volumes for the FlashCopy mapping
Repeat this step to create other mappings. To remove a mapping that was created, click
the remove icon. Click Next.
Volumes that are already target Volumes in another FlashCopy mapping cannot be the
target of a new FlashCopy mapping. Therefore, they do not appear in the list.
4. In the next window, select one FlashCopy preset. The GUI provides the following presets
to simplify common FlashCopy operations, as shown in Figure 11-42. See 11.2.1,
“FlashCopy presets” on page 493 for more details about the presets.
– Snapshot: Creates a point-in-time snapshot copy of the source Volume.
– Clone: Creates a point-in-time replica of the source Volume.
– Backup: Creates an incremental FlashCopy mapping that can be used to recover data
or objects if the system experiences data loss. These backups can be copied multiple
times from source and target Volumes.
When selecting a preset, some options such as Background Copy Rate, Incremental,
and Delete mapping after completion are automatically changed or selected. You can
still change the automatic settings, but this is not recommended for these reasons:
– For example, if you select the Backup preset but then clear Incremental or select
Delete mapping after completion, you will lose the benefits of the incremental
FlashCopy. You will need to copy the entire source Volume each time you start the
mapping.
– As another example, if you select the Snapshot preset but then change the
Background Copy Rate, you will end up with a full copy of your source Volume.
For more information about the Background Copy Rate and the Cleaning Rate, see
Table 11-1 on page 475 or see Table 11-5 on page 484.
11.2.9 Showing related Volumes
To show related Volumes for a specific FlashCopy mapping, complete these steps:
1. Open the FlashCopy, Consistency Groups, or FlashCopy Mappings window.
2. Right-click a FlashCopy mapping, or a consistency group or a Volume (depending on
which window you are in) and select Show Related Volumes, as shown in Figure 11-43.
Figure 11-43 Showing related Volumes for a mapping, a consistency group or another Volume
3. In the Related Volumes window, you can see the related mapping for a Volume as shown
in Figure 11-44. If you click one of these Volumes, you can see its properties.
If you are in the FlashCopy window, you can only select target Volumes to be moved.
Selecting a source Volume will not allow you to move the mapping to a consistency group.
3. In the Move FlashCopy Mapping to Consistency Group window, select the Consistency
Group for the FlashCopy mappings selection by using the drop-down menu, as shown in
Figure 11-46.
Figure 11-46 Selecting the consistency group where to move the FlashCopy mapping
11.2.11 Removing FlashCopy mappings from Consistency Groups
To remove one or multiple FlashCopy mappings from a Consistency Group, complete these
steps:
1. Open the FlashCopy, Consistency Groups, or FlashCopy Mappings window.
2. Right-click the FlashCopy mappings that you want to remove and select Remove from
Consistency Group, as shown in Figure 11-47.
Note: Only FlashCopy mappings that belong to a consistency group can be removed.
3. In the Remove FlashCopy Mapping from Consistency Group window, click Remove, as
shown in Figure 11-48.
Note: It is not possible to select multiple FlashCopy mappings to edit their properties all
at the same time.
2. In the Edit FlashCopy Mapping window, you can modify the background copy rate and the
cleaning rate for a selected FlashCopy mapping, as shown in Figure 11-50.
For more information about the “Background Copy Rate” and the “Cleaning Rate”, see
Table 11-1 on page 475 or see Table 11-5 on page 484.
3. Click Save to confirm your changes.
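The same properties can be changed from the CLI with the chfcmap command. A minimal sketch, with a hypothetical mapping name:
IBM_2145:ITSO_SVC_DH8:superuser>chfcmap -copyrate 80 FCMAP_1
IBM_2145:ITSO_SVC_DH8:superuser>chfcmap -cleanrate 40 FCMAP_1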
FlashCopy mapping name: You can use the letters A - Z and a - z, the numbers 0 - 9,
and the underscore (_) character. The FlashCopy mapping name can be 1 - 63
characters.
3. Enter the new name that you want to assign to the Consistency Group and click Rename,
as shown in Figure 11-54.
Note: It is not possible to select multiple consistency groups to edit their names all at
the same time.
Consistency Group name: The name can consist of the letters A - Z and a - z, the
numbers 0 - 9, the dash (-), and the underscore (_) character. The name can be 1 - 63
characters. However, the name cannot start with a number, a dash, or an underscore.
4. If you still have target Volumes that are inconsistent with the source Volumes and you want
to delete these FlashCopy mappings, select Delete the FlashCopy mapping even when
the data on the target Volume is inconsistent, or if the target Volume has other
dependencies. Click Delete.
11.2.15 Deleting a FlashCopy Consistency Group
Important: Deleting a Consistency Group does not delete the FlashCopy mappings that it
contains.
Important: Only FlashCopy mappings that do not belong to a consistency group can be
started individually. If FlashCopy mappings are part of a consistency group, then they can
only be started all together by using the consistency group start command.
It is the start command that defines the “point-in-time”. It is the moment that is used as a
reference (T0) for all subsequent operations on the source and the target Volumes. To start
one or multiple FlashCopy mappings that do not belong to a consistency group, complete
these steps:
1. Open the FlashCopy, Consistency Groups, or FlashCopy Mappings window.
2. Right-click the FlashCopy mappings that you want to start and select Start, as shown in
Figure 11-59.
You can check the FlashCopy state and the progress of the mappings in the Status and
Progress columns of the table as shown in Figure 11-60.
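The equivalent CLI commands are startfcmap for a stand-alone mapping and startfcconsistgrp for a Consistency Group, and progress can then be checked with lsfcmapprogress. A minimal sketch with hypothetical names (the -prep flag prepares the mapping by flushing the cache before the copy is triggered):
startfcmap -prep fcmap0
startfcconsistgrp -prep fccg0
lsfcmapprogress fcmap0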
FlashCopy Snapshots are dependent on the source Volume and should be in a “copying”
state as long as the mapping is started.
FlashCopy clones and the first occurrence of a FlashCopy backup can take a long time to
complete, depending on the size of the source Volume and on the copyrate value. Subsequent
occurrences of FlashCopy backups are faster because only the changes that were made
between two occurrences are copied.
For more information about FlashCopy starting operations and states, see 11.1.10, “Starting
FlashCopy mappings and Consistency Groups” on page 477.
Important: Only FlashCopy mappings that do not belong to a consistency group can be
stopped individually. If FlashCopy mappings are part of a consistency group, then they can
only be stopped all together by using the consistency group stop command.
The only reason to stop a FlashCopy mapping should be for an incremental FlashCopy. When
the first occurrence of an incremental FlashCopy is started, a full copy of the source Volume
is made. When 100% of the source Volume is copied, the FlashCopy mapping does not stop
automatically and a manual stop can be performed. The target Volume is available for read
and write operations during the copy and after the mapping is stopped.
In any other case, stopping a FlashCopy mapping interrupts the copy and resets the bitmap
table. Because only part of the data from the source Volume has been copied, the copied
grains might be meaningless without the remaining ones. Therefore, the target Volumes are
placed offline and are unusable, as shown in Figure 11-61.
Figure 11-61 Showing target Volumes state and FlashCopy mappings status
Note: FlashCopy mappings can be in a stopping state for a long time if you have created
dependencies between several targets. It is in a cleaning mode. See “Stopping process in
a Multiple Target FlashCopy - Cleaning Mode” on page 483 for more information about
dependencies and stopping process.
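For reference, stopping can also be performed from the CLI with stopfcmap for a stand-alone mapping or stopfcconsistgrp for a Consistency Group. A minimal sketch with hypothetical names:
stopfcmap fcmap0
stopfcconsistgrp fccg0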
For every FlashCopy mapping that is created on an IBM Spectrum Virtualize system, a
bitmap table is created to track the copied grains. By default, the system allocates 20 MiB of
memory for a minimum of 10 TiB of FlashCopy source volume capacity and 5 TiB of
incremental FlashCopy source volume capacity.
Depending on the grain size of the FlashCopy mapping, the memory usage differs. One MiB
of memory provides the following volume capacity for the specified I/O group:
For clones and snapshots with a 256 KiB grain size, 2 TiB of total FlashCopy source volume capacity
For clones and snapshots with a 64 KiB grain size, 512 GiB of total FlashCopy source volume capacity
For incremental FlashCopy with a 256 KiB grain size, 1 TiB of total incremental FlashCopy source volume capacity
For incremental FlashCopy with a 64 KiB grain size, 256 GiB of total incremental FlashCopy source volume capacity
Review Table 11-9 to calculate the memory requirements and confirm that your system is
able to accommodate the total installation size.
Note: The actual amount of functionality might increase based on settings such as grain size
and strip size.
In this context, FlashCopy includes not only the FlashCopy function itself, but also Global
Mirror with Change Volumes and active-active (HyperSwap) relationships.
For multiple FlashCopy targets, you must consider the number of mappings. For example, for
a mapping with a grain size of 256 KiB, 8 KiB of memory allows one mapping between a
16 GiB source volume and a 16 GiB target volume. Alternatively, for a mapping with a 256 KiB
grain size, 8 KiB of memory allows two mappings between one 8 GiB source volume and two
8 GiB target volumes.
When creating a FlashCopy mapping, if you specify an I/O group other than the I/O group of
the source volume, the memory accounting goes toward the specified I/O group, not toward
the I/O group of the source volume.
When creating new FlashCopy relationships or mirrored volumes, additional bitmap space is
allocated automatically by the system if required.
For FlashCopy mappings, only one I/O group consumes bitmap space. By default, the I/O
group of the source volume is used.
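The bitmap memory that is allocated to FlashCopy per I/O group can be displayed and adjusted from the CLI. The following sketch assumes io_grp0 and a new allocation of 40 MiB; the flash_copy_total_memory and flash_copy_free_memory fields of the detailed lsiogrp view are given as we recall them, so verify them against the CLI reference for your code level:
lsiogrp io_grp0
chiogrp -feature flash -size 40 io_grp0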
When you create a reverse mapping, such as when you run a restore operation from a
snapshot to its source volume, a bitmap is created.
When you configure change volumes for use with Global Mirror, two internal FlashCopy
mappings are created for each change volume.
Transparent Cloud Tiering can help to solve business needs that require duplication of data of
your source Volume. Volumes can remain online and active while you create snapshot copies
of the data sets. Transparent Cloud Tiering operates below the host operating system and its
cache. Therefore, the copy is not apparent to the host.
IBM Spectrum Virtualize has built-in software algorithms that allow the Transparent Cloud
Tiering function to interact securely with the cloud, for example through Information Dispersal
Algorithms (IDA), which are essentially the interface to IBM Cloud Object Storage.
Object storage is a general term that refers to the entity in which IBM Cloud Object Storage
organizes, manages, and stores units of data. To transform these snapshots of traditional
data into object storage, the storage nodes and the IDA ingest the data and transform it into
metadata and a number of slices. The object can be read by using a subset of those slices.
When an object storage entity is stored as IBM Cloud Object Storage, the objects must be
manipulated or managed as a whole unit. Therefore, objects cannot be accessed or updated
partially.
IBM Spectrum Virtualize uses internal software components to support an HTTP-based REST
application programming interface (API) that interacts with an external cloud service provider
or private cloud.
For more information about the IBM Cloud Object Storage portfolio, go to:
https://ibm.biz/Bdsc7m
Demonstration: The IBM Client Demonstration Center has a demo available here:
https://www.ibm.com/systems/clientcenterdemonstrations/faces/dcDemoView.jsp?demoId=2445
The use of Transparent Cloud Tiering can help businesses to manipulate data as follows:
Creating a consistent snapshot of dynamically changing data
Creating a consistent snapshot of production data to facilitate data movement or migration
between systems running at different locations
Creating a snapshot of production data sets for application development and testing
Creating a snapshot of production data sets for quality assurance
Using secure data tiering to off-premises cloud providers
From a technical standpoint, ensure that you evaluate the network capacity and bandwidth
requirements to support your data migration to off-premises infrastructure. To maximize
productivity, match the amount of data that must be transmitted to the cloud with your
available network capacity.
From a security standpoint, ensure that your on-premises or off-premises cloud infrastructure
supports your requirements in terms of methods and level of encryption.
Regardless of your business needs, Transparent Cloud Tiering within IBM Spectrum
Virtualize can provide opportunities to manage exponential data growth and to manipulate
data at low cost.
Today, many cloud service providers offer a number of storage-as-a-service solutions, such
as content repository, backup, and archive. By combining these services, IBM Spectrum
Virtualize can help you solve many challenges related to rapid data growth, scalability, and
manageability at attractive costs.
When Transparent Cloud Tiering is applied as your backup strategy, IBM Spectrum Virtualize
uses the same FlashCopy functions to produce a point-in-time snapshot of an entire Volume
or set of Volumes.
Many operating systems and applications provide mechanisms to stop I/O operations and
ensure that all data is flushed from the host cache. If these mechanisms are available, they can
be used in combination with snapshot operations. When these mechanisms are not available,
it might be necessary to flush the cache manually by quiescing the application and
unmounting the file system or logical drives.
When choosing cloud object storage as a backup solution, be aware that the object storage
must be managed as a whole. Backup and restore of individual files, folders, and partitions
are not possible.
To interact with cloud service providers or a private cloud, IBM Spectrum Virtualize requires
the correct architecture and specific connection properties. Cloud service providers, for their
part, offer attractive prices per unit of object storage and deliver an easy-to-use interface.
Normally, cloud providers charge little for the object storage space itself and apply charges
mainly to outbound traffic from the cloud.
This approach can be used for various applications, such as recovering your production
database application after an errant batch process that caused extensive damage.
Note: You should always consider the bandwidth characteristics and network capabilities
when choosing to use Transparent Cloud Tiering.
The restore of individual files by using Transparent Cloud Tiering is not possible. As mentioned,
object storage is unlike file or block storage, so it must be managed as a whole unit and
cannot be accessed partially. Cloud object storage is accessible by using an HTTP-based
REST API.
A Volume that contains two physical copies in two different storage pools cannot be part of
Transparent Cloud Tiering.
Cloud Tiering snapshots cannot be taken from a Volume that is part of migration activity
across storage pools.
Because VVols are managed by a specific VMware application, these Volumes are not
candidates for Transparent Cloud Tiering.
File System Volumes, such as Volumes provisioned by the IBM Storwize V7000 Unified
platform, are not qualified for Transparent Cloud Tiering.
Using your IBM Spectrum Virtualize management GUI, click Settings → System → DNS and
insert your DNS server IPv4 or IPv6 address. The DNS name can be anything that you want,
and is used as a reference. Click Save after you complete the choices, as shown in Figure 11-64.
2. Click Next to continue. You must select one of three cloud service providers:
– SoftLayer (now renamed IBM Bluemix)
– OpenStack Swift
– Amazon S3
Figure 11-66 shows the options available.
3. In the next window, you must complete the settings of the Cloud Provider, credentials, and
security access keys. The required settings can change depending on your cloud service
provider. An example of an empty form for an IBM Bluemix connection is shown in
Figure 11-67.
5. The cloud credentials can be viewed and updated at any time by using the function icons
on the left side of the GUI and clicking Settings → System → Transparent Cloud Tiering.
From this window, you can also verify the status, the data usage statistics, and the upload
and download bandwidth limits that are set to support this functionality.
In the account information window, you can visualize your cloud account information. This
window also enables you to remove the account.
An example of visualizing your cloud account information is shown in Figure 11-69.
Any Volume can be added to the Cloud Volumes. However, snapshots only work for Volumes
that are not related to any other copy service.
2. After Cloud Volumes is selected, a new window opens, and you can use the GUI to select
one or more Volumes for which you want to enable cloud snapshots, or you can add Volumes
to the list, as shown in Figure 11-71.
4. IBM Spectrum Virtualize GUI provides two options for you to select. If the first option is
selected, the system decides what type of snapshot is created based on previous objects
for each selected Volume. If a full copy (full snapshot) of a Volume has already been
created, then the system makes an incremental copy of the Volume.
The second option creates a full snapshot of one or more selected Volumes. You can
select the second option for a first occurrence of a snapshot and click Finish, as shown in
Figure 11-73. You can also select the second option even if another full copy of the
Volume already exists.
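Cloud snapshots can also be triggered from the CLI. The following is only a sketch: it assumes that the backupvolume command with its -full flag and the lsvolumebackupgeneration command with its -volume parameter are available at your code level, so verify the exact names and parameters in the CLI reference. VOL01 is a hypothetical Volume name:
backupvolume -full VOL01
lsvolumebackupgeneration -volume VOL01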
The Cloud Volumes window shows complete information about the Volumes and their
snapshots. The GUI shows the following information:
– Name of the Volume
– ID of the Volume assigned by the IBM Spectrum Virtualize
– The size of all snapshots that are taken of the Volume
– The date and time that the last snapshot was created
– The number of snapshots that are taken for every Volume
– The snapshot status
– The restore status
– The Volume group for a set of Volumes
– The Volume UID
Figure 11-74 shows an example of a Cloud Volumes list.
“Managing” a snapshot actually means deleting one or multiple versions. The list of point-in-time
copies appears and provides details about their status, type, and snapshot date, as shown in
Figure 11-76. From this window, an administrator can delete old snapshots (old point-in-time
copies) if they are no longer needed. The most recent copy cannot be deleted. If you want to
delete the most recent copy, you must first disable Cloud Tiering for the specified Volume.
11.4.5 Restoring cloud snapshots
This option allows IBM Spectrum Virtualize to restore snapshots from the cloud to the
selected Volumes or to create new Volumes with the restored data.
If the cloud account is shared among systems, IBM Spectrum Virtualize queries the
snapshots that are stored in the cloud, and enables you to restore to a new Volume. To
restore a Volume’s snapshot, complete these steps:
1. Open the Cloud Volumes window.
2. Right-click a Volume and select Restore, as shown in Figure 11-77.
If the snapshot version that you selected has later generations (more recent Snapshot
dates), then the newer copies are removed from the cloud.
4. The IBM Spectrum Virtualize GUI provides two options to restore the snapshot from cloud.
You can restore the snapshot from cloud directly to the selected Volume, or create a new
Volume to restore the data on, as shown in Figure 11-79. Make a selection and click Next.
Note: Restoring a snapshot on the existing Volume overwrites the data on the Volume.
The Volume is taken offline (no read or write access) and the data from the point-in-time
copy of the Volume is written. The Volume returns online when all data is restored from
the cloud.
5. If you selected the Restore to a new Volume option, then you need to enter the following
information for the new Volume to be created with the snapshot data, as shown in
Figure 11-80:
– Name
– Storage Pool
– Capacity Savings (None, Compressed or Thin-provisioned)
– I/O group
Note that you are not asked to enter the Volume size because the new Volume's size is
identical to that of the snapshot copy.
Enter the settings for the new Volume and click Next.
If you chose to restore the data from the cloud to a new Volume, the new Volume appears
immediately in the Volumes window. However, it is taken offline until all the data from the
snapshot is written. The new Volume is totally independent. It is not defined as a target in a
FlashCopy mapping with the selected Volume for example. It is not mapped to a host either.
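A restore can also be requested from the CLI. This is a sketch only, under the assumption that the restorevolume command and its -generation parameter are available and behave as described at your code level (verify against the CLI reference). VOL01 is a hypothetical Volume name and 2 a hypothetical snapshot generation:
restorevolume -generation 2 VOL01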
Volume mirroring is provided by a specific Volume mirroring function in the I/O stack. It cannot
be manipulated like a FlashCopy or other types of copy Volumes. However, this feature
provides migration functionality, which can be obtained by splitting the mirrored copy from the
source or by using the migrate to function. Volume mirroring cannot control backend storage
mirroring or replication.
With Volume mirroring, host I/O completes when both copies are written. This feature is
enhanced with a tunable latency tolerance, which provides an option to give preference to
host latency at the cost of temporarily losing redundancy between the two copies. This
tunable time-out value is set to either Latency or Redundancy.
The Latency tuning option, which is set with chvdisk -mirrorwritepriority latency, is the
default. It prioritizes host I/O latency, which yields a preference to host I/O over availability.
However, you might need to give preference to redundancy in your environment when
availability is more important than I/O response time. Use the chvdisk
-mirrorwritepriority redundancy command to set the redundancy option.
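For example, to give preference to redundancy for an existing mirrored Volume (the Volume name VOL_DB01 is hypothetical), and then to verify the setting in the detailed Volume view:
chvdisk -mirrorwritepriority redundancy VOL_DB01
lsvdisk VOL_DB01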
Regardless of which option you choose, Volume mirroring can provide extra protection for
your environment.
Migration: Although these migration methods do not disrupt access, you must take a brief
outage to install the host drivers for your IBM SAN Volume Controller if you do not already
have them installed.
With Volume mirroring, you can move data to different MDisks within the same storage pool or
move data between different storage pools. Using Volume mirroring over Volume migration is
beneficial because with Volume mirroring, storage pools do not need to have the same extent
size as is the case with Volume migration.
Note: Volume mirroring does not create a second Volume before you split copies. Volume
mirroring adds a second copy of the data under the same Volume. Therefore, you end up
having one Volume presented to the host with two copies of data that are connected to this
Volume. Only splitting copies creates another Volume, and then both Volumes have only
one copy of the data.
This approach helps when you have copies of different types, for example generic and
compressed, because each copy then uses its own independent cache and performs its own
read prefetch. Destaging of the cache can be done independently for each copy, so one copy
does not affect the performance of the other copy.
Also, because the Storwize destage algorithm is MDisk aware, it can tune or adapt the
destaging process, depending on MDisk type and usage, for each copy independently.
For more details about Volume Mirroring, see Chapter 7, “Volumes” on page 251.
IBM Spectrum Virtualize provides a single point of control when remote copy is enabled in
your network (regardless of the disk subsystems that are used) if those disk subsystems are
supported by the system.
The general application of remote copy services is to maintain two real-time synchronized
copies of a Volume. Often, the two copies are geographically dispersed between two IBM
Spectrum Virtualize systems. However, it is possible to use MM or GM within a single system
(within an I/O Group). If the master copy fails, you can enable an auxiliary copy for I/O
operations.
Tips: Intracluster MM/GM uses more resources within the system when compared to an
intercluster MM/GM relationship, where resource allocation is shared between the
systems. Use intercluster MM/GM when possible. For mirroring Volumes in the same
system, it is better to use Volume Mirroring or the FlashCopy feature.
A typical application of this function is to set up a dual-site solution that uses two SVC or
Storwize systems. The first site is considered the primary site or production site, and the
second site is considered the backup site or failover site. The failover site is activated when a
failure at the first site is detected.
In the storage layer, a Storwize family system has the following characteristics and
requirements:
The system can perform MM and GM replication with other storage-layer systems.
The system can provide external storage for replication-layer systems or IBM SAN Volume
Controller.
The system cannot use a storage-layer system as external storage.
In the replication layer, an SVC or a Storwize system has the following characteristics and
requirements:
The system can perform MM and GM replication with other replication-layer systems.
The system cannot provide external storage for a replication-layer system.
The system can use a storage-layer system as external storage.
A Storwize family system is in the storage layer by default, but the layer can be changed. For
example, you might want to change a Storwize V7000 to a replication layer if you want to
virtualize other Storwize systems.
Note: Before you change the system layer, the following conditions must be met:
No host object can be configured with worldwide port names (WWPNs) from a Storwize
family system.
No system partnerships can be defined.
No Storwize family system can be visible on the SAN fabric.
In your IBM Storwize system, use the lssystem command to check the current system layer,
as shown in Example 11-2.
Example 11-2 Output from the lssystem command showing the system layer
IBM_Storwize:ITSO_7K:superuser>lssystem
id 000001002140020E
name ITSO_V7K
...
lines omitted for brevity
...
easy_tier_acceleration off
has_nas_key no
layer replication
...
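If the conditions listed earlier are met, the layer of a Storwize system can be changed from the CLI with the chsystem command, for example:
chsystem -layer replication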
Note: Consider the following rules for creating remote partnerships between the IBM SAN
Volume Controller and Storwize Family systems:
An IBM SAN Volume Controller is always in the replication layer.
By default, the IBM Storwize systems are in the storage layer, but can be changed to
the replication layer.
A system can form partnerships only with systems in the same layer.
Starting in V6.4, an IBM SAN Volume Controller or Storwize system in the replication
layer can virtualize an IBM Storwize system in the storage layer.
Note: For more information about restrictions and limitations of native IP replication, see
11.8.2, “IP partnership limitations” on page 581.
Supported multiple system mirroring topologies
Multiple system mirroring supports various partnership topologies, as shown in the example
in Figure 11-83. This example is a star topology (A → B, A → C, and A → D).
Figure 11-83 shows four systems in a star topology, with System A at the center. System A
can be a central DR site for the three other locations.
By using a star topology, you can migrate applications by using a process such as the one
described in the following example:
1. Suspend application at A.
2. Remove the A → B relationship.
3. Create the A → C relationship (or the B → C relationship).
4. Synchronize to system C, and ensure that A → C is established.
Other supported partnership topologies include a triangle topology (A → B, A → C, and
B → C) and a fully connected topology (A → B, A → C, A → D, B → C, B → D, and C → D).
Figure 11-85 is a fully connected mesh in which every system has a partnership to each of
the three other systems. This topology enables Volumes to be replicated between any pair of
systems, for example A → B, A → C, and B → C.
Although systems can have up to three partnerships, Volumes can be part of only one remote
copy relationship, for example A → B.
System partnership intermix: All of the preceding topologies are valid for the intermix of
the IBM SAN Volume Controller with the Storwize V7000 if the Storwize V7000 is set to the
replication layer and running IBM Spectrum Virtualize code 6.3.0 or later.
An application that performs many database updates is designed with the concept of
dependent writes. With dependent writes, it is important to ensure that an earlier write
completed before a later write is started. Reversing or performing the order of writes
differently than the application intended can undermine the application’s algorithms and can
lead to problems, such as detected or undetected data corruption.
The IBM Spectrum Virtualize Metro Mirror and Global Mirror implementation operates in a
manner that is designed to always keep a consistent image at the secondary site. The Global
Mirror implementation uses complex algorithms that identify sets of data and number those
sets of data in sequence. The data is then applied at the secondary site in the defined
sequence.
Operating in this manner ensures that if the relationship is in a Consistent_Synchronized
state, the Global Mirror target data is at least crash consistent, and supports quick recovery
through application crash recovery facilities.
For more dependent writes information, see 11.1.13, “FlashCopy and image mode Volumes”
on page 486.
Therefore, these commands can be issued simultaneously for all MM/GM relationships that
are defined within that Consistency Group, or to a single MM/GM relationship that is not part
of a Remote Copy Consistency Group. For example, when a startrcconsistgrp command is
issued to the Consistency Group, all of the MM/GM relationships in the Consistency Group
are started at the same time.
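For example, assuming a Remote Copy Consistency Group named CG_SAP and a stand-alone relationship named REL_LOG (both names are hypothetical), the start commands are issued as follows:
startrcconsistgrp CG_SAP
startrcrelationship REL_LOG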
Certain uses of MM/GM require the manipulation of more than one relationship. Remote
Copy Consistency Groups can group relationships so that they are manipulated in unison.
Although Consistency Groups can be used to manipulate sets of relationships that do not
need to satisfy these strict rules, this manipulation can lead to undesired side effects. The
rules behind a Consistency Group mean that certain configuration commands are prohibited.
These configuration commands are not prohibited if the relationship is not part of a
Consistency Group.
If one application finishes its background copy more quickly than the other application,
MM/GM still refuses to grant access to its auxiliary Volumes even though it is safe in this case.
The MM/GM policy is to refuse access to the entire Consistency Group if any part of it is
inconsistent. Stand-alone relationships and Consistency Groups share a common
configuration and state model. All of the relationships in a non-empty Consistency Group
have the same state as the Consistency Group.
Zoning
At least two FC ports of every node of each system must communicate with each other to create
the partnership. Switch zoning is critical to facilitate intercluster communication.
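After zoning is in place, the partnership must be created from both systems before relationships can be defined. The following sketch assumes an FC partnership with a remote system named REMOTE_SYS (hypothetical); the -linkbandwidthmbits and -backgroundcopyrate parameters are given as we recall them, so verify them against the CLI reference for your code level:
mkfcpartnership -linkbandwidthmbits 1024 -backgroundcopyrate 50 REMOTE_SYS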
These channels are maintained and updated as nodes and links appear and disappear from
the fabric, and are repaired to maintain operation where possible. If communication between
the systems is interrupted or lost, an event is logged (and the Metro Mirror and Global Mirror
relationships stop).
Alerts: You can configure the system to raise Simple Network Management Protocol
(SNMP) traps to the enterprise monitoring system to alert on events that indicate an
interruption in internode communication occurred.
Intercluster links
All IBM SAN Volume Controller nodes maintain a database of other devices that are visible on
the fabric. This database is updated as devices appear and disappear.
Devices that advertise themselves as IBM SAN Volume Controller or Storwize V7000 nodes
are categorized according to the system to which they belong. Nodes that belong to the same
system establish communication channels between themselves and begin to exchange
messages to implement clustering and the functional protocols of IBM Spectrum Virtualize.
Nodes that are in separate systems do not exchange messages after initial discovery is
complete, unless they are configured together to perform a remote copy relationship.
The intercluster link carries control traffic to coordinate activity between two systems. The link
is formed between one node in each system. The traffic between the designated nodes is
distributed among logins that exist between those nodes.
If the designated node fails (or all of its logins to the remote system fail), a new node is
chosen to carry control traffic. This node change causes the I/O to pause, but it does not put
the relationships in a ConsistentStopped state.
With synchronous copies, host applications write to the master Volume, but they do not
receive confirmation that the write operation completed until the data is written to the auxiliary
Volume. This action ensures that both the Volumes have identical data when the copy
completes. After the initial copy completes, the Metro Mirror function always maintains a fully
synchronized copy of the source data at the target site.
Increased distance directly affects host I/O performance because the writes are synchronous.
Use the requirements for application performance when you are selecting your Metro Mirror
auxiliary location.
Consistency Groups can be used to maintain data integrity for dependent writes, which is
similar to FlashCopy Consistency Groups.
Important: Performing Metro Mirror across I/O Groups within a system is not supported.
By using standard single-mode connections, the supported distance between two systems in
a Metro Mirror partnership is 10 km (6.2 miles), although greater distances can be achieved
by using extenders. For extended distance solutions, contact your IBM representative.
Limit: When a local fabric and a remote fabric are connected for Metro Mirror purposes,
the inter-switch link (ISL) hop count between a local node and a remote node cannot
exceed seven.
Events, such as a loss of connectivity between systems, can cause mirrored writes from the
master Volume and the auxiliary Volume to fail. In that case, Metro Mirror suspends writes to
the auxiliary Volume and enables I/O to the master Volume to continue to avoid affecting the
operation of the master Volumes.
Figure 11-88 shows how a write to the master Volume is mirrored to the cache of the auxiliary
Volume before an acknowledgment of the write is sent back to the host that issued the write.
This process ensures that the auxiliary is synchronized in real time if it is needed in a failover
situation.
However, this process also means that the application is exposed to the latency and
bandwidth limitations (if any) of the communication link between the master and auxiliary
Volumes. This process might lead to unacceptable application performance, particularly when
placed under peak load. Therefore, the use of traditional Fibre Channel Metro Mirror has
distance limitations that are based on your performance requirements. IBM Spectrum
Virtualize does not support more than 300 km (186.4 miles).
11.6.7 Metro Mirror features
The IBM Spectrum Virtualize Metro Mirror function supports the following features:
Synchronous remote copy of Volumes that are dispersed over metropolitan distances.
Metro Mirror relationships between Volume pairs, with each Volume in a pair managed by
a Storwize V7000 system or an IBM SAN Volume Controller system (requires V6.3.0 or
later).
Supports intracluster Metro Mirror where both Volumes belong to the same system (and
I/O Group).
IBM Spectrum Virtualize supports intercluster Metro Mirror where each Volume belongs to
a separate system. You can configure a specific system for partnership with another
system. All intercluster Metro Mirror processing occurs between two IBM Spectrum
Virtualize systems that are configured in a partnership.
Intercluster and intracluster Metro Mirror can be used concurrently.
IBM Spectrum Virtualize does not require that a control network or fabric is installed to
manage Metro Mirror. For intercluster Metro Mirror, the system maintains a control link
between two systems. This control link is used to control the state and coordinate updates
at either end. The control link is implemented on top of the same FC fabric connection that
the system uses for Metro Mirror I/O.
IBM Spectrum Virtualize implements a configuration model that maintains the Metro Mirror
configuration and state through major events, such as failover, recovery, and
resynchronization, to minimize user configuration action through these events.
IBM Spectrum Virtualize supports the resynchronization of changed data so that write failures
that occur on the master or auxiliary Volumes do not require a complete resynchronization of
the relationship.
Switching copy direction: The copy direction for a Metro Mirror relationship can be
switched so that the auxiliary Volume becomes the master, and the master Volume
becomes the auxiliary, which is similar to the FlashCopy restore option. However, although
the FlashCopy target Volume can operate in read/write mode, the target Volume of the
started remote copy is always in read-only mode.
While the Metro Mirror relationship is active, the auxiliary Volume is not accessible for host
application write I/O at any time. The IBM Storwize V7000 allows read-only access to the
auxiliary Volume when it contains a consistent image. Storwize allows boot time operating
system discovery to complete without an error, so that any hosts at the secondary site can be
ready to start the applications with minimum delay, if required.
For example, many operating systems must read LBA zero to configure a logical unit.
Although read access is allowed at the auxiliary in practice, the data on the auxiliary Volumes
cannot be read by a host because most operating systems write a “dirty bit” to the file system
when it is mounted. Because this write operation is not allowed on the auxiliary Volume, the
Volume cannot be mounted.
This access is provided only where consistency can be ensured. However, coherency cannot
be maintained between reads that are performed at the auxiliary and later write I/Os that are
performed at the master.
To enable access to the auxiliary Volume for host operations, you must stop the Metro Mirror
relationship by specifying the -access parameter. While access to the auxiliary Volume for
host operations is enabled, the host must be instructed to mount the Volume before the
application can be started, or instructed to perform a recovery process.
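For example, to enable host access to the auxiliary Volumes of a stand-alone relationship or of a Consistency Group (names are hypothetical):
stoprcrelationship -access REL_DB01
stoprcconsistgrp -access CG_SAP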
For example, the Metro Mirror requirement to enable the auxiliary copy for access
differentiates it from third-party mirroring software on the host, which aims to emulate a
single, reliable disk regardless of what system is accessing it. Metro Mirror retains the
property that there are two Volumes in existence, but it suppresses one Volume while the
copy is being maintained.
The use of an auxiliary copy demands a conscious policy decision by the administrator that a
failover is required, and that the tasks to be performed on the host that is involved in
establishing the operation on the auxiliary copy are substantial. The goal is to make this copy
rapid (much faster when compared to recovering from a backup copy) but not seamless.
The failover process can be automated through failover management software. The IBM
Storwize V7000 provides SNMP traps and programming (or scripting) for the CLI to enable
this automation.
11.6.10 Global Mirror overview
This section describes the Global Mirror copy service, which is an asynchronous remote copy
service. This service provides and maintains a consistent mirrored copy of a source Volume
to a target Volume.
Global Mirror establishes a Global Mirror relationship between two Volumes of equal size. The
Volumes in a Global Mirror relationship are referred to as the master (source) Volume and the
auxiliary (target) Volume, which is the same as Metro Mirror. Consistency Groups can be
used to maintain data integrity for dependent writes, which is similar to FlashCopy
Consistency Groups.
Global Mirror writes data to the auxiliary Volume asynchronously, which means that host
writes to the master Volume provide the host with confirmation that the write is complete
before the I/O completes on the auxiliary Volume.
Limit: When a local fabric and a remote fabric are connected for Global Mirror purposes,
the ISL hop count between a local node and a remote node must not exceed seven hops.
The Global Mirror function provides the same function as Metro Mirror remote copy, but over
long-distance links with higher latency without requiring the hosts to wait for the full round-trip
delay of the long-distance link.
The Global Mirror algorithms maintain a consistent image on the auxiliary. They achieve this
consistent image by identifying sets of I/Os that are active concurrently at the master,
assigning an order to those sets, and applying those sets of I/Os in the assigned order at the
secondary. As a result, Global Mirror maintains the features of Write Ordering and Read
Stability.
The multiple I/Os within a single set are applied concurrently. The process that marshals the
sequential sets of I/Os operates at the secondary system. Therefore, the process is not
subject to the latency of the long-distance link. These two elements of the protocol ensure
that the throughput of the total system can be grown by increasing system size while
maintaining consistency across a growing data set.
Global Mirror write I/O from production system to a secondary system requires serialization
and sequence-tagging before being sent across the network to a remote site (to maintain a
write-order consistent copy of data).
To avoid affecting the production site, IBM Spectrum Virtualize supports more parallelism in
processing and managing Global Mirror writes on the secondary system by using the
following methods:
Secondary system nodes store replication writes in new redundant non-volatile cache
Cache content details are shared between nodes
Cache content details are batched together to make node-to-node latency less of an issue
Nodes intelligently apply these batches in parallel as soon as possible
Nodes internally manage and optimize Global Mirror secondary write I/O processing
In a failover scenario where the secondary site must become the master source of data,
certain updates might be missing at the secondary site. Therefore, any applications that use
this data must have an external mechanism for recovering the missing updates and
reapplying them, such as a transaction log replay.
Global Mirror is supported over FC, FC over IP (FCIP), FC over Ethernet (FCoE), and native
IP connections. The maximum supported round-trip latency is 80 ms, which corresponds to
roughly 4000 km (2485 miles) between mirrored systems. Starting with IBM Spectrum
Virtualize V7.4, this limit was significantly increased for certain IBM Storwize Gen2 and IBM
SAN Volume Controller configurations. Figure 11-90 shows the current supported distances
for Global Mirror remote copy.
If multiple writes are allowed to be applied to the master for a sector, only the most recent
write gets the correct data during reconstruction. If reconstruction is interrupted for any
reason, the intermediate state of the auxiliary is inconsistent. Applications that deliver such
write activity do not achieve the performance that Global Mirror is intended to support. A
Volume statistic is maintained about the frequency of these collisions.
An attempt is made to allow multiple writes to a single location to be outstanding in the Global
Mirror algorithm. There is still a need for master writes to be serialized, and the intermediate
states of the master data must be kept in a non-volatile journal while the writes are
outstanding to maintain the correct write ordering during reconstruction. Reconstruction must
never overwrite data on the auxiliary with an earlier version. The Volume statistic that is
monitoring colliding writes is now limited to those writes that are not affected by this change.
The following numbers correspond to the numbers that are shown in Figure 11-91:
(1) The first write is performed from the host to LBA X.
(2) The host is provided acknowledgment that the write completed even though the
mirrored write to the auxiliary Volume is not yet complete.
(1’) and (2’) occur asynchronously with the first write.
(3) The second write is performed from the host also to LBA X. If this write occurs before
(2’), the write is written to the journal file.
(4) The host is provided acknowledgment that the second write is complete.
Delay simulation
An optional feature for Global Mirror enables a delay simulation to be applied on writes that
are sent to auxiliary Volumes. This feature enables you to perform testing that detects
colliding writes. Therefore, you can use this feature to test an application before the full
deployment of the feature. The feature can be enabled separately for each of the intracluster
or intercluster Global Mirrors.
You specify the delay setting by using the chsystem command and view the delay by using the
lssystem command. The gm_intra_cluster_delay_simulation field expresses the amount of
time that intracluster auxiliary I/Os are delayed. The gm_inter_cluster_delay_simulation
field expresses the amount of time that intercluster auxiliary I/Os are delayed. A value of zero
disables the feature.
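A sketch of setting and then disabling the intercluster delay simulation follows (50 is an example value; zero disables the simulation, as noted above). The chsystem parameter names shown correspond to the lssystem fields described above as we recall them, so verify them in the CLI reference:
chsystem -gminterdelaysimulation 50
chsystem -gminterdelaysimulation 0
lssystem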
Tip: If you are experiencing repeated problems with the delay on your link, make sure that
the delay simulator was properly disabled.
Global Mirror has functionality that is designed to address the following conditions, which
might negatively affect certain Global Mirror implementations:
The estimation of the bandwidth requirements tends to be complex.
Ensuring that the latency and bandwidth requirements can be met is often difficult.
Congested hosts on the source or target site can cause disruption.
Congested network links can cause disruption with only intermittent peaks.
To address these issues, Change Volumes were added as an option for Global Mirror
relationships. Change Volumes use the FlashCopy functionality, but they cannot be
manipulated as FlashCopy Volumes because they are for a special purpose only. Change
Volumes replicate point-in-time images on a cycling period. The default is 300 seconds.
The change rate that must be replicated reflects only the state of the data at the point in time
that the image was taken, rather than all the updates that were made during the period. The
use of this function can provide significant reductions in the volume of replicated data.
With Change Volumes, a FlashCopy mapping exists between the primary Volume and the
primary Change Volume. The mapping is updated on the cycling period (60 seconds to one
day). The primary Change Volume is then replicated to the secondary Global Mirror Volume
at the target site, which is then captured in another Change Volume on the target site. This
approach provides an always consistent image at the target site and protects your data from
being inconsistent during resynchronization.
For more information about IBM FlashCopy, see 11.1, “IBM FlashCopy” on page 462.
You can adjust the cycling period by using the chrcrelationship -cycleperiodseconds
<60 - 86400> command from the CLI. If a copy does not complete in the cycle period, the
next cycle does not start until the prior cycle completes. For this reason, the use of Change
Volumes gives you the following possibilities for RPO:
If your replication completes in the cycling period, your RPO is twice the cycling period.
If your replication does not complete within the cycling period, RPO is twice the completion
time. The next cycling period starts immediately after the prior cycling period is finished.
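For example, to change the cycling period of an existing Global Mirror with Change Volumes relationship to 10 minutes (the relationship name is hypothetical):
chrcrelationship -cycleperiodseconds 600 REL_DB01
The -cyclingmode multi parameter of mkrcrelationship and chrcrelationship, as we recall it, is what places a relationship into the Change Volumes (multi-cycling) mode in the first place; verify the exact parameter name in the CLI reference for your code level.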
Carefully consider your business requirements versus the performance of Global Mirror with
Change Volumes. Global Mirror with Change Volumes increases the intercluster traffic for
more frequent cycling periods. Therefore, selecting the shortest cycle periods possible is not
always the answer. In most cases, the default must meet requirements and perform well.
Important: When you create your Global Mirror Volumes with Change Volumes, make
sure that you remember to select the Change Volume on the auxiliary (target) site. Failure
to do so leaves you exposed during a resynchronization operation.
If this preferred practice is not maintained, such as if source Volumes are assigned to only
one node in the I/O group, you can change the preferred node for each Volume to distribute
Volumes evenly between the nodes. You can also change the preferred node for Volumes that
are in a remote copy relationship without affecting the host I/O to a particular Volume.
The remote copy relationship type does not matter. The remote copy relationship type can be
MM, GM, or GM with Change Volumes. You can change the preferred node both to the source
and target Volumes that are participating in the remote copy relationship.
Background copy I/O is scheduled to avoid bursts of activity that could have an adverse effect
on system behavior. An entire grain of tracks on one Volume is processed at around the same
time, but not as a single I/O. Double buffering is used to try to use sequential performance
within a grain. However, the next grain within the Volume might not be scheduled for some
time. Multiple grains might be copied simultaneously, and might be enough to satisfy the
requested rate, unless the available resources cannot sustain the requested rate.
Global Mirror paces the rate at which background copy is performed by the appropriate
relationships. Background copy occurs on relationships that are in the InconsistentCopying
state with a status of Online.
The quota of background copy (configured on the intercluster link) is divided evenly between
all nodes that are performing background copy for one of the eligible relationships. This
allocation is made irrespective of the number of disks for which the node is responsible. Each
node in turn divides its allocation evenly between the multiple relationships that are
performing a background copy.
The default value of the background copy is 25 megabytes per second (MBps), per Volume.
If the auxiliary Volume is thin-provisioned and the region is deallocated, the special buffer
prevents a write and, therefore, an allocation. If the auxiliary Volume is not thin-provisioned or
the region in question is an allocated region of a thin-provisioned Volume, a buffer of “real”
zeros is synthesized on the auxiliary and written as normal.
With this technique, do not allow I/O on the master or auxiliary before the relationship is
established. Then, the administrator must run the following commands:
Run mkrcrelationship with the -sync flag.
Run startrcrelationship without the -clean flag.
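A minimal sketch of this sequence with hypothetical names (the -sync flag marks the relationship as already synchronized when it is created):
mkrcrelationship -master MASTER_VOL -aux AUX_VOL -cluster REMOTE_SYS -name REL_DB01 -sync
startrcrelationship REL_DB01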
Important: Failure to perform these steps correctly can cause MM/GM to report the
relationship as consistent when it is not, possibly causing a data loss or data integrity
exposure for hosts accessing data on the auxiliary Volume.
11.6.18 Practical use of Global Mirror
The practical use of Global Mirror is similar to the Metro Mirror described in 11.6.9, “Practical
use of Metro Mirror” on page 554. The main difference between the two remote copy modes
is that Global Mirror and Global Mirror with Change Volumes are mostly used on much larger
distances than Metro Mirror. Weak link quality or insufficient bandwidth between the primary
and secondary sites can also be a reason to prefer asynchronous Global Mirror over
synchronous Metro Mirror. Otherwise, the use cases for Metro Mirror and Global Mirror are
the same.
You can create a HyperSwap topology system configuration where each I/O group in the
system is physically on a different site. These configurations can be used to maintain access
to data on the system when power failures or site-wide outages occur.
For more information about HyperSwap implementation and best practices, see IBM Storwize
V7000, Spectrum Virtualize, HyperSwap, and VMware Implementation, SG24-8317.
Since V7.8, it is possible to create a FlashCopy mapping (Change Volume) for a remote copy
target Volume to maintain a consistent image of the secondary Volume. The system
recognizes this as Consistency Protection, and a link failure or an offline secondary Volume
event is then handled differently.
When Consistency Protection is configured, the relationship between the primary and
secondary volumes does not stop if the link goes down or the secondary volume is offline.
The relationship does not go into the Consistent stopped status. Instead, the system uses
the secondary change volume to automatically copy the previous consistent state of the
secondary volume. The relationship automatically moves to the Consistent copying status as
the system resynchronizes and protects the consistency of the data. The relationship status
changes to Consistent synchronized when the resynchronization process completes. The
relationship automatically resumes replication after the temporary loss of connectivity.
Change Volumes used for Consistency Protection are not visible and manageable from the
GUI because they are used for Consistency Protection internal behavior only.
The option to add consistency protection is selected by default when Metro Mirror or Global
Mirror relationships are created. The option must be cleared to create Metro Mirror or Global
Mirror relationships without consistency protection.
Total Volume size per I/O Group: There is a per I/O Group limit of 1024 terabytes (TB) on
the quantity of master and auxiliary Volume address spaces that can participate in Metro
Mirror and Global Mirror relationships. This maximum configuration uses all 512 MiB of
bitmap space for the I/O Group and allows 10 MiB of space for all remaining copy services
features.
11.6.23 Remote Copy states and events
This section describes the various states of a MM/GM relationship and the conditions that
cause them to change. In Figure 11-94, the MM/GM relationship diagram shows an overview
of the status that can apply to a MM/GM relationship in a connected state.
When the MM/GM relationship is created, you can specify whether the auxiliary Volume is
already in sync with the master Volume, and the background copy process is then skipped.
This capability is useful when MM/GM relationships are established for Volumes that were
created with the format option.
Stop on Error
When a MM/GM relationship is stopped (intentionally, or because of an error), the state
changes. For example, the MM/GM relationships in the ConsistentSynchronized state enter
the ConsistentStopped state, and the MM/GM relationships in the InconsistentCopying state
enter the InconsistentStopped state.
If the connection is broken between the two systems that are in a partnership, all (intercluster)
MM/GM relationships enter a Disconnected state. For more information, see “Connected
versus disconnected” on page 566.
State overview
The following sections provide an overview of the various MM/GM states.
When the two systems can communicate, the systems and the relationships that span them
are described as connected. When they cannot communicate, the systems and the
relationships that span them are described as disconnected.
In this state, both systems are left with fragmented relationships and are limited regarding the
configuration commands that can be performed. The disconnected relationships are
portrayed as having a changed state. The new states describe what is known about the
relationship and the configuration commands that are permitted.
When the systems can communicate again, the relationships are reconnected. MM/GM
automatically reconciles the two state fragments, considering any configuration or other event
that occurred while the relationship was disconnected. As a result, the relationship can return
to the state that it was in when it became disconnected, or it can enter a new state.
Relationships that are configured between Volumes in the same IBM Storwize V7000 system
(intracluster) are never described as being in a disconnected state.
An auxiliary Volume is described as consistent if it contains data that could have been read by
a host system from the master if power had failed at an imaginary point while I/O was in
progress, and power was later restored. This imaginary point is defined as the recovery point.
The requirements for consistency are expressed regarding activity at the master up to the
recovery point. The auxiliary Volume contains the data from all of the writes to the master for
which the host received successful completion and that data was not overwritten by a
subsequent write (before the recovery point).
Consider writes for which the host did not receive a successful completion (that is, it received
bad completion or no completion at all). If the host then performed a read from the master of
that data that returned successful completion and no later write was sent (before the recovery
point), the auxiliary contains the same data as the data that was returned by the read from
the master.
From the point of view of an application, consistency means that an auxiliary Volume contains
the same data as the master Volume at the recovery point (the time at which the imaginary
power failure occurred). If an application is designed to cope with an unexpected power
failure, this assurance of consistency means that the application can use the auxiliary and
begin operation as though it was restarted after the hypothetical power failure. Again,
maintaining the application write ordering is the key property of consistency.
For more information about dependent writes, see 11.1.13, “FlashCopy and image mode
Volumes” on page 486.
Because of the risk of data corruption, and in particular undetected data corruption, MM/GM
strongly enforces the concept of consistency and prohibits access to inconsistent data.
When you are deciding how to use Consistency Groups, the administrator must consider the
scope of an application’s data and consider all of the interdependent systems that
communicate and exchange information.
If two programs or systems communicate and store details as a result of the information that
is exchanged, either of the following actions might occur:
All of the data that is accessed by the group of systems must be placed into a single
Consistency Group.
The systems must be recovered independently (each within its own Consistency Group).
Then, each system must perform recovery with the other applications to become
consistent with them.
Consistency does not mean that the data is up-to-date. A copy can be consistent and yet
contain data that was frozen at a point in the past. Write I/O might continue to a master but
not be copied to the auxiliary. This state arises when it becomes impossible to keep data
up-to-date and maintain consistency. An example is a loss of communication between
systems when you are writing to the auxiliary.
When communication is lost for an extended period and Consistency Protection was not
enabled, MM/GM tracks the changes that occurred on the master, but not the order or the
details of such changes (write data). When communication is restored, it is impossible to
synchronize the auxiliary without sending write data to the auxiliary out of order. Therefore,
consistency is lost.
Note: MM/GM relationships with Consistency Protection enabled use a point-in-time copy
mechanism (FlashCopy) to keep a consistent copy of the auxiliary. The relationships stay
in a consistent state, although not synchronized, even if communication is lost. For details
about Consistency Protection, see 11.6.20, “Consistency Protection for GM/MM” on
page 563.
Detailed states
The following sections describe the states that are portrayed to the user for either
Consistency Groups or relationships. They also describe information that is available in each
state. The major states are designed to provide guidance about the available configuration
commands.
InconsistentStopped
InconsistentStopped is a connected state. In this state, the master is accessible for read and
write I/O, but the auxiliary is not accessible for read or write I/O. A copy process must be
started to make the auxiliary consistent. This state is entered when the relationship or
Consistency Group was InconsistentCopying and suffered a persistent error or received a
stop command that caused the copy process to stop.
A start command causes the relationship or Consistency Group to move to the
InconsistentCopying state. A stop command is accepted, but has no effect.
If the relationship or Consistency Group becomes disconnected, the auxiliary side makes the
transition to InconsistentDisconnected. The master side changes to IdlingDisconnected.
InconsistentCopying
InconsistentCopying is a connected state. In this state, the master is accessible for read and
write I/O, but the auxiliary is not accessible for read or write I/O. This state is entered after a
start command is issued to an InconsistentStopped relationship or a Consistency Group.
A persistent error or stop command places the relationship or Consistency Group into an
InconsistentStopped state. A start command is accepted but has no effect.
If the relationship or Consistency Group becomes disconnected, the auxiliary side changes to
InconsistentDisconnected. The master side changes to IdlingDisconnected.
ConsistentStopped
ConsistentStopped is a connected state. In this state, the auxiliary contains a consistent
image, but it might be out-of-date in relation to the master. This state can arise when a
relationship was in a ConsistentSynchronized state and experienced an error that forces a
Consistency Freeze. It can also arise when a relationship is created with a
CreateConsistentFlag set to TRUE.
Normally, write activity that follows an I/O error causes updates to the master, and the
auxiliary is no longer synchronized. In this case, consistency must be given up for a period to
reestablish synchronization. You must use a start command with the -force option to
acknowledge this condition, and the relationship or Consistency Group changes to
InconsistentCopying. Enter this command only after all outstanding events are repaired.
In the unusual case where the master and the auxiliary are still synchronized (perhaps
following a user stop, and no further write I/O was received), a start command takes the
relationship to ConsistentSynchronized. No -force option is required. Also, in this case, you
can enter a switch command that moves the relationship or Consistency Group to
ConsistentSynchronized and reverses the roles of the master and the auxiliary.
A stop command takes the relationship to the ConsistentStopped state. A stop command
with the -access parameter takes the relationship to the Idling state.
If the relationship or Consistency Group becomes disconnected, the auxiliary side changes to
ConsistentDisconnected and the master side changes to IdlingDisconnected.
Idling
Idling is a connected state. Both master and auxiliary Volumes operate in the master role.
Therefore, both master and auxiliary Volumes are accessible for write I/O.
In this state, the relationship or Consistency Group accepts a start command. MM/GM
maintains a record of regions on each disk that received write I/O while they were idling. This
record is used to determine what areas must be copied following a start command.
The start command must specify the new copy direction. A start command can cause a
loss of consistency if either Volume in any relationship received write I/O, which is indicated
by the Synchronized status. If the start command leads to loss of consistency, you must
specify the -force parameter.
Also, the relationship or Consistency Group accepts a -clean option on the start command
while in this state. If the relationship or Consistency Group becomes disconnected, both sides
change their state to IdlingDisconnected.
IdlingDisconnected
IdlingDisconnected is a disconnected state. The target Volumes in this half of the
relationship or Consistency Group are all in the master role and accept read or write I/O.
The priority in this state is to recover the link to restore the relationship or consistency.
No configuration activity is possible (except for deletes or stops) until the relationship
becomes connected again. At that point, the relationship changes to a connected state. The
exact connected state that is entered depends on the state of the other half of the relationship
or Consistency Group, which depends on the following factors:
The state when it became disconnected
The write activity since it was disconnected
The configuration activity since it was disconnected
While IdlingDisconnected, if a write I/O is received that causes the loss of synchronization
(synchronized attribute transitions from true to false) and the relationship was not already
stopped (either through a user stop or a persistent error), an event is raised to notify you of
the condition. This same event also is raised when this condition occurs for the
ConsistentSynchronized state.
InconsistentDisconnected
InconsistentDisconnected is a disconnected state. The target Volumes in this half of the
relationship or Consistency Group are all in the auxiliary role, and do not accept read or write
I/O. Except for deletes, no configuration activity is permitted until the relationship becomes
connected again.
When the relationship or Consistency Group becomes connected again, the relationship
becomes InconsistentCopying automatically unless either of the following conditions is
true:
The relationship was InconsistentStopped when it became disconnected.
The user issued a stop command while disconnected.
ConsistentDisconnected
ConsistentDisconnected is a disconnected state. The target Volumes in this half of the
relationship or Consistency Group are all in the auxiliary role, and accept read I/O but not
write I/O.
In this state, the relationship or Consistency Group displays an attribute of FreezeTime, which
is the point when Consistency was frozen. When it is entered from ConsistentStopped, it
retains the time that it had in that state. When it is entered from ConsistentSynchronized, the
FreezeTime shows the last time at which the relationship or Consistency Group was known to
be consistent. This time corresponds to the time of the last successful heartbeat to the
other system.
A stop command with the -access flag set to true transitions the relationship or Consistency
Group to the IdlingDisconnected state. This state allows write I/O to be performed to the
auxiliary Volume and is used as part of a DR scenario.
When the relationship or Consistency Group becomes connected again, the relationship or
Consistency Group becomes ConsistentSynchronized only if this action does not lead to a
loss of consistency. The following conditions must be true:
The relationship was ConsistentSynchronized when it became disconnected.
No writes received successful completion at the master while disconnected.
Empty
This state applies only to Consistency Groups. It is the state of a Consistency Group that has
no relationships and no other state information to show.
It is entered when a Consistency Group is first created. It is exited when the first relationship
is added to the Consistency Group, at which point the state of the relationship becomes the
state of the Consistency Group.
Following these steps, the remote host server is mapped to the auxiliary Volume and the disk
is available for I/O.
For more information about MM/GM commands, see IBM System Storage SAN Volume
Controller and IBM Storwize V7000 Command-Line Interface User’s Guide, GC27-2287.
The command set for MM/GM contains the following broad groups:
Commands to create, delete, and manipulate relationships and Consistency Groups
Commands to cause state changes
If a configuration command affects more than one system, MM/GM coordinates configuration
activity between the systems. Certain configuration commands can be performed only when
the systems are connected, and fail with no effect when they are disconnected.
Other configuration commands are permitted even though the systems are disconnected. The
state is reconciled automatically by MM/GM when the systems become connected again.
For any command (with one exception), a single system receives the command from the
administrator. This design is significant for defining the context for a CreateRelationship
(mkrcrelationship) or CreateConsistencyGroup (mkrcconsistgrp) command. In this case, the
system that is receiving the command is called the local system.
The exception is a command that sets systems into a MM/GM partnership. The
mkfcpartnership and mkippartnership commands must be issued on both the local and
remote systems.
The commands in this section are described as an abstract command set, and are
implemented by either of the following methods:
CLI can be used for scripting and automation.
GUI can be used for one-off tasks.
11.7.2 Listing available system partners
Use the lspartnershipcandidate command to list the systems that are available for setting
up a two-system partnership. This command is a prerequisite for creating MM/GM
relationships.
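For illustration, the command can be run with no parameters; the output, which varies by code
level and configuration, lists the candidate systems that are visible to the local system:
lspartnershipcandidate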
Important: Do not set this value higher than the default without first establishing that
the higher bandwidth can be sustained without affecting the host’s performance. The
limit must never be higher than the maximum that is supported by the infrastructure
connecting the remote sites, regardless of the compression rates that you might
achieve.
-gmlinktolerance link_tolerance
This parameter specifies the maximum period that the system tolerates delay before
stopping Global Mirror relationships. Specify values of 60 - 86,400 seconds in increments
of 10 seconds. The default value is 300. Do not change this value except under the
direction of IBM Support.
-gmmaxhostdelay max_host_delay
This parameter specifies the maximum time delay, in milliseconds, at which the Global
Mirror link tolerance timer starts counting down. This threshold value determines the
additional effect that Global Mirror operations can add to the response times of the Global
Mirror source Volumes. You can use this parameter to increase the threshold from the
default value of 5 milliseconds.
-gminterdelaysimulation link_tolerance
This parameter specifies the number of milliseconds that I/O activity (intercluster copying
to an auxiliary Volume) is delayed. This parameter enables you to test performance
implications before Global Mirror is deployed and a long-distance link is obtained. Specify
a value of 0 - 100 milliseconds in 1-millisecond increments. The default value is 0. Use this
argument to test each intercluster Global Mirror relationship separately.
Use the chsystem command to adjust these values, as shown in the following example:
chsystem -gmlinktolerance 300
You can view all of these parameter values by using the lssystem <system_name> command.
Focus on the gmlinktolerance parameter in particular. If poor response extends past the
specified tolerance, a 1920 event is logged and one or more GM relationships automatically
stop to protect the application hosts at the primary site. During normal operations, application
hosts experience a minimal effect from the response times because the GM feature uses
asynchronous replication.
However, if GM operations experience degraded response times from the secondary system
for an extended period, I/O operations begin to queue at the primary system. This queue
results in an extended response time to application hosts. In this situation, the
gmlinktolerance feature stops GM relationships, and the application host’s response time
returns to normal.
After a 1920 event occurs, the GM auxiliary Volumes are no longer in the
consistent_synchronized state. Fix the cause of the event and restart your GM relationships.
For this reason, ensure that you monitor the system to track when these 1920 events occur.
You can disable the gmlinktolerance feature by setting the gmlinktolerance value to 0
(zero). However, the gmlinktolerance feature cannot protect applications from extended
response times if it is disabled. It might be appropriate to disable the gmlinktolerance feature
under the following circumstances:
During SAN maintenance windows in which degraded performance is expected from SAN
components, and application hosts can withstand extended response times from GM
Volumes.
During periods when application hosts can tolerate extended response times and it is
expected that the gmlinktolerance feature might stop the GM relationships. For example,
if you test by using an I/O generator that is configured to stress the back-end storage, the
gmlinktolerance feature might detect the high latency and stop the GM relationships.
Disabling the gmlinktolerance feature prevents this result at the risk of exposing the test
host to extended response times.
A 1920 event indicates that one or more of the SAN components cannot provide the
performance that is required by the application hosts. This situation can be temporary (for
example, a result of a maintenance activity) or permanent (for example, a result of a hardware
failure or an unexpected host I/O workload).
If 1920 events are occurring, you might need to use a performance monitoring and analysis
tool, such as IBM Spectrum Control, to help identify and resolve the problem.
To establish a fully functional MM/GM partnership, you must issue the mkfcpartnership (or
mkippartnership) command on both systems. This step is a prerequisite for creating MM/GM
relationships between Volumes on the IBM Spectrum Virtualize systems.
When creating the partnership, you must specify the Link Bandwidth and can specify the
Background Copy Rate:
The Link Bandwidth, expressed in Mbps (megabits per second), is the amount of
bandwidth that can be used for the FC or IP connection between the systems within the
partnership.
The Background Copy Rate is the maximum percentage of the Link Bandwidth that can
be used for background copy operations. The default value is 50%.
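For illustration, a Fibre Channel partnership with a 1024 Mbps link bandwidth and the default
50% background copy rate might be created with a command of the following form (the partner
system name ITSO_SVC_B and the values are examples only, and the exact parameters can vary
by code level). The same command is then repeated on the partner system, pointing back at the
local system:
mkfcpartnership -linkbandwidthmbits 1024 -backgroundcopyrate 50 ITSO_SVC_B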
The background copy bandwidth determines the rate at which the background copy is
attempted for MM/GM. The background copy bandwidth can affect foreground I/O latency in
one of the following ways:
The following results can occur if the background copy bandwidth is set too high compared
to the MM/GM intercluster link capacity:
– The background copy I/Os can back up on the MM/GM intercluster link.
– There is a delay in the synchronous auxiliary writes of foreground I/Os.
– The foreground I/O latency increases as perceived by applications.
If the background copy bandwidth is set too high for the storage at the primary site,
background copy read I/Os overload the primary storage and delay foreground I/Os.
If the background copy bandwidth is set too high for the storage at the secondary site,
background copy writes at the secondary site overload the auxiliary storage, and again
delay the synchronous secondary writes of foreground I/Os.
To set the background copy bandwidth optimally, ensure that you consider all three resources:
primary storage, intercluster link bandwidth, and auxiliary storage. Ensure that the most
restrictive of these three resources can sustain the background copy bandwidth in addition to
the peak foreground I/O workload.
The MM/GM consistency group name must be unique across all consistency groups that are
known to the systems owning this consistency group. If the consistency group involves two
systems, the systems must be in communication throughout the creation process.
The new consistency group does not contain any relationships and is in the Empty state. You
can add MM/GM relationships to the group (upon creation or afterward) by using the
chrcrelationship command.
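For illustration, a consistency group that spans the local system and a remote system might be
created with a command of the following form (the names are examples only):
mkrcconsistgrp -cluster ITSO_SVC_B -name CG_W2K12
Relationships can later be moved into the group by using the -consistgrp parameter of the
chrcrelationship command.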
Optional parameter: If you do not use the -global optional parameter, a Metro Mirror
relationship is created rather than a Global Mirror relationship.
The auxiliary Volume must be equal in size to the master Volume or the command fails. If both
Volumes are in the same system, they must be in the same I/O Group. The master and
auxiliary Volume cannot be in an existing relationship, and they cannot be the target of a
FlashCopy mapping. This command returns the new relationship (relationship_id) when
successful.
When the MM/GM relationship is created, you can add it to an existing Consistency Group, or
it can be a stand-alone MM/GM relationship if no Consistency Group is specified.
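For illustration, a stand-alone Metro Mirror relationship and a Global Mirror relationship that
is added to a consistency group might be created with commands of the following form (the
Volume, group, and system names are examples only):
mkrcrelationship -master MM_Vol_1 -aux MM_Vol_1_aux -cluster ITSO_SVC_B -name MM_Rel_1
mkrcrelationship -master GM_Vol_1 -aux GM_Vol_1_aux -cluster ITSO_SVC_B -consistgrp CG_W2K12 -global -name GM_Rel_1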
When the command is issued, you can specify the master Volume name and auxiliary system
to list the candidates that comply with the prerequisites to create a MM/GM relationship. If the
command is issued with no parameters, all of the Volumes that are not disallowed by another
configuration state, such as being a FlashCopy target, are listed.
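For illustration, running the command with no parameters lists all eligible Volumes; the
filtering parameters that are described above can be added to narrow the output to a specific
master Volume and auxiliary system:
lsrcrelationshipcandidate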
Adding an MM/GM relationship: When an MM/GM relationship is added to a
Consistency Group that is not empty, the relationship must have the same state and copy
direction as the group to be added to it.
When the command is issued, you can set the copy direction if it is undefined. Optionally, you
can mark the auxiliary Volume of the relationship as clean. The command fails if it is used as
an attempt to start a relationship that is already a part of a consistency group.
You can issue this command only to a relationship that is connected. For a relationship that is
idling, this command assigns a copy direction (master and auxiliary roles) and begins the
copy process. Otherwise, this command restarts a previous copy process that was stopped
by a stop command or by an I/O error.
If the resumption of the copy process leads to a period when the relationship is inconsistent,
you must specify the -force parameter when the relationship is restarted. This situation can
arise if, for example, the relationship was stopped and then further writes were performed on
the original master of the relationship.
The use of the -force parameter here is a reminder that the data on the auxiliary becomes
inconsistent while resynchronization (background copying) takes place. Therefore, this data is
unusable for DR purposes before the background copy completes.
In the Idling state, you must specify the master Volume to indicate the copy direction. In
other connected states, you can provide the -primary argument, but it must match the
existing setting.
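For illustration, a relationship that is known to be inconsistent might be started from the
master with a command of the following form (the relationship name is an example only). Omit
the -force parameter when the start does not cause a loss of consistency:
startrcrelationship -primary master -force MM_Rel_1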
If the relationship is in an inconsistent state, any copy operation stops and does not resume
until you issue a startrcrelationship command. Write activity is no longer copied from the
master to the auxiliary Volume. For a relationship in the ConsistentSynchronized state, this
command causes a Consistency Freeze.
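For illustration, a relationship might be stopped, optionally enabling host access to the
auxiliary Volume, with commands of the following form (the relationship name is an example
only):
stoprcrelationship MM_Rel_1
stoprcrelationship -access MM_Rel_1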
For a consistency group that is idling, this command assigns a copy direction (master and
auxiliary roles) and begins the copy process. Otherwise, this command restarts a previous
copy process that was stopped by a stop command or by an I/O error.
If the consistency group is in an inconsistent state, any copy operation stops and does not
resume until you issue the startrcconsistgrp command. Write activity is no longer copied
from the master to the auxiliary Volumes that belong to the relationships in the group. For a
consistency group in the ConsistentSynchronized state, this command causes a Consistency
Freeze.
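For illustration, the equivalent consistency group operations use commands of the following
form (the group name is an example only; use the -force and -access parameters with the same
considerations as for stand-alone relationships):
startrcconsistgrp -primary master CG_W2K12
stoprcconsistgrp -access CG_W2K12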
If the relationship is disconnected at the time that the command is issued, the relationship is
deleted only on the system on which the command is being run. When the systems
reconnect, the relationship is automatically deleted on the other system.
Alternatively, if the systems are disconnected and you still want to remove the relationship on
both systems, you can issue the rmrcrelationship command independently on both of the
systems.
A relationship cannot be deleted if it is part of a consistency group. You must first remove the
relationship from the consistency group.
If you delete an inconsistent relationship, the auxiliary Volume becomes accessible even
though it is still inconsistent. This situation is the one case in which MM/GM does not inhibit
access to inconsistent data.
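For illustration, a stand-alone relationship might be deleted with a command of the following
form (the relationship name is an example only):
rmrcrelationship MM_Rel_1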
If the consistency group is disconnected at the time that the command is issued, the
consistency group is deleted only on the system on which the command is being run. When
the systems reconnect, the consistency group is automatically deleted on the other system.
Alternatively, if the systems are disconnected and you still want to remove the consistency
group on both systems, you can issue the rmrcconsistgrp command separately on both of
the systems.
If the consistency group is not empty, the relationships within it are removed from the
consistency group before the group is deleted. These relationships then become stand-alone
relationships. The state of these relationships is not changed by the action of removing them
from the consistency group.
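For illustration, a consistency group might be deleted with a command of the following form
(the group name is an example only; the -force parameter can be required if the group is not
empty):
rmrcconsistgrp -force CG_W2K12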
Important: Remember that by reversing the roles, your current source Volumes become
targets, and target Volumes become source Volumes. Therefore, you lose write access to
your current primary Volumes.
Demonstration: The IBM Client Demonstration Center shows how data is replicated using
Global Mirror with Change Volumes (cycling mode set to multiple). This configuration
perfectly fits the new IP replication functionality because it is well designed for links with
high latency, low bandwidth, or both:
https://ibm.biz/Bdjhzs
The Bridgeworks SANSlide technology that is integrated into the native IP replication function
can improve remote mirroring network bandwidth usage up to three times. Improved bandwidth
usage can enable clients to deploy a less costly network infrastructure, or speed up remote
replication cycles to enhance disaster recovery effectiveness.
With an Ethernet network data flow, the data transfer can slow down over time. This condition
occurs because of the latency that is caused by waiting for the acknowledgment of each set of
packets that is sent. The next packet set cannot be sent until the previous packet is
acknowledged, as shown in Figure 11-95.
However, by using the embedded IP replication, this behavior can be eliminated with the
enhanced parallelism of the data flow by using multiple virtual connections (VC) that share IP
links and addresses. The artificial intelligence engine can dynamically adjust the number of
VCs, receive window size, and packet size to maintain optimum performance. While the
engine is waiting for one VC’s ACK, it sends more packets across other VCs. If packets are
lost from any VC, data is automatically retransmitted, as shown in Figure 11-96.
Figure 11-96 Optimized network data flow by using Bridgeworks SANSlide technology
For more information about this technology, see IBM Storwize V7000 and SANSlide
Implementation, REDP-5023.
With native IP partnership, the following Copy Services features are supported:
Metro Mirror
Referred to as synchronous replication, MM provides a consistent copy of a source
Volume on a target Volume. Data is written to the target Volume synchronously after it is
written to the source Volume so that the copy is continuously updated.
Global Mirror and GM with Change Volumes
Referred to as asynchronous replication, GM provides a consistent copy of a source
Volume on a target Volume. Data is written to the target Volume asynchronously so that
the copy is continuously updated. However, the copy might not contain the last few
updates if a DR operation is performed. An added extension to GM is GM with Change
Volumes. GM with Change Volumes is the preferred method for use with native IP
replication.
Note: For IP partnerships, generally use the Global Mirror with Change Volumes
method of copying (asynchronous copy of changed grains only). This method can have
performance benefits. Also, Global Mirror and Metro Mirror might be more susceptible
to the loss of synchronization.
Note: A physical link is the physical IP link between the two sites: A (local) and B
(remote). Multiple IP addresses on local system A could be connected (by Ethernet
switches) to this physical link. Similarly, multiple IP addresses on remote system B
could be connected (by Ethernet switches) to the same physical link. At any point in
time, only a single IP address on cluster A can form an RC data session with an IP
address on cluster B.
The maximum throughput is restricted based on the use of 1 Gbps or 10 Gbps Ethernet
ports. It varies based on distance (for example, round-trip latency) and quality of
communication link (for example, packet loss):
– One 1 Gbps port can transfer up to 110 MBps unidirectional, 190 MBps bidirectional
– Two 1 Gbps ports can transfer up to 220 MBps unidirectional, 325 MBps bidirectional
– One 10 Gbps port can transfer up to 240 MBps unidirectional, 350 MBps bidirectional
– Two 10 Gbps ports can transfer up to 440 MBps unidirectional, 600 MBps bidirectional
The minimum supported link bandwidth is 10 Mbps. However, this requirement scales up
with the amount of host I/O that you choose to do. Figure 11-97 describes the scaling of
host I/O.
The equation that can describe the approximate minimum bandwidth that is required
between two systems with < 5 ms round-trip time and errorless link follows:
Minimum intersite link bandwidth in Mbps > Required Background Copy in Mbps +
Maximum Host I/O in Mbps + 1 Mbps heartbeat traffic
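For example, using this formula with illustrative figures only: if the required background copy
rate is 40 Mbps and the peak host write workload to the mirrored Volumes is 20 Mbps, the
inter-site link must provide more than 40 + 20 + 1 = 61 Mbps.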
Increasing latency and errors results in a higher requirement for minimum bandwidth.
Note: The Bandwidth setting definition when the IP partnerships are created changed
in V7.7. Previously, the bandwidth setting defaulted to 50 MiB, and was the maximum
transfer rate from the primary site to the secondary site for initial sync/resyncs of
Volumes.
The Link Bandwidth setting is now configured by using megabits (Mb) not MB. You set
the Link Bandwidth setting to a value that the communication link can sustain, or to
what is allocated for replication. The Background Copy Rate setting is now a
percentage of the Link Bandwidth. The Background Copy Rate setting determines the
available bandwidth for the initial sync and resyncs or for GM with Change Volumes.
Data compression is supported for IPv4 or IPv6 partnerships. To enable data compression,
both systems in an IP partnership must be running a software level that supports IP
partnership compression (V7.7 or later).
Although IP compression uses the same RtC algorithm that is used for Volumes, an RtC license
is not required on either the local or the remote system.
During the VLAN configuration for each IP address, the VLAN settings for the local and
failover ports on two nodes of an I/O Group can differ. To avoid any service disruption,
switches must be configured so that the failover VLANs are configured on the local switch
ports and the failover of IP addresses from a failing node to a surviving node succeeds. If
failover VLANs are not configured on the local switch ports, there are no paths to the IBM
Spectrum Virtualize system nodes during a node failure and the replication fails.
Consider the following requirements and procedures when implementing VLAN tagging:
VLAN tagging is supported for IP partnership traffic between two systems.
VLAN provides network traffic separation at the layer 2 level for Ethernet transport.
VLAN tagging by default is disabled for any IP address of a node port. You can use the CLI
or GUI to optionally set the VLAN ID for port IPs on both systems in the IP partnership.
When a VLAN ID is configured for the port IP addresses that are used in remote copy port
groups, appropriate VLAN settings on the Ethernet network must also be configured to
prevent connectivity issues.
Setting VLAN tags for a port is disruptive. Therefore, VLAN tagging requires that you stop the
partnership first before you configure VLAN tags. Restart the partnership after the
configuration is complete.
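For illustration, after the partnership is stopped, a VLAN ID might be set together with the
other port IP settings by using a command of the following form (the node name, addresses,
VLAN ID, and port ID are examples only, and the available parameters can vary by code level):
cfgportip -node node1 -ip 10.41.1.11 -mask 255.255.255.0 -gw 10.41.1.1 -vlan 100 -remotecopy 1 1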
The following terms describe the IP partnership function:

Remote copy group or remote copy port group: The following numbers group a set of IP addresses
that are connected to the same physical link. Therefore, only IP addresses that are part of the
same remote copy group can form remote copy connections with the partner system:
0 – Ports that are not configured for remote copy
1 – Ports that belong to remote copy port group 1
2 – Ports that belong to remote copy port group 2
Each IP address can be shared for iSCSI host attach and remote copy functionality. Therefore,
appropriate settings must be applied to each IP address.

IP partnership: Two systems that are partnered to perform remote copy over native IP links.

FC partnership: Two systems that are partnered to perform remote copy over native Fibre
Channel links.

Failover: Failure of a node within an I/O group causes the Volume access to go through the
surviving node. The IP addresses fail over to the surviving node in the I/O group. When the
configuration node of the system fails, management IPs also fail over to an alternative node.

Failback: When the failed node rejoins the system, all failed over IP addresses are failed back
from the surviving node to the rejoined node, and Volume access is restored through this node.

IP partnership or partnership over native IP links: These terms are used to describe the IP
partnership feature.
Remote copy port group ID is a numerical tag that is associated with an IP port of an IBM
Spectrum Virtualize system to indicate which physical IP link it is connected to. Multiple nodes
might be connected to the same physical long-distance link, and must therefore share the
same remote copy port group ID.
In scenarios with two physical links between the local and remote clusters, two remote copy
port group IDs must be used to designate which IP addresses are connected to which
physical link. This configuration must be done by the system administrator by using the GUI or
the cfgportip CLI command.
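For illustration, with two inter-site links, one port IP on each node might be assigned to
remote copy port group 1 and another to group 2 by using commands of the following form (the
node names, addresses, and port IDs are examples only):
cfgportip -node node1 -ip 10.41.1.11 -mask 255.255.255.0 -gw 10.41.1.1 -remotecopy 1 1
cfgportip -node node2 -ip 10.42.1.12 -mask 255.255.255.0 -gw 10.42.1.1 -remotecopy 2 1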
Remember: IP ports on both partners must have been configured with identical remote
copy port group IDs for the partnership to be established correctly.
The IBM Spectrum Virtualize system IP addresses that are connected to the same physical
link are designated with identical remote copy port groups. The system supports three remote
copy groups: 0, 1, and 2.
The systems’ IP addresses are, by default, in remote copy port group 0. Ports in port group 0
are not considered for creating remote copy data paths between two systems. For
partnerships to be established over IP links directly, IP ports must be configured in remote
copy group 1 if a single inter-site link exists, or in remote copy groups 1 and 2 if two inter-site
links exist.
You can assign one IPv4 address and one IPv6 address to each Ethernet port on the system
platforms. Each of these IP addresses can be shared between iSCSI host attach and the IP
partnership. The user must configure the required IP address (IPv4 or IPv6) on an Ethernet
port with a remote copy port group.
The administrator might want to use IPv6 addresses for remote copy operations and use IPv4
addresses on that same port for iSCSI host attach. This configuration also implies that for two
systems to establish an IP partnership, both systems must have IPv6 addresses that are
configured.
Administrators can choose to dedicate an Ethernet port for IP partnership only. In that case,
host access must be explicitly disabled for that IP address and any other IP address that is
configured on that Ethernet port.
Note: To establish an IP partnership, each SVC node must have only a single remote copy
port group that is configured, 1 or 2. The remaining IP addresses must be in remote copy
port group 0.
Note: For explanation purposes, this section illustrates a node with two ports available, 1 and
2. The number of available ports is generally higher on IBM SAN Volume Controller model
DH8 or SV1 nodes.
The following supported configurations for IP partnership that were in the first release are
described in this section:
Two 2-node systems in IP partnership over a single inter-site link, as shown in
Figure 11-98 (configuration 1).
Figure 11-98 Single link with only one remote copy port group configured in each system
This configuration has the following characteristics:
– Only one node in each system has a remote copy port group that is configured, and no
failover ports are configured.
– If the Node A1 in System A or the Node B2 in System B were to encounter a failure, the
IP partnership stops and enters the Not_Present state until the failed nodes recover.
– After the nodes recover, the IP ports fail back, the IP partnership recovers, and the
partnership state goes to the Fully_Configured state.
– If the inter-site system link fails, the IP partnerships change to the Not_Present state.
– This configuration is not recommended because it is not resilient to node failures.
Two 2-node systems in IP partnership over a single inter-site link (with failover ports
configured), as shown in Figure 11-99 (configuration 2).
Figure 11-99 One remote copy group on each system and nodes with failover ports configured
Figure 11-100 Multinode systems single inter-site link with only one remote copy port group
Port selection is determined by a path configuration algorithm. The other ports play the
role of standby ports.
If Node A1 fails in System A, the IP partnership selects one of the remaining ports that is
configured with remote copy port group 1 from any of the nodes from either of the two I/O
groups in System A. However, it might take some time (generally seconds) for discovery
and path configuration logic to reestablish paths post failover. This process can cause
partnerships to change to the Not_Present state.
This result causes remote copy relationships to stop. The administrator might need to
manually check the issues in the event log and start the relationships or remote copy
consistency groups if they do not autorecover. The details of the particular IP port that is
actively participating in the IP partnership process are provided in the lsportip view
(reported as used).
This configuration has the following characteristics:
– Each node has the remote copy port group that is configured in both I/O groups.
However, only one port in that remote copy port group remains active and participates
in IP partnership on each system.
– If the Node A1 in System A or the Node B2 in System B were to encounter some failure
in the system, IP partnerships discovery is triggered and it continues servicing the I/O
from the failover port.
– The discovery mechanism that is triggered because of failover might introduce a delay
wherein the partnerships momentarily change to the Not_Present state and then
recover.
– The bandwidth of the single link is used completely.
Figure 11-101 Multinode systems single inter-site link with only one remote copy port group
This process can lead to remote copy relationships stopping, and the administrator must
manually start them if the relationships do not auto-recover. The details of which particular
IP port is actively participating in the IP partnership process are provided in the lsportip
output (reported as used).
This configuration has the following characteristics:
– Each node has the remote copy port group that is configured in both the I/O groups
that are identified for participating in IP Replication. However, only one port in that
remote copy port group remains active on each system and participates in IP
Replication.
– If the Node A1 in System A or the Node B2 in System B fails in the system, the IP
partnerships trigger discovery and continue servicing the I/O from the failover ports.
– The discovery mechanism that is triggered because of failover might introduce a delay
wherein the partnerships momentarily change to the Not_Present state and then
recover.
– The bandwidth of the single link is used completely.
Two 2-node systems with two inter-site links, as shown in Figure 11-102 (configuration 5).
Figure 11-102 Dual links with two remote copy groups on each system configured
As shown in Figure 11-102, remote copy port groups 1 and 2 are configured on the nodes
in System A and System B because two inter-site links are available. In this configuration,
the failover ports are not configured on partner nodes in the I/O group. Instead, the ports
are maintained in different remote copy port groups on both of the nodes. They remain
active and participate in IP partnership by using both of the links.
However, if either of the nodes in the I/O group fail (that is, if Node A1 on System A fails),
the IP partnership continues only from the available IP port that is configured in remote
copy port group 2. Therefore, the effective bandwidth of the two links is reduced to 50%
because only the bandwidth of a single link is available until the failure is resolved.
Figure 11-103 Multinode systems with dual inter-site links between the two systems
This configuration is an extension of Configuration 5 to a multinode multi-I/O group
environment. This configuration has two I/O groups, and each node in the I/O group has a
single port that is configured in remote copy port groups 1 or 2.
Although two ports are configured in remote copy port groups 1 and 2 on each system,
only one IP port in each remote copy port group on each system actively participates in IP
partnership. The other ports that are configured in the same remote copy port group act as
standby ports in the event of failure. Which port in a configured remote copy port group
participates in IP partnership at any moment is determined by a path configuration
algorithm.
In this configuration, if Node A1 fails in System A, IP partnership traffic continues from
Node A2 (that is, remote copy port group 2) and at the same time the failover also causes
discovery in remote copy port group 1. Therefore, the IP partnership traffic continues from
Node A3 on which remote copy port group 1 is configured. The details of the particular IP
port that is actively participating in the IP partnership process are provided in the lsportip
output (reported as used).
This configuration has the following characteristics:
– Each node has a port that is configured in remote copy port group 1 or 2.
However, only one port per system in each remote copy port group remains active and
participates in IP partnership.
– Only a single port per system from each configured remote copy port group
participates simultaneously in IP partnership. Therefore, both of the links are used.
– During node failure or port failure of a node that is actively participating in IP
partnership, IP partnership continues from the alternative port because another port is
in the system in the same remote copy port group but in a different I/O Group.
– The pathing algorithm can start discovery of available ports in the affected remote copy
port group in the second I/O group and pathing is reestablished, which restores the
total bandwidth, so both of the links are available to support IP partnership.
Figure 11-104 Multinode systems (two I/O groups on each system) with dual inter-site links
between the two systems
This configuration has the following characteristics:
– There are two I/O Groups with nodes in those I/O groups that are configured in two
remote copy port groups because there are two inter-site links for participating in IP
partnership. However, only one port per system in a particular remote copy port group
remains active and participates in IP partnership.
– One port per system from each remote copy port group participates in IP partnership
simultaneously. Therefore, both of the links are used.
– If a node or a port on a node that is actively participating in IP partnership fails, the RC
data path is established from another port, because an alternative port is available on
another node in the system with the same remote copy port group.
– The path selection algorithm starts discovery of available ports in the affected remote
copy port group in the alternative I/O groups and paths are reestablished, which
restores the total bandwidth across both links.
– The remaining I/O groups (or all of the I/O groups) can be in remote copy partnerships
with other systems.
An example of an unsupported configuration for a single inter-site link is shown in
Figure 11-105 (configuration 8).
Figure 11-105 Two node systems with single inter-site link and remote copy port groups configured
Figure 11-106 Dual Links with two Remote Copy Port Groups with failover Port Groups configured
In this configuration, one port on each node in System A and System B is configured in
remote copy group 1 to establish IP partnership and support remote copy relationships. A
dedicated inter-site link is used for IP partnership traffic, and iSCSI host attach is disabled
on those ports.
The following configuration steps are used:
a. Configure the system IP addresses properly so that they can be reached over the
inter-site link.
b. Qualify if the partnerships must be created over IPv4 or IPv6, and then assign IP
addresses and open firewall ports 3260 and 3265.
c. Configure IP ports for remote copy on both the systems by using the following settings:
• Remote copy group: 1
• Host: No
• Assign IP address
d. Check that the maximum transmission unit (MTU) levels across the network meet the
requirements as set (default MTU is 1500 on Storwize V7000).
e. Establish IP partnerships from both of the systems.
f. After the partnerships are in the Fully_Configured state, you can create the remote
copy relationships.
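For illustration, after the port IPs are configured as described, the IP partnership itself
might be created on each system with a command of the following form (the partner address and
values are examples only):
mkippartnership -type ipv4 -clusterip 10.41.1.21 -linkbandwidthmbits 100 -backgroundcopyrate 50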
Figure 11-107 on page 598 is an example deployment for the configuration that is shown
in Figure 11-101 on page 592. Ports that are shared with host access are shown in
Figure 11-108 (configuration 11).
In this configuration, IP ports are to be shared by both iSCSI hosts and for IP partnership.
The following configuration steps are used:
a. Configure System IP addresses properly so that they can be reached over the inter-site
link.
b. Qualify if the partnerships must be created over IPv4 or IPv6, and then assign IP
addresses and open firewall ports 3260 and 3265.
Note: The Copy Services Consistency Groups menu relates to FlashCopy consistency
groups only, not Remote Copy ones.
The following panes are used to visualize and manage your remote copies:
The Remote Copy pane
To open the Remote Copy pane, click Copy Services → Remote Copy in the main menu
as shown in Figure 11-109.
11.9.1 Creating Fibre Channel partnership
Intra-cluster Metro Mirror: If you are creating an intra-cluster Metro Mirror, do not perform
this next step to create the Metro Mirror partnership. Instead, go to 11.9.2, “Creating
remote copy relationships” on page 604.
To create an FC partnership between IBM Spectrum Virtualize systems by using the GUI,
open the Partnerships pane and click Create Partnership to create a partnership, as shown
in Figure 11-113.
In the Create Partnership window, enter the following information, as shown in Figure 11-114
on page 604:
1. Select the partnership type (Fibre Channel or IP). If you choose IP partnership, you must
provide the IP address of the partner system and the partner system’s CHAP key.
2. If your partnership is based on Fibre Channel protocol, select an available partner system
from the menu. If no candidate is available, the This system does not have any
candidates error message is displayed.
3. Enter a link bandwidth in Mbps that is used by the background copy process between the
systems in the partnership.
4. Enter the background copy rate.
5. Click OK to confirm the partnership relationship.
To fully configure the partnership between both systems, perform the same steps on the other
system in the partnership. If not configured on the partner system, the partnership will be
displayed as Partially Configured.
When both sides of the system partnership are defined, the partnership is displayed as Fully
Configured as shown in Figure 11-115.
To create a remote copy relationship, complete these steps:
1. Open the Copy Services Remote Copy pane.
2. Right-click the Consistency Group that you want to create the new relationship for, and
select Create Relationship, as shown in Figure 11-116. If you want to create a
stand-alone relationship (not in a consistency group), right-click the Not in a Group group.
3. In the Create Relationship window, select one of the following types of relationships that
you want to create, as shown in Figure 11-117, and click Next.
– Metro Mirror
– Global Mirror (with or without Consistency Protection)
– Global Mirror with Change Volumes
5. In the next window, you can create relationships between master Volumes and auxiliary
Volumes, as shown in Figure 11-119. Click Add when both Volumes are selected. You can
add multiple relationships in that step by repeating the selection.
When all the relationships that you need are created, click Next.
Important: The master and auxiliary Volumes must be of equal size. Therefore, only
the targets with the appropriate size are shown in the list for a specific source Volume.
6. Specify whether the Volumes are synchronized, as shown in Figure 11-120. Then, click
Next.
Note: If the Volumes are not marked as synchronized, the initial copy process copies the
entire source Volume to the remote target Volume. If you suspect that the Volumes are
different, or if you have any doubt, do not mark them as synchronized; letting the system
perform the full initial copy ensures consistency on both sides of the relationship.
2. Enter a name for the Consistency Group and click Next, as shown in Figure 11-123.
3. In the next window, select the location of the auxiliary Volumes in the Consistency Group,
as shown in Figure 11-124, and click Next.
– On this system, which means that the Volumes are local.
– On another system, which means that you select the remote system from the menu.
Figure 11-124 Selecting the system to create the Consistency Group with
Figure 11-125 Selecting whether relationships should be added to the new Consistency Group
Select one of the following types of relationships that you want to create/add, as shown in
Figure 11-126, and click Next:
– Metro Mirror
– Global Mirror (with or without Consistency Protection)
– Global Mirror with Change Volumes
5. As shown in Figure 11-127, you can optionally select existing relationships to add to the
group. Click Next.
Figure 11-127 Adding Remote Copy relationships to the new Consistency Group
Note: Only relationships of the type that was previously selected are listed.
Figure 11-128 Creating new relationships for the new Consistency Group
Important: The master and auxiliary Volumes must be of equal size. Therefore, only
the targets with the appropriate size are shown in the list for a specific source Volume.
7. Specify whether the Volumes in the Consistency Group are synchronized, as shown in
Figure 11-129, and click Next.
Figure 11-129 Selecting if Volumes in the new Consistency Group are already synchronized or not
Note: If the Volumes are not marked as synchronized, the initial copy process copies the
entire source Volume to the remote target Volume. If you suspect that the Volumes are
different, or if you have any doubt, do not mark them as synchronized; letting the system
perform the full initial copy ensures consistency on both sides of the relationship.
8. In the last window, select whether you want to start the copy of the Consistency Group or
not, as shown in Figure 11-130. Click Finish.
3. Enter the new name that you want to assign to the relationships and click Rename, as
shown in Figure 11-132.
Remote copy relationship name: You can use the letters A - Z and a - z, the numbers
0 - 9, and the underscore (_) character. The remote copy name can be 1 - 15
characters. No blanks are allowed.
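For illustration, the same rename can be performed from the CLI with a command of the following
form (the names are examples only). The equivalent command for a consistency group is
chrcconsistgrp -name:
chrcrelationship -name MM_Rel_1_new MM_Rel_1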
11.9.5 Renaming a remote copy consistency group
To rename a remote copy consistency group, complete these steps:
1. Open the Copy Services Remote Copy pane.
2. Right-click the consistency group to be renamed and select Rename, as shown in
Figure 11-133.
3. Enter the new name that you want to assign to the Consistency Group and click Rename,
as shown in Figure 11-134.
Remote copy consistency group name: You can use the letters A - Z and a - z, the
numbers 0 - 9, and the underscore (_) character. The remote copy name can be 1 - 15
characters. No blanks are allowed.
3. Select the Consistency Group for this remote copy relationship by using the menu, as
shown in Figure 11-136. Click Add to Consistency Group to confirm your changes.
11.9.7 Removing remote copy relationships from Consistency Group
To remove one or multiple relationships from a remote copy consistency group, complete
these steps:
1. Open the Copy Services Remote Copy pane.
2. Right-click the relationships to be removed and select Remove from Consistency
Group, as shown in Figure 11-137.
11.9.9 Starting a remote copy Consistency Group
When a remote copy consistency group is created, the remote copy process can be started
for all of the relationships that are part of the consistency group.
To start a consistency group, open the Copy Services Remote Copy pane, right-click the
consistency group to be started, and select Start, as shown in Figure 11-140.
Important: When the copy direction is switched, it is crucial that no outstanding I/O exists
to the Volume that changes from primary to secondary because all of the I/O is inhibited to
that Volume when it becomes the secondary. Therefore, careful planning is required before
you switch the copy direction for a relationship.
Figure 11-142 Switching master-auxiliary direction of relationships changes the write access
4. When a remote copy relationship is switched, an icon is displayed in the Remote Copy
pane list, as shown in Figure 11-143.
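For illustration, the same direction switch can be performed from the CLI for a relationship or
for a consistency group with commands of the following form (the relationship and group names
are examples only):
switchrcrelationship -primary aux MM_Rel_1
switchrcconsistgrp -primary aux CG_W2K12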
Important: When the copy direction is switched, it is crucial that no outstanding I/O exists
to the Volume that changes from primary to secondary because all of the I/O is inhibited to
that Volume when it becomes the secondary. Therefore, careful planning is required before
you switch the copy direction for a relationship.
Figure 11-145 Switching direction of Consistency Groups changes the write access
To stop one or multiple relationships, complete these steps:
1. Open the Copy Services Remote Copy pane.
2. Right-click the relationships to be stopped and select Stop, as shown in Figure 11-146.
3. When a remote copy relationship is stopped, access to the auxiliary Volume can be
changed so it can be read and written by a host. A confirmation message is displayed as
shown in Figure 11-147.
Figure 11-147 Grant access in read and write to the auxiliary Volume
3. When a remote copy consistency group is stopped, access to the auxiliary Volumes can
be changed so that they can be read and written by a host. A confirmation message is
displayed as shown in Figure 11-149.
Figure 11-149 Grant access in read and write to the auxiliary Volumes
11.9.14 Deleting remote copy relationships
To delete remote copy relationships, complete these steps:
1. Open the Copy Services Remote Copy pane.
2. Right-click the relationships that you want to delete and select Delete, as shown in
Figure 11-150.
Important: Deleting a Consistency Group does not delete its remote copy mappings.
11.9.16 Remote Copy memory allocation
Copy Services features require that small amounts of volume cache be converted from cache
memory into bitmap memory to allow the functions to operate, at an I/O group level. If you do
not have enough bitmap space allocated when you try to use one of the functions, you will not
be able to complete the configuration. The total memory that can be dedicated to these
functions is not defined by the physical memory in the system. The memory is constrained by
the software functions that use the memory.
For every Remote Copy relationship that is created on an IBM Spectrum Virtualize system, a
bitmap table is created to track the copied grains. By default, the system allocates 20 MiB of
memory for a minimum of 2 TiB of remote copied source volume capacity. Every 1 MiB of
memory provides the following volume capacity for the specified I/O group:
For a 256 KiB grain size, 2 TiB of total Metro Mirror, Global Mirror, or active-active volume
capacity.
Review Table 11-14 to calculate the memory requirements and confirm that your system is
able to accommodate the total installation size.
When you configure change volumes for use with Global Mirror, two internal FlashCopy
mappings are created for each change volume.
For Metro Mirror, Global Mirror, and HyperSwap active-active relationships, two bitmaps exist.
For Metro Mirror or Global Mirror relationships, one is used for the master clustered system
and one is used for the auxiliary system because the direction of the relationship can be
reversed. For active-active relationships, which are configured automatically when
HyperSwap volumes are created, one bitmap is used for the volume copy on each site
because the direction of these relationships can be reversed.
Metro Mirror and Global Mirror relationships do not automatically increase the available
bitmap space. You might need to use the chiogrp command to manually increase the space
in one or both of the master and auxiliary systems.
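For illustration, the bitmap space that is reserved for remote copy in an I/O group might be
increased with a command of the following form (the 40 MiB value and the I/O group name are
examples only):
chiogrp -feature remote -size 40 io_grp0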
In practice, the most often overlooked cause is latency. Global Mirror has a round-trip-time
tolerance limit of 80 or 250 milliseconds, depending on the firmware version and the hardware
model. A message that is sent from the source IBM Spectrum Virtualize system to the target
system and the accompanying acknowledgment must have a total time of 80 or 250
milliseconds round trip. In other words, it must have up to 40 or 125 milliseconds latency each
way.
The primary component of your round-trip time is the physical distance between sites. For
every 1000 kilometers (621.4 miles), you observe a 5-millisecond delay each way. This delay
does not include the time that is added by equipment in the path. Every device adds a varying
amount of time, depending on the device, but a good rule is 25 microseconds for pure
hardware devices.
Company A has a production site that is 1900 kilometers (1180.6 miles) away from its
recovery site. The network service provider uses a total of five devices to connect the two
sites. In addition to those devices, Company A employs a SAN FC router at each site to
provide FCIP to encapsulate the FC traffic between sites.
Now, there are seven devices and 1900 kilometers (1180.6 miles) of distance delay. The
devices add approximately 200 microseconds of delay each way. The distance adds
9.5 milliseconds each way, for a total of 19 milliseconds. Combined with the device latency,
the minimum physical latency is about 19.4 milliseconds, which is under the 80-millisecond
limit of Global Mirror. However, remember that this number is the best-case figure.
The link quality and bandwidth also play a large role. Your network provider likely guarantees
a maximum latency on your network link. Therefore, be sure to stay as far beneath the Global
Mirror round-trip-time (RTT) limit as possible. With a lower quality or lower bandwidth network
link, the expected physical latency can easily double or triple, and you are then at risk of
exceeding the limit if a burst of I/O exceeds the available bandwidth.
When you get a 1920 event, always check the latency first. The FCIP routing layer can
introduce latency if it is not properly configured. If your network provider reports a much lower
latency, you might have a problem at your FCIP routing layer. Most FCIP routing devices have
built-in tools to enable you to check the RTT. When you are checking latency, remember that
TCP/IP routing devices (including FCIP routers) report RTT by using standard 64-byte ping
packets.
Before proceeding, look at the second largest component of your RTT, which is serialization
delay. Serialization delay is the amount of time that is required to move a packet of data of a
specific size across a network link of a certain bandwidth. The required time to move a
specific amount of data decreases as the data transmission rate increases.
Figure 11-155 shows the orders of magnitude of difference between the link bandwidths. It is
easy to see how 1920 errors can arise when your bandwidth is insufficient. Because ping
packets are far smaller than FC frames, never use a TCP/IP ping to measure RTT for FCIP traffic.
Figure 11-155 Effect of packet size (in bytes) versus the link size
In Figure 11-155, the amount of time in microseconds that is required to transmit a packet
across network links of varying bandwidth capacity is compared. The following packet sizes
are used:
64 bytes: The size of the common ping packet
1500 bytes: The size of the standard TCP/IP packet
2148 bytes: The size of an FC frame
Finally, your path maximum transmission unit (MTU) affects the delay that is incurred in getting
a packet from one location to another. An MTU that is too small can cause fragmentation, and
an MTU that is too large can cause excessive retransmissions when a packet is lost.
Note: Unlike 1720 errors, 1920 errors are deliberately generated by the system because it
determined that a relationship might impact the host’s response time. The system has no
way of knowing whether or when the relationship can safely be restarted. Therefore, the
relationship is not restarted automatically; it must be restarted manually.
11.10.2 1720 error
The 1720 error (event ID 050020) is the other problem that remote copy might encounter. The
amount of bandwidth that is needed for system-to-system communication varies based on
the number of nodes, but it must never drop to zero. When a partner on either side stops
communicating, a 1720 error is displayed in your error log. According to the product
documentation, there are no likely field-replaceable unit breakages or other causes.
The source of this error is most often a fabric problem or a problem in the network path
between your partners. When you receive this error, if your fabric has more than 64 HBA ports
zoned, check your fabric configuration to verify that no more than one host bus adapter (HBA)
port is zoned to each node per I/O group. The suggested zoning configuration for fabrics is
one port for each node per I/O group per fabric that is associated with the host.
For fabrics with 64 or more host ports, this suggestion becomes a rule. Each host needs at
least two FC ports from separate HBA cards, each in a separate fabric. On each fabric, each
host FC port is zoned to two SVC node ports, where each node port comes from a different
SVC node. This configuration provides four paths to each volume that is discovered on the
host. More than four paths per volume are supported but not recommended.
Improper zoning can lead to SAN congestion, which can inhibit remote link communication
intermittently. Checking the zero buffer credit timer with IBM Spectrum Control and comparing
against your sample interval reveals potential SAN congestion. If a zero buffer credit timer is
more than 2% of the total time of the sample interval, it might cause problems.
Next, always ask your network provider to check the status of the link. If the link is acceptable,
watch for repeats of this error. It is possible in a normal and functional network setup to have
occasional 1720 errors, but multiple occurrences could indicate a larger problem.
If you receive multiple 1720 errors, recheck your network connection and then check the
system partnership information to verify its status and settings. Then, perform diagnostics for
every piece of equipment in the path between the two systems. It often helps to have a
diagram that shows the path of your replication from both the logical and physical
configuration viewpoints.
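If you prefer the CLI for these checks, the partnership state and the Fibre Channel logins between the systems can be reviewed with commands similar to the following minimal sketch (run on either system):
lspartnership
lsfabric
lspartnership shows whether the partnership is in the expected state (for example, fully_configured), and lsfabric lists the logins between local and remote node ports, which helps to confirm that the zoning still presents the expected paths.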
Note: With Consistency Protection enabled on the MM/GM relationships, the system tries
to resume the replication when possible. Therefore, it is not necessary to manually restart
the failed relationship after a 1720 error is triggered.
If your investigations fail to resolve your remote copy problems, contact your IBM Support
representative for a more complete analysis.
Multiple drivers exist for an organization to implement data-at-rest encryption. These can be
internal, such as protection of confidential company data and ease of storage sanitization, or
external, such as compliance with legal requirements or contractual obligations.
Therefore, before configuring encryption on the storage, the organization should define its
needs and, if it is decided that data-at-rest encryption is required, include it in the security
policy. Without defining the purpose of the particular implementation of data-at-rest
encryption, it would be difficult or impossible to choose the best approach to implementing
encryption and verifying whether the implementation meets the set goals.
Below is a list of items that are worth considering during the design of a solution that includes
data-at-rest encryption:
Legal requirements
Contractual obligations
Organization's security policy
Attack vectors
Expected resources of an attacker
Encryption key management
Physical security
Another document that should be consulted when planning data-at-rest encryption is the
organization’s security policy.
The final outcome of a data-at-rest encryption planning session should be replies to three
questions:
1. What are the goals that the organization wants to realize by using data-at-rest encryption?
2. How will data-at-rest encryption be implemented?
3. How can it be demonstrated that the proposed solution realizes the set goals?
Encryption of data at-rest complies with the Federal Information Processing Standard 140
(FIPS-140) standard, but is not certified.
Ciphertext stealing XTS-AES-256 is used for data encryption.
AES 256 is used for master access keys.
The algorithm is public. The only secrets are the keys.
A symmetric key algorithm is used. The same key is used to encrypt and decrypt data.
The encryption of system data and metadata is not required, so they are not encrypted.
Encryption is enabled at a system level and all of the following prerequisites must be met
before you can use encryption:
You must purchase an encryption license before you activate the function.
If you did not purchase a license, contact an IBM marketing representative or IBM
Business Partner to purchase an encryption license.
At least three USB flash drives are required if you plan not to use a key management
server. They are available as a feature code from IBM.
You must activate the license that you purchased.
Encryption must be enabled.
Note: Only data at-rest is encrypted. Host to storage communication and data sent over
links used for Remote Mirroring are not encrypted.
Figure 12-1 shows an encryption example. Encrypted disks and encrypted data paths are
marked in blue. Unencrypted disks and data paths are marked in red. The server sends
unencrypted data to a SAN Volume Controller 2145-DH8 system, which stores
hardware-encrypted data on internal disks. The data is mirrored to a remote Storwize V7000
Gen1 system using Remote Copy. The data flowing through the Remote Copy link is not
encrypted. Because the Storwize V7000 Gen1 is unable to perform any encryption activities,
data on the Storwize V7000 Gen1 is not encrypted.
Figure 12-1 Encryption example: a 2145-DH8 system with SAS hardware encryption replicating over an unencrypted Remote Copy link to a Storwize V7000 Gen1 (2076-324) system
Figure 12-2 Encryption example: a 2145-DH8 system replicating over a Remote Copy link to a SAN Volume Controller 2145-SV1 system
Figure 12-3 shows an example configuration that uses both software and hardware
encryption. Software encryption is used to encrypt an external virtualized storage system.
Hardware encryption is used for internal, SAS-attached disk drives.
Figure 12-3 Example of a system that uses both software and hardware encryption: a 2145-SV1 system applying software encryption to FC-attached external storage (2076-324) and hardware encryption to SAS-attached 2145-24F expansion enclosures
The placement of hardware encryption and software encryption in the code stack is shown in
Figure 12-4. The functions that are implemented in software are shown in blue. The external
storage system is shown in yellow. The hardware encryption on the SAS chip is marked in
pink. Compression is performed before encryption, so the benefits of compression are
retained for encrypted data.
Figure 12-4 Encryption placement in the SAN Volume Controller software stack
Each volume copy can use a different encryption method (hardware or software). It is also
possible to have volume copies with different encryption statuses (encrypted versus
unencrypted). The encryption method depends only on the pool that is used for the specific
copy. You can migrate data between different encryption methods by using volume migration
or volume mirroring.
Note: Software encryption is available in IBM Spectrum Virtualize code V7.6 and later.
Both methods of encryption use the same encryption algorithm, the same key management
infrastructure, and the same license.
Note: The design for encryption is based on the concept that a system should either be
encrypted or not encrypted. Encryption implementation is intended to encourage solutions
that contain only encrypted volumes or only unencrypted volumes. For example, after
encryption is enabled on the system, all new objects (for example, pools) are by default
created as encrypted.
• During cluster internal communication, data encryption keys are encrypted with the
master access key.
• Data encryption keys cannot be viewed.
• Data encryption keys cannot be changed.
• When an encrypted object is deleted, its data encryption key is discarded (secure
erase).
Important: If all master access key copies are lost and the system must cold restart, all
encrypted data is gone. No method exists, even for IBM, to decrypt the data without the
keys. If encryption is enabled and the system cannot access the master access key, all
SAS hardware is offline, including unencrypted arrays.
Attempts to add a node can fail if the correct license for the node that is being added does not
exist. You can add licenses to the system for nodes that are not part of the system.
No trial licenses for encryption exist, because access to the data would be lost when the trial
expired. Therefore, you must purchase an encryption license before you activate encryption.
Licenses are generated by IBM data storage feature activation (DSFA) based on the serial
number (S/N) and the machine type and model (MTM) of the nodes.
You can activate an encryption license during the initial system setup (on the Encryption
window of the initial setup wizard) or later on, in the running environment.
Activation of the license can be performed in one of two ways: Automatically or manually. Both
methods are available during the initial system setup and on the running system.
When you purchase a license, you should receive a function authorization document with an
authorization code printed on it. This code allows you to proceed using the automatic
activation process.
If the automatic activation process fails or if you prefer using the manual activation process,
use this page to retrieve your license keys:
https://www.ibm.com/storage/dsfa/storwize/selectMachine.wss
See 12.3.5, “Activate the license manually” on page 646 for instructions about how to retrieve
the machine signature of a node.
2. The Encryption window displays information about your storage system, as shown in
Figure 12-6.
3. Right-clicking the node opens a menu with two license activation options (Activate
License Automatically and Activate License Manually), as shown in Figure 12-7. Use
either option to activate encryption. See 12.3.4, “Activate the license automatically” on
page 644 for instructions about how to complete the automatic activation process. See
12.3.5, “Activate the license manually” on page 646 for instructions on how to complete a
manual activation process.
Note: Every enclosure needs an active encryption license before you can enable
encryption on the system.
Figure 12-8 Successful encryption license activation during initial system setup
12.3.3 Start activation process on a running system
To activate encryption on a running system, complete these steps:
1. Click Settings → System → Licensed Functions.
2. Click Encryption Licenses, as shown in Figure 12-9.
Figure 12-9 Expanding Encryption Licenses section on the Licensed Functions window
3. The Encryption Licenses window displays information about your nodes. Right-click the
node on which you want to install an encryption license. This action opens a menu with
two license activation options (Activate License Automatically and Activate License
Manually), as shown in Figure 12-10. Use either option to activate encryption. See 12.3.4,
“Activate the license automatically” on page 644 for instructions on how to complete an
automatic activation process. See 12.3.5, “Activate the license manually” on page 646 for
instructions on how to complete a manual activation process.
Figure 12-10 Select the node on which you want to enable the encryption
Important: To perform this operation, the personal computer used to connect to the GUI
and activate the license must be able to connect to hosts on the internet.
To activate the encryption license for a node automatically, complete these steps:
1. Select Activate License Automatically to open the Activate License Automatically
window, as shown in Figure 12-12.
2. Enter the authorization code that is specific to the node that you selected, as shown in
Figure 12-13. You can now click Activate.
The system connects to IBM to verify the authorization code and retrieve the license key.
Figure 12-14 shows a window that is displayed during this connection. If everything works
correctly, the procedure takes less than a minute.
After the license key has been retrieved, it is automatically applied as shown in
Figure 12-15.
Check whether the personal computer that is used to connect to the SAN Volume Controller
GUI and activate the license can access the internet. If you are unable to complete the
automatic activation procedure, try to use the manual activation procedure that is described in
12.3.5, “Activate the license manually” on page 646.
2. If you have not done so already, obtain the encryption license for the node. The
information that is required to obtain the encryption license is displayed in the Manual
Activation window. Use this data to follow the instructions in 12.3.1, “Obtaining an
encryption license” on page 639.
3. You can enter the license key by typing it, by using copy and paste, or by clicking the
folder icon and uploading the license key file that was downloaded from DSFA to the
storage system. In Figure 12-18, the sample key is already entered. Click Activate.
After the task completes successfully, the GUI shows that encryption is licensed for the
specified node, as shown in Figure 12-19.
Key server support is available in IBM Spectrum Virtualize code V7.8 and later. Additionally,
IBM Spectrum Virtualize code V8.1 introduces the ability to define up to four encryption key
servers, which is the preferred configuration because it increases key provider availability.
Support for simultaneous use of both USB flash drives and a key server is available in IBM
Spectrum Virtualize code V8.1 and later. Organizations that use encryption key management
servers might consider parallel use of USB flash drives as a backup solution. During normal
operation, such drives can be disconnected and stored in a secure location. However, during
a catastrophic loss of encryption servers, the USB drives can still be used to unlock the
encrypted storage.
The following list of key server and USB flash drive characteristics might help you to choose
the type of encryption key provider that you want to use.
USB flash drives have the following characteristics:
Physical access to the system might be required to process a rekey operation
No moving parts with almost no read or write operations to the USB flash drive
Inexpensive to maintain and use
Convenient and easy to have multiple identical USB flash drives available as backups
The Enable Encryption wizard starts by asking which encryption key provider to use for
storing the encryption keys, as shown in Figure 12-23. You can enable either or both
providers.
The next section will present a scenario in which both encryption key providers are enabled at
the same time. See 12.4.2, “Enabling encryption using USB flash drives” on page 651 for
instructions on how to enable encryption using only USB flash drives. See 12.4.3, “Enabling
encryption using key servers” on page 655 for instructions on how to enable encryption using
key servers as the sole encryption key provider.
Note: The system needs at least three USB flash drives to be present before you can
enable encryption using this encryption key provider. IBM USB flash drives are preferred,
although other flash drives might work. You can use any USB ports in any node of the
cluster.
Order IBM USB flash drives in e-config as Feature Code ACEB: Encryption USB Flash
Drives (Four Pack).
Using USB flash drives as the encryption key provider requires a minimum of three USB flash
drives to store the generated encryption keys. Because the system will attempt to write the
encryption keys to any USB key inserted into a node port, it is critical to maintain physical
security of the system during this procedure.
While the system enables encryption, you are prompted to insert USB flash drives into the
system. The system generates and copies the encryption keys to all available USB flash
drives.
Ensure that each copy of the encryption key is valid before you write any user data to the
system. The system validates any key material on a USB flash drive when it is inserted into
the canister. If the key material is not valid, the system logs an error. If the USB flash drive is
unusable or fails, the system does not display it as output. Figure 12-79 on page 686 shows
an example where the system detected and validated three USB flash drives.
If your system is in a secure location with controlled access, one USB flash drive for each
canister can remain inserted in the system. If there is a risk of unauthorized access, then all
USB flash drives with the master access keys must be removed from the system and stored
in a secure place.
Securely store all copies of the encryption key. For example, any USB flash drives holding an
encryption key copy that are not left plugged into the system can be locked in a safe. Similar
precautions must be taken to protect any other copies of the encryption key that are stored on
other media.
Notes: Generally, create at least one additional copy on another USB flash drive for
storage in a secure location. You can also copy the encryption key from the USB drive and
store the data on other media, which can provide additional resilience and mitigate risk that
the USB drives used to store the encryption key come from a faulty batch.
Every encryption key copy must be stored securely to maintain confidentiality of the
encrypted data.
During power-on, insert USB flash drives into the USB ports on two supported canisters to
safeguard against failure of a node, node’s USB port, or USB flash drive during the power-on
procedure.
To enable encryption using USB flash drives as the only encryption key provider, complete
these steps:
1. In the Enable Encryption wizard Welcome tab, select USB flash drives and click Next, as
shown in Figure 12-24.
Figure 12-24 Selecting USB flash drives in the Enable Encryption wizard
2. If there are fewer than three USB flash drives inserted into the system, you are prompted
to insert more drives, as shown in Figure 12-25. The system reports how many more
drives need to be inserted.
Note: The Next option remains disabled and the status at the bottom is kept at 0 until at
least three USB flash drives are detected.
3. Insert the USB flash drives into the USB ports as requested.
Figure 12-26 Writing the master access key to USB flash drives
You can keep adding USB flash drives or replacing the ones already plugged in to create
new copies. When done, click Next.
5. The number of keys that were created is shown in the Summary tab, as shown in
Figure 12-27. Click Finish to finalize the encryption enablement.
6. You receive a message confirming that the encryption is now enabled on the system, as
shown in Figure 12-28.
7. You can confirm that encryption is enabled and verify which key providers are in use by
going to Settings → Security → Encryption, as shown in Figure 12-29.
Figure 12-29 Encryption view showing using USB flash drives as the enabled provider
IBM Spectrum Virtualize supports use of an IBM Security Key Lifecycle Manager (SKLM) key
server as an encryption key provider. SKLM supports Key Management Interoperability
Protocol (KMIP), which is a standard for management of cryptographic keys.
Note: Make sure that the key management server function is fully independent from any
storage that is provided by systems that use that key server for encryption key management.
Failure to observe this requirement might create an encryption deadlock. An encryption
deadlock is a situation in which none of the key servers in the environment can become
operational because some critical part of the data in each server is stored on a storage
system that depends on one of the key servers to unlock access to the data.
IBM Spectrum Virtualize code V8.1 and later supports up to four key server objects defined in
parallel.
For more information about completing these tasks, see SKLM documentation at IBM
Knowledge Center at:
https://www.ibm.com/support/knowledgecenter/SSWPVP
Access to the key server storing the correct master access key is required to enable
encryption for the cluster after a system restart such as a system-wide reboot or power loss.
Access to the key server is not required during a warm reboot, such as a node exiting service
mode or a single node reboot. The data center power-on procedure must ensure key server
availability before the storage system using encryption is booted.
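After the key server objects are defined, their definitions and reachability can also be checked from the CLI. This is a sketch only, and it assumes that the key server commands available at your code level match the following form (the ID 0 is an example):
lskeyserver
testkeyserver 0
lskeyserver lists the configured key server objects and their state, and testkeyserver attempts to contact the specified server, so connectivity problems can be found before a restart makes the key server necessary.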
Figure 12-30 Selecting Key server as the only provider in the Enable Encryption wizard
3. The wizard moves to the Key Servers tab, as shown in Figure 12-31. Enter the name and
IP address of the key servers. Note that the first key server specified must be the primary
SKLM key server.
Note: The supported versions of IBM Security Key Lifecycle Manager (up to V2.7,
which was the latest code version available at the time of writing) differentiate between
the primary and secondary key server roles. The primary SKLM server, as defined on the
Key Servers window of the Enable Encryption wizard, must be the server that is defined
as the primary by the SKLM administrators.
The key server name serves just as a label. Only the provided IP address is used to
actually contact the server. If the key server’s TCP port number differs from the default
value for the KMIP protocol (that is, 5696), then enter the port number. An example of a
complete primary SKLM configuration is shown in Figure 12-31.
5. The next window in the wizard is a reminder that the SPECTRUM_VIRT device group,
which is dedicated to IBM Spectrum Virtualize systems, must exist on the SKLM key servers.
Make sure that this device group exists and click Next to continue, as shown in Figure 12-33.
6. Enable secure communication between the IBM Spectrum Virtualize system and the
SKLM key servers by either uploading the public certificate of the certificate authority (CA)
used to sign all the SKLM key server certificates, or by uploading the public SSL certificate
of each key server directly. Figure 12-34 shows the case when an organization’s CA
certificate is used. Click Next to proceed to the next step.
Figure 12-34 Uploading the key server or certification authority SSL certificate
7. Configure the SKLM key server to trust the SSL certificate of the IBM Spectrum Virtualize
system. You can download the IBM Spectrum Virtualize system public SSL certificate by
clicking Export Public Key, as shown in Figure 12-35. Install this certificate in the SKLM
key server in the SPECTRUM_VIRT device group.
9. The key server configuration is shown in the Summary tab, as shown in Figure 12-37.
Click Finish to create the key server object and finalize the encryption enablement.
10.If there are no errors while creating the key server object, you receive a message that
confirms that the encryption is now enabled on the system, as shown in Figure 12-38.
Click Close.
Figure 12-39 Encryption enabled with only key servers as encryption key providers
IBM Spectrum Virtualize supports enabling encryption using an IBM SKLM key server. SKLM
supports KMIP, which is a standard for encryption of stored data and management of
cryptographic keys.
Note: Make sure that the key management server functionality is fully independent from
storage provided by systems using a key server for encryption key management. Failure to
observe this requirement might create an encryption deadlock. An encryption deadlock is a
situation in which none of the key servers in the environment can become operational because
some critical part of the data in each server is stored on an encrypted storage system that
depends on one of the key servers to unlock access to the data.
IBM Spectrum Virtualize code V8.1 and later supports up to four key server objects defined in
parallel.
For more information about completing these tasks, see SKLM at IBM Knowledge Center:
https://www.ibm.com/support/knowledgecenter/SSWPVP
Access to the key server storing the correct master access key is required to enable
encryption for the cluster after a system restart such as a system-wide reboot or power loss.
Access to the key server is not required during a warm reboot, such as a node exiting service
mode or a single node reboot. The data center power-on procedure must ensure key server
availability before storage system using encryption is booted.
3. The wizard moves to the Key Servers tab, as shown in Figure 12-41. Enter the name and
IP address of the key servers. Note that the first key server specified must be the primary
SKLM key server.
Note: The supported versions of IBM Security Key Lifecycle Manager (up to V2.7,
which was the latest code version when this book was written) differentiate between the
primary and secondary key server roles. The primary SKLM server, as defined on the Key
Servers window of the Enable Encryption wizard, must be the server that is defined as the
primary by the SKLM administrators.
5. The next page in the wizard is a reminder that the SPECTRUM_VIRT device group, which is
dedicated to IBM Spectrum Virtualize systems, must exist on the SKLM key servers. Make sure
that this device group exists and click Next to continue, as shown in Figure 12-43.
6. The next step is to enable secure communication between the IBM Spectrum Virtualize
system and the SKLM key servers. This process can be done by either uploading the
public certificate of the certificate authority used to sign all the SKLM key server
certificates, or by uploading the public SSL certificate of each key server directly.
Figure 12-44 shows the case when an organization’s CA certificate is used. When either
file has been selected, you can click Next.
Figure 12-44 Uploading the key server or certificate authority SSL certificate
7. Subsequently, configure the SKLM key server to trust the SSL certificate of the IBM
Spectrum Virtualize system. You can download the IBM Spectrum Virtualize system public
SSL certificate by clicking Export Public Key, as shown in Figure 12-45. You should
install this certificate in the SKLM key servers in the SPECTRUM_VIRT device group.
9. The next step in the wizard is to store copies of the master encryption key on USB flash
drives. If fewer than three drives are detected, the system requests that more USB flash
drives be inserted, as shown in Figure 12-47. You cannot proceed until the required
minimum number of USB flash drives is detected by the system.
Figure 12-47 At least three USB flash drives are required to configure USB flash drive key provider
10.After at least three USB flash drives are detected, the system writes the master access key
to each of the drives. Note that the system attempts to write the encryption key to any
flash drive it detects. Therefore, it is crucial to maintain physical security of the system
during this procedure. After the keys are successfully copied to at least three USB flash
drives, the system displays a window as shown in Figure 12-48.
Figure 12-48 Master Access Key successfully copied to USB flash drives
11.The next window presents a summary of the configuration that will be implemented on the
system, as shown in Figure 12-49. Click Finish to create the key server object and finalize
the encryption enablement.
Figure 12-50 Encryption enabled message using both encryption key providers
13.You can confirm that encryption is enabled and verify which key providers are in use by
going to Settings → Security → Encryption, as shown in Figure 12-51. Note the four green
check marks confirming that the master access key is available on all four SKLM servers.
Note: If you set up encryption of your storage system when it was running an IBM Spectrum
Virtualize code version earlier than V7.8.0, then when you upgrade to code version V8.1
you must rekey the master encryption key before you can enable a second encryption
provider.
12.5.1 Adding SKLM as a second provider
If the storage system is configured with the USB flash drive provider, it is possible to configure
SKLM servers as a second provider. To enable SKLM servers as a second provider, complete
these steps:
1. Go to Settings → Security → Encryption, expand the Key Servers section and click
Enable, as shown in Figure 12-52. To enable key server as a second provider, the system
must detect at least one USB flash drive with a current copy of the master access key.
2. Complete the steps required to configure the key server provider, as described in 12.4.3,
“Enabling encryption using key servers” on page 655. One difference in the process
described in that section is that the wizard gives you an option to change from the USB
flash drive provider to key server provider. Select No to enable both encryption key
providers, as shown in Figure 12-53.
Figure 12-53 Do not disable USB flash drive encryption key provider
4. After you click Finish, the system will configure SKLM servers as a second encryption key
provider. Successful completion of the task is confirmed by a message, as shown in
Figure 12-55. Click Close.
5. You can confirm that encryption is enabled and verify which key providers are in use by
going to Settings → Security → Encryption, as shown in Figure 12-56. Note the four
green check marks confirming that the master access key is available on all four SKLM
servers.
Figure 12-57 Enable USB flash drives as a second encryption key provider
3. You can confirm that encryption is enabled and verify which key providers are in use by
going to Settings → Security → Encryption, as shown in Figure 12-59. Note the four
green check marks indicating that the master access key is available on all four SKLM
servers.
12.6.1 Changing from USB flash drive provider to encryption key server
The system is designed to facilitate changing from the USB flash drive encryption key provider
to the encryption key server provider. Follow the steps described in 12.5.1, “Adding SKLM
as a second provider” on page 669, but when completing step 2 on page 669, select Yes
instead of No (see Figure 12-60). This action deactivates the USB flash drive provider, and
the procedure completes with a single active encryption key provider: the SKLM servers.
Figure 12-60 Disable USB flash drive provider while changing to SKLM provider
12.6.2 Changing from encryption key server to USB flash drive provider
Changing in the other direction, that is, from the encryption key server provider to the USB
flash drive provider, is not possible using the GUI alone.
To perform the change, add USB flash drives as a second provider. You can do this by
completing the steps described in 12.5.2, “Adding USB flash drives as a second provider” on
page 671. Then, issue the following command in the CLI:
chencryption -usb validate
This command verifies that the USB flash drives contain the correct master access key. Then,
disable the encryption key server provider by issuing the following command:
chencryption -keyserver disable
This command disables the encryption key server provider, effectively migrating your system
from encryption key server to USB flash drive provider.
If you have lost access to the encryption key server provider, then issue this command:
chencryption -keyserver disable
If you have lost access to the USB flash drives provider, then issue this command:
chencryption -usb disable
If you want to restore the configuration with both encryption key providers, then follow the
instructions in 12.5, “Configuring additional providers” on page 668.
Note: If you lose access to all encryption key providers defined in the system, then there is
no method to recover access to the data protected by the master access key.
Notes: Encryption support for Distributed RAID is available in IBM Spectrum Virtualize
code V7.7 and later.
You must decide whether to encrypt an object when it is created, and you cannot change this
setting later. To change the encryption state of stored data, you have to migrate it from an
encrypted object (for example, a pool) to an unencrypted one, or vice versa.
Volume migration is the only way to encrypt any volumes that were created before encryption
was enabled on the system.
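One way to perform such a migration is volume mirroring, which was mentioned earlier as a method of moving data between encryption methods. The following CLI lines are a minimal sketch only; the pool name EncryptedPool and the volume name vdisk0 are hypothetical:
addvdiskcopy -mdiskgrp EncryptedPool vdisk0
lsvdisksyncprogress vdisk0
rmvdiskcopy -copy 0 vdisk0
addvdiskcopy adds a second copy of the volume in the encrypted pool. After lsvdisksyncprogress reports that the new copy is synchronized, the original unencrypted copy (copy 0 in this example) can be removed with rmvdiskcopy.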
12.8.1 Encrypted pools
See Chapter 6, “Storage pools” on page 197 for generic instructions on how to open the
Create Pool window. After encryption is enabled, any new pool will by default be created as
encrypted, as shown in Figure 12-61.
You can click Create to create an encrypted pool. All storage that is added to this pool will be
encrypted.
You can customize the Pools view in the management GUI to show pool encryption status.
Click Pools → Pools, and then click Actions → Customize Columns → Encryption, as
shown in Figure 12-62.
If you create an unencrypted pool, but you add only encrypted arrays or self-encrypting
MDisks to the pool, then the pool will be reported as encrypted because all extents in the pool
are encrypted. The pool reverts to the unencrypted state if you add an unencrypted array or
MDisk.
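The pool encryption status is also visible from the CLI. A minimal sketch, with a hypothetical pool name:
lsmdiskgrp Pool0
In the detailed view, the encrypt field indicates whether the pool reports as encrypted.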
However, if you want to create encrypted child pools from an unencrypted storage pool
containing a mix of internal arrays and external MDisks, the following restrictions apply:
The parent pool must not contain any unencrypted internal arrays.
All SAN Volume Controller nodes or Storwize canisters in the system must support
software encryption and have the encryption license activated.
Note: An encrypted child pool created from an unencrypted parent storage pool reports as
unencrypted if the parent pool contains any unencrypted internal arrays. Remove these
arrays to ensure that the child pool is fully encrypted.
If you modify the Pools view as described earlier in this section, you can see the encryption
status of child pools, as shown in Figure 12-65. The example shows an encrypted child pool
with a non-encrypted parent pool.
Note: To create an unencrypted array when encryption is enabled, use the command-line
interface (CLI) to run the mkarray -encrypt no command. However, you cannot add
unencrypted arrays to an encrypted pool.
You can also check the encryption state of an array by looking at its drives in Pools →
Internal Storage view. The internal drives associated with an encrypted array are assigned
an encrypted property that can be seen by clicking an icon at the right edge of the table
header row and selecting the Encrypted option from the menu, as shown in Figure 12-67.
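The encryption state of an individual array can likewise be queried from the CLI. A minimal sketch, with a hypothetical array MDisk ID:
lsarray 0
The detailed view includes an encrypt field that shows whether the array was created as encrypted.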
The user interface gives no method to see which extents contain encrypted data and which
do not. However, if a volume is created in a correctly configured encrypted pool, then all data
written to this volume will be encrypted.
The extents can contain stale unencrypted data if the MDisk was previously used to store
unencrypted data. This is because file deletion only marks disk space as free; the data is not
actually removed from the storage. So, if an MDisk that is not self-encrypting was part of an
unencrypted pool and is then moved to an encrypted pool, it still contains stale data from its
previous use.
However, all data written to any MDisk that is a part of correctly configured encrypted pool is
going to be encrypted.
You can customize the MDisk by Pools view to show the object encryption state by clicking
Pools → MDisk by Pools, selecting the menu bar, right-clicking it, and clicking the
Encryption Key icon. Figure 12-68 shows a case where self-encrypting MDisk is in an
unencrypted pool.
Self-encrypting MDisks
When adding external storage to a pool, be exceptionally diligent when declaring an MDisk
as self-encrypting. Correctly declaring an MDisk as self-encrypting avoids wasting
resources, such as CPU time. However, an incorrect declaration might lead to unencrypted
data at rest.
IBM Spectrum Virtualize products can detect that an MDisk is self-encrypting by using the
SCSI Inquiry page C2. MDisks provided by other IBM Spectrum Virtualize products will report
this page correctly. For these MDisks, the Externally encrypted box shown in Figure 12-69
will not be selected. However, when added, they are still considered as self-encrypting.
Note: You can override external encryption setting of an MDisk detected as self-encrypting
and configure it as unencrypted using the CLI command chmdisk -encrypt no. However,
you should only do so if you plan to decrypt the data on the backend or if the backend uses
inadequate data encryption.
To check whether an MDisk has been detected or declared as self-encrypting, click Pools →
MDisk by Pools and customize the view to show the encryption state by selecting the menu
bar, right-clicking it, and clicking the Encryption Key icon, as shown in Figure 12-70.
Note that the value shown in the Encryption column shows the property of objects in
respective rows. That means that in the configuration shown in Figure 12-70 on page 680,
Pool1 is encrypted, so every volume created from this pool will be encrypted. However, that
pool is backed by three MDisks, out of which two are self-encrypting and one is not.
Therefore, a value of no next to mdisk7 does not imply that encryption of Pool1 is in any way
compromised. It only indicates that encryption of the data placed on mdisk7 will be done by
using software encryption, while data placed on mdisk2 and mdisk8 will be encrypted by the
back-end storage providing these MDisks.
Note: You can change the self-encrypting attribute of an MDisk that is unmanaged or is a
part of an unencrypted pool. However, you cannot change the self-encrypting attribute of
an MDisk after it has been added to an encrypted pool.
You can modify Volumes view to show if the volume is encrypted. Click Volumes → Volumes,
then click Actions → Customize Columns → Encryption to customize the view to show
volumes encryption status, as shown in Figure 12-71.
Note that a volume is reported as encrypted only if all the volume copies are encrypted, as
shown in Figure 12-72.
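The volume encryption status can also be checked from the CLI. A minimal sketch, with a hypothetical volume name:
lsvdisk vdisk0
In the detailed view, the encrypt field reports yes only when every copy of the volume resides on encrypted storage.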
For more information about either method, see Chapter 7, “Volumes” on page 251.
12.8.6 Restrictions
The following restrictions apply to encryption:
Image mode volumes cannot be in encrypted pools.
You cannot add external non self-encrypting MDisks to encrypted pools unless all nodes
in the cluster support encryption.
Nodes that cannot perform software encryption cannot be added to systems with
encrypted pools that contain external MDisks that are not self-encrypting.
If you have both USB and key server enabled, then rekeying is done separately for each of
the providers.
Important: Before you create a master access key, ensure that all nodes are online and
that the current master access key is accessible.
There is no method to directly change data encryption keys. If you need to change the data
encryption key used to encrypt given data, then the only available method is to migrate that
data to a new encrypted object (for example, an encrypted child pool). Because the data
encryption keys are defined per encrypted object, such migration will force a change of the
key that is used to encrypt that data.
To rekey the master access key kept on the key server provider, complete these steps:
1. Click Settings → Security → Encryption, ensure that Encryption Keys shows that all
configured SKLM servers are reported as Accessible, as shown in Figure 12-74. Click
Key Servers to expand the section.
3. Click Yes in the next window to confirm the rekey operation, as shown in Figure 12-76.
Note: The rekey operation is performed only on the primary key server configured in
the system. If you have additional key servers configured apart from the primary one,
they will not hold the updated encryption key until they obtain it from the primary key
server. To restore encryption key provider redundancy after a rekey operation, replicate
the encryption key from the primary key server to the secondary key servers.
You receive a message confirming that the rekey operation was successful, as shown in
Figure 12-77.
After the rekey operation is complete, update all other copies of the encryption key, including
copies stored on other media. Take the same precautions to securely store all copies of the
new encryption key as when you were enabling encryption for the first time.
To rekey the master access key on USB flash drives, complete these steps:
1. Click Settings → Security → Encryption. Click USB Flash Drives to expand the
section, as shown in Figure 12-78.
Figure 12-78 Locate USB Flash Drive section in the Encryption view
3. If the system detects a validated USB flash drive and at least three available USB flash
drives, new encryption keys are automatically copied on the USB flash drives, as shown in
Figure 12-80. Click Commit to finalize the rekey operation.
4. You should receive a message confirming the rekey operation was successful, as shown
in Figure 12-81. Click Close.
If you only have the USB key provider enabled, and you choose to enable the key server, then
the GUI gives you an option to disable the USB key provider during key server configuration.
Follow the procedure as described in 12.4.3, “Enabling encryption using key servers” on
page 655. During key server provider configuration, the wizard asks if the USB flash drives
provider should be disabled, as shown in Figure 12-82. Select Yes and continue with the
procedure to migrate from USB to SKLM provider.
Figure 12-82 Disable the USB provider using the encryption key server configuration wizard
It is also possible to migrate from key server provider to USB provider or, if you have both
providers enabled, to disable either of them. However, these operations are possible only by
using the CLI.
2. You receive a message confirming that encryption has been disabled. Figure 12-84 shows
the message when using a key server.
Fault tolerance and high levels of availability are achieved by the following methods:
The Redundant Array of Independent Disks (RAID) capabilities of the underlying disks
IBM SAN Volume Controller nodes clustering by using a Compass architecture
Auto-restart of hung nodes
Integrated Battery Backup Units (BBU) to provide memory protection if there is a site
power failure
Host system failover capabilities using N-Port ID Virtualization (NPIV)
Hot Spare Node option to provide complete node redundancy and failover
The heart of an IBM Spectrum Virtualize system is one or more pairs of nodes. The nodes share
the read and write data workload between the attached hosts and the disk arrays. This section
examines the RAS features of the IBM SAN Volume Controller system, monitoring, and
troubleshooting.
IBM SAN Volume Controller nodes are always installed in pairs, forming an I/O group. A
minimum of one pair and a maximum of four pairs of nodes constitute a clustered SVC
system. Many of the components that make up IBM SAN Volume Controller nodes include
light-emitting diodes (LEDs) that indicate status and the activity of that component.
Figure 13-1 shows the rear view, ports, and indicator lights on the IBM SAN Volume
Controller node model 2145-SV1.
Figure 13-1 Rear ports and indicators of IBM SAN Volume Controller model 2145-SV1
USB ports
Two active Universal Serial Bus (USB) connectors are available in the horizontal position.
They have no numbers and no indicators are associated with them. These ports can be used
for initial cluster setup, encryption key backup, and node status or log collection.
Each port is associated with one green and one amber LED that indicate the status of its
operation, as listed in Table 13-3.
Figure 13-2 Rear view and indicators of IBM SAN Volume Controller model 2145-SV1
The next section explains the LED components and the condition that are associated with
them.
Power LED
The Power LED has these statuses:
Off: When the Power LED is off, the IBM SAN Volume Controller node has no power at the
power supply or both power supplies have failed.
On: When the Power LED is on, the IBM SAN Volume Controller node is on.
Flashing: When the Power LED is flashing, the IBM SAN Volume Controller is off, but it
has power at the power supplies.
Identify LED
The Identify LED has this status:
This LED flashes if the identify feature is on. This function can be used to find a specific
node at the data center.
The SAS status LED indicators on the expansion canister have the same meaning as the LED
indicators of the SAS ports on the IBM SAN Volume Controller node. Table 13-4 shows the LED
status values of the expansion canister.
13.1.3 Adding a node to an SVC system
Before you add a node to a system, ensure that you have configured the switch zoning so that
the node you are adding is in the same zone as all other nodes in the system. If you are
replacing a node and the switch is zoned by worldwide port name (WWPN) rather than by
switch port, you must follow the service instructions carefully to continue to use the same
WWPNs.
Complete the following steps to add a node to the SVC clustered system:
1. If the switch zoning is correct, you see the additional I/O Group as a gray empty frame on
the Monitoring → System pane. Figure 13-4 shows this empty frame.
2. Click the disabled node representing an empty io_grp1 to open the Add Nodes window.
4. Ensure that you have selected the correct available nodes to be added and click Finish. If
the existing system has encryption enabled, you are prompted to enable encryption on the
selected nodes. Encryption licenses need to be installed for all nodes in the system.
5. The system display, as shown in Figure 13-6, shows the new nodes in I/O group 1, where
one node has completed adding while the other node is still being added. This process can
take approximately 30 minutes because the system automatically updates the code level on
the new node if it does not match the system level.
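The same operation can be performed from the CLI. A minimal sketch, in which the panel name and I/O group are placeholders:
lsnodecandidate
addnode -panelname 78ABCDE -iogrp io_grp1
lsnodecandidate lists nodes that are visible on the fabric but are not yet part of a system, and addnode joins the selected candidate node to the specified I/O group.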
13.1.4 Removing a node from an SVC clustered system
From the System window, complete the following steps to remove a node:
1. Click the front panel of the node you want to remove.
2. From the enlarged view of the node, right-click the front panel as shown in Figure 13-7.
Select Remove from the menu.
Figure 13-7 Remove a node from the SVC clustered system action
3. A Warning window shown in Figure 13-8 opens. Read the warnings before continuing by
clicking Yes.
Warning: By default, the cache is flushed before the node is deleted to prevent data
loss if a failure occurs on the other node in the I/O Group.
In certain circumstances, such as when the node is already offline, you can remove the
specified node immediately without flushing the cache or ensuring that data loss does
not occur. Select Bypass check for volumes that will go offline, and remove the
node immediately without flushing its cache.
Figure 13-9 System Details pane with one SVC node removed
5. If this node is the last node in the system, the warning message shown in Figure 13-10 is
displayed. Before you delete the last node in the system, ensure that you want to delete
the system. The user interface and any open CLI sessions are lost.
Figure 13-10 Warning window for the removal of the last node in the cluster
6. After you click OK, the node becomes a candidate, ready to be added into an SVC cluster
or create a new system.
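The equivalent CLI command is rmnode. A minimal sketch, with a hypothetical node name:
rmnode node2
As with the GUI procedure, the cache is flushed before the node is removed unless the -force parameter is specified, which should be reserved for cases such as a node that is already offline.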
13.1.5 Power
IBM SAN Volume Controller nodes and disk enclosures accommodate two power supply units
(PSUs) for normal operation. For this reason, it is highly advised to supply AC power to each
PSU from different Power Distribution Units (PDUs).
A fully charged battery is able to perform two fire hose dumps. It can also sustain a power
outage of up to 5 seconds before safety procedures are initiated.
Figure 13-11 shows two PSUs present in the IBM SAN Volume Controller node. The
controller PSU has two green and one amber indication LEDs reporting the status of the PSU.
Power supplies in nodes, dense drawers, and regular expansion enclosures are hot-swappable
and can be replaced without shutting down a node or the cluster.
If the power to a node is interrupted for less than 5 seconds, the node or enclosure does not
perform a fire hose dump and continues operation from the battery. This feature is useful, for
example, during maintenance of UPS systems in the data center or when moving the power
cables to a different power source or PDU. A fully charged battery is able to perform two
fire hose dumps.
Important: Never shut down your IBM SAN Volume Controller cluster by powering off the
PSUs, removing both PSUs, or removing both power cables from the nodes. It can lead to
inconsistency or loss of the data staged in the cache.
Before shutting down the cluster, stop all hosts that have volumes allocated from the device.
This step can be skipped for hosts that have volumes that are also provisioned with mirroring
(host-based mirror) from different storage devices. However, skipping it causes errors that are
related to lost storage paths and disks to be recorded in the host error log.
You can shut down a single node, or shut down the entire cluster. When you shut down only
one node, all activities remain active. When you shut down the entire cluster, you need to
power on the nodes locally to start the system again.
If all input power to the SVC clustered system is removed for more than a few minutes (for
example, if the machine room power is shut down for maintenance), it is important that you
shut down the SVC system before you remove the power.
Shutting down the system while it is still connected to the main power ensures that the
internal node batteries are still fully charged when the power is restored.
When power is restored, the nodes start. However, if the node batteries have insufficient
charge to survive another power failure and allow the node to perform another clean
shutdown, the node enters service mode. The batteries must not run out of power in the
middle of a node’s shutdown.
It can take approximately 3 hours to charge the batteries sufficiently for a node to come
online.
Important: When a node shuts down because of a power loss, the node dumps the cache
to an internal Flash drive so that the cached data can be retrieved when the system starts
again.
The SAN Volume Controller (SVC) internal batteries are designed to survive at least two
power failures in a short time. After that time, the nodes will not come online until the batteries
have sufficient power to survive another immediate power failure.
During maintenance activities, if the batteries detect power and then detect a loss of power
multiple times (the nodes start and shut down more than once in a short time), you might
discover that you have unknowingly drained the batteries. In this case, you must wait until
they are charged sufficiently before the nodes start again.
A confirmation window opens, as shown in Figure 13-13.
2. Before you continue, ensure that you stopped all FlashCopy mappings, remote copy
relationships, data migration operations, and forced deletions.
Attention: Pay special attention when encryption is enabled on some storage pools.
You must have inserted a USB drive with the stored encryption keys, or you must ensure
that your IBM SAN Volume Controller can communicate with the SKLM server or clone
servers to retrieve the encryption keys. Otherwise, the data will not be readable after the restart.
3. Enter the generated confirmation code and click OK to begin the shutdown process.
Important: Generally, perform a daily backup of the IBM Spectrum Virtualize configuration
backup file. The best approach is to automate this task. Always perform an additional
backup before any critical maintenance task, such as an update of the IBM Spectrum
Virtualize software version.
The backup file is updated by the cluster every day. Saving it after any changes to your
system configuration is also important. It contains configuration data of arrays, pools,
volumes, and so on. The backup does not contain any data from the volumes.
To successfully perform the configuration backup, follow the prerequisites and requirements:
All nodes must be online.
No independent operations that change the configuration can be running in parallel.
No object name can begin with an underscore.
The svcconfig backup command generates three files that provide information about the
backup process and cluster configuration. These files are dumped into the /tmp directory on
the configuration node. Use the lsdumps command to list them (Example 13-2).
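A minimal sketch of this CLI sequence follows; the cluster name is the one used elsewhere in this book, and the output is abbreviated and illustrative:
IBM_2145:ITSO_DH8:superuser>svcconfig backup
....................................
CMMVC6155I SVCCONFIG processing completed successfully
IBM_2145:ITSO_DH8:superuser>lsdumps
(the listing includes svc.config.backup.xml, svc.config.backup.sh, and svc.config.backup.log)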
Table 13-5 describes the three files that are created by the backup process.
svc.config.backup.xml This file contains the current cluster configuration data.
svc.config.backup.sh This file contains the names of the commands that were issued to
create the backup of the cluster.
svc.config.backup.log This file contains details about the backup, including any error
information that might have been reported.
Save the current backup to a secure and safe location. The files can be downloaded by using scp on UNIX systems or pscp on Microsoft Windows, as shown in Example 13-3. Replace the IP address with the cluster IP address of your SVC and specify a local folder on your workstation. In this example, we save to C:\SVCbackups.
C:\>dir SVCbackups\
Volume in drive C has no label.
Volume Serial Number is 1825-978F
Directory of C:\SVCbackups
C:\>
The use of the -unsafe option enables you to use the wildcard for downloading all the
svc.config.backup files in a single command.
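As a sketch, assuming a Windows workstation and a placeholder cluster IP address, such a wildcard download might look like this:
C:\>pscp -unsafe superuser@<cluster_ip>:/tmp/svc.config.backup.* C:\SVCbackups\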
Tip: If you encounter Fatal: Received unexpected end-of-file from server when using
the pscp command, consider upgrading your version of PuTTY.
The Download New Support Package or Log File window opens, as shown in
Figure 13-15.
4. Click Download Existing Package to launch a list of files found on the config node. We
filtered the view by clicking in the Filter box, entering backup, and pressing Enter, as
shown in Figure 13-16.
5. Select all of the files to include in the compressed file, then click Download. Depending on your browser preferences, you might be prompted for a location to save the file, or it is downloaded to your defined download directory.
The format for the software update package name ends in four positive integers that are
separated by dots. For example, a software update package might have the following name:
IBM_2145_INSTALL_8.1.0.0
Important: Before you attempt any IBM Spectrum Virtualize code update, read and
understand the concurrent compatibility and code cross-reference matrix. For more
information, see the following website and click Latest IBM Spectrum Virtualize code:
http://www.ibm.com/support/docview.wss?uid=ssg1S1001707
If you do not perform this check, certain hosts might lose connectivity to their volumes and
experience I/O errors.
Download the software update utility from this page where you can also download the
firmware. This procedure ensures that you get the current version of this utility. You can use
the svcupgradetest utility to check for known issues that might cause problems during a
software update.
The software update test utility can be downloaded in advance of the update process.
Alternately, it can be downloaded and run directly during the software update, as guided by
the update wizard.
You can run the utility multiple times on the same system to perform a readiness check in
preparation for a software update. Run this utility for a final time immediately before you apply
the IBM Spectrum Virtualize update to ensure that there were no new releases of the utility
since it was originally downloaded.
The installation and use of this utility is nondisruptive, and does not require restart of any
nodes. Therefore, there is no interruption to host I/O. The utility is only installed on the current
configuration node.
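As a sketch of the CLI alternative (the utility file name is illustrative and varies by release), the test utility is installed with the applysoftware command and then run against the target code level:
IBM_2145:ITSO_DH8:superuser>applysoftware -file IBM2145_INSTALL_upgradetest_<version>
IBM_2145:ITSO_DH8:superuser>svcupgradetest -v 8.1.0.0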
System administrators must continue to check whether the version of code that they plan to
install is the latest version. You can obtain the current information about the following website:
https://ibm.biz/BdjviZ
This utility is intended to supplement rather than duplicate the existing tests that are
performed by the IBM Spectrum Virtualize update procedure (for example, checking for
unfixed errors in the error log).
Concurrent software update of all components is supported through the standard Ethernet
management interfaces. However, during the update process, most of the configuration tasks
are restricted.
13.4.3 Update procedure to IBM Spectrum Virtualize V8.1
To update the IBM Spectrum Virtualize software, complete the following steps:
1. Open a supported web browser and navigate to your cluster IP address. A login window
opens (Figure 13-17).
2. Log in with superuser rights. The IBM SVC management home window opens. Move the
cursor over Settings and click System (Figure 13-18).
4. From this window, you can select either to run the update test utility and continue with the
code update or just run the test utility. For this example, we click Update and Test.
Go to the following address (an IBM account is required) and add your system to the
notifications list to be advised of support information, and to download the current code
to your workstation for later upload:
http://www.ibm.com/software/support/einfo.html
5. Because you have previously downloaded both files from https://ibm.biz/BdjviZ, you
can click each yellow folder, browse to the location where you saved the files, and upload
them to the SVC cluster. If the files are correct, the GUI detects and updates the target
code level as shown in Figure 13-20.
Figure 13-20 Upload option for both Test utility and Update package
6. Select the type of update you want to perform, as shown in Figure 13-21. Select
Automatic update unless IBM Support has suggested a Service Assistant Manual
update. The manual update might be preferable in cases where misbehaving host
multipathing is known to cause loss of access. Click Finish to begin the update package
upload process.
7. When updating from a V8.1 or later level, an additional window is displayed at this point, allowing you to choose a fully automated update, one that pauses when half of the nodes have completed the update, or one that pauses after each node update, as shown in Figure 13-22. The pause options require you to click Resume to continue the update after each pause. Click Finish.
8. After the update packages have uploaded, the update test utility looks for any known
issues that might affect a concurrent update of your system. The GUI helps identify any
detected issues, as shown in Figure 13-23.
The results pane opens and shows you what issues were detected (Figure 13-25). In our
case, the warning is that email notification (call home) is not enabled. Although this is not
a recommended condition, it does not prevent the system update from running. Therefore,
we can click Close and proceed with the update. However, you might need to contact IBM
Support to assist with resolving more serious issues before continuing.
10.Click Resume on the Update System window and the update proceeds as shown in
Figure 13-26.
11.Due to the utility detecting issues, another warning comes up to ensure that you have
investigated them and are certain you want to proceed, as shown in Figure 13-27. When
you are ready to proceed, click Yes.
12.The system begins updating the IBM Spectrum Virtualize software by taking one node
offline and installing the new code. This process takes approximately 20 minutes. After the
node returns from the update, it is listed as complete as shown in Figure 13-28.
Tip: If you are updating from V7.8 or later code, the 30-minute wait period can be
adjusted by using the applysoftware CLI command with the -delay (mins) parameter
to begin the update instead of using the GUI.
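A minimal sketch of starting the update from the CLI with an adjusted delay (the package name is the one shown earlier in this chapter; the delay value is illustrative):
IBM_2145:ITSO_DH8:superuser>applysoftware -file IBM_2145_INSTALL_8.1.0.0 -delay 45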
You now see the new V8.1 GUI and the status of the second node updating in
Figure 13-30.
After the second node completes, the update is committed to the system, as shown in
Figure 13-31.
14.The update process completes when all nodes and the system are committed. The final
status indicates the new level of code that is installed in the system.
Note: If your nodes have greater than 64 GB of memory before updating to V8.1, each
node will post an 841 error after the update completes. Because V8.1 allocates
memory differently, the memory must be accepted by running the fix procedure for the
event or issue the CLI command svctask chnodehw <id> for each node. See the SAN
Volume Controller IBM Knowledge Center for more information:
https://ibm.biz/BdjmK3
To use this feature, the spare node must be either a DH8 or SV1 node type and have the
same amount of memory and matching FC port configuration as the other nodes in the
cluster. Up to four hot spare nodes can be added to a cluster and must be zoned as part of
the SVC Cluster. Figure 13-32 shows how the GUI displays the hot spare node online while
the original cluster node is offline for update.
To update the IBM SAN Volume Controller internal drives code, complete these steps:
1. Download the latest Drive firmware package for IBM SAN Volume Controller from this
website:
https://www.ibm.com/support/docview.wss?uid=ssg1S1003843
2. On the IBM Spectrum Virtualize GUI, navigate to Pools → Internal Storage and select All Internal.
3. Click Actions and select Upgrade all, as shown in Figure 13-34.
Tip: The Upgrade all drives action is displayed only if you have not selected any individual drive in the list. If you have clicked an individual drive in the list, the menu offers individual drive actions instead, and selecting Upgrade upgrades only that drive's firmware. You can clear an individual drive selection by holding the Ctrl key and clicking the drive again.
4. The Upgrade all drives window opens, as shown in Figure 13-35. Click the small folder at the right end of the Upgrade package entry box to navigate to where you saved the downloaded file in step 1. Click Upgrade to upload the firmware package and begin upgrading any drives that are downlevel. Do not select the option to install the firmware even if the drive is running a newer level; only do this under guidance from IBM Support.
5. With the drive upgrades running, you can view the progress by clicking the Tasks icon and
clicking View for the Drive Upgrade running task, as shown in Figure 13-36.
The Drive upgrade running task panel is displayed, listing drives pending upgrade and an
estimated time of completion, as shown in Figure 13-37.
6. You can also view each drive's firmware level from the Pools → Internal Storage → All Internal window by enabling the Firmware level column after right-clicking the column header line, as shown in Figure 13-38.
With the Firmware level column enabled, you can see the current level of each drive, as
shown in Figure 13-39.
After uploading the update test utility and software update package to the cluster by using PSCP, and running the test utility, complete the following steps (a sketch of the related CLI commands follows the procedure):
1. Start by removing node 2, which is the partner node of the configuration node in iogrp 0,
using either the cluster GUI or CLI.
2. Log in to the service GUI to verify that the removed node is in candidate status.
3. Select the candidate node and click Update Manually from the left pane.
4. Browse and locate the code that you already downloaded and saved to your PC.
5. Upload the code and click Update.
When the update is completed, a message caption indicating software update completion
displays. The node then reboots, and appears again in the service GUI after
approximately 20 - 25 minutes in candidate status.
6. Select the node and verify that it is updated to the new code.
7. Add the node back by using either the cluster GUI or the CLI.
8. Select node 3 from iogrp1.
9. Repeat steps 1 - 7 to remove node 3, update it manually, verify the code, and add it back
to the cluster.
10.Proceed to node 5 in iogrp 2.
11.Repeat steps 1 - 7 to remove node 5, update it manually, verify the code, and add it back
to the cluster.
12.Move on to node 7 in iogrp 3.
13.Repeat steps 1 - 7 to remove node 7, update it manually, verify the code, and add it back
to the cluster.
Note: At this point, the update is 50% completed. You now have one node from each
iogrp updated with the new code manually. Always leave the configuration node for last
during a manual IBM Spectrum Virtualize software update.
19.Repeat steps 1 - 7 to remove node 8, update it manually, verify the code, and add it back
to the cluster.
20.Lastly, select and remove node 1, which is the configuration node in iogrp 0.
Note: When the original configuration node is removed from the cluster, a partner node becomes the configuration node, which keeps the cluster manageable.
The removed configuration node becomes a candidate, and you do not have to apply the code update manually. Simply add the node back to the cluster. It automatically updates itself and then adds itself back to the cluster with the new code.
21.After all the nodes are updated, you must confirm the update to complete the process. The
confirmation restarts each node in order, which takes about 30 minutes to complete.
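For reference, the following is a minimal sketch of the CLI side of this manual procedure. The upload directory (/home/admin/upgrade), node ID, and panel name are assumptions for illustration; the code update of each candidate node itself is performed in the service GUI, as described in the steps above.
C:\>pscp IBM2145_INSTALL_upgradetest_<version> superuser@<cluster_ip>:/home/admin/upgrade
C:\>pscp IBM_2145_INSTALL_8.1.0.0 superuser@<cluster_ip>:/home/admin/upgrade
IBM_2145:ITSO_DH8:superuser>rmnode 2            (remove the partner node of the config node)
(update the candidate node manually in the service GUI, then add it back)
IBM_2145:ITSO_DH8:superuser>addnode -panelname <panel_name> -iogrp 0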
For a video guide on how to set up and use IBM Call Home Web, see:
https://www.youtube.com/watch?v=7G9rqk8NXPA
Another feature is the Critical Fix Notification function, which enables IBM to warn IBM Spectrum Virtualize users that a critical issue exists in the level of code that they are using. The system notifies users when they log on to the GUI by using a web browser that is connected to the internet.
The decision about what constitutes a critical fix is subjective and requires judgment, which is exercised by the development team. As a result, clients might still encounter bugs in code that were not deemed critical. Clients should continue to review information about new code levels to determine whether to update, even without a critical fix notification.
Important: Inventory notification must be enabled and operational for these features to
work. It is strongly preferred to enable Call Home and Inventory reporting on your IBM
Spectrum Virtualize clusters.
Figure 13-41 shows the Monitoring menu for System information, viewing Events, or seeing
real-time Performance statistics.
Use the management GUI to manage and service your system. Select Monitoring → Events
to list events that should be addressed and maintenance procedures that walk you through
the process of correcting problems. Information in the Events window can be filtered in three
ways:
Recommended Actions
Shows only the alerts that require attention. Alerts are listed in priority order and should be
resolved sequentially by using the available fix procedures. For each problem that is
selected, you can do these tasks:
– Run a fix procedure
– View the properties
Unfixed Messages and Alerts
Displays only the alerts and messages that are not fixed. For each entry that is selected,
you can perform these tasks:
– Run a fix procedure
– Mark an event as fixed
– Filter the entries to show them by specific minutes, hours, or dates
– Reset the date filter
– View the properties
Show All
Displays all event types, whether they are fixed or unfixed. For each entry that is selected,
you can perform these tasks:
– Run a fix procedure
– Mark an event as fixed
– Filter the entries to show them by specific minutes, hours, or dates
– Reset the date filter
– View the properties
Some events require a certain number of occurrences in 25 hours before they are displayed
as unfixed. If they do not reach this threshold in 25 hours, they are flagged as expired.
Monitoring events are below the coalesce threshold, and are usually transient.
Important: The management GUI is the primary tool that is used to operate and service
your system. Real-time monitoring should be established by using SNMP traps, email
notifications, or syslog messaging in an automatic manner.
Use the views that are available in the management GUI to verify the status of the system, the
hardware devices, the physical storage, and the available volumes by completing these steps:
1. Click Monitoring → Events to see all problems that exist on the system (Figure 13-42).
2. Select Show All → Recommended Actions to display the most important events to be
resolved (Figure 13-43). The Recommended Actions tab shows the highest priority
maintenance procedure that must be run. Use the troubleshooting wizard so that IBM SAN
Volume Controller can determine the proper order of maintenance procedures.
In this example, the number of device logins reduced is listed (service error code 1630).
Review the physical FC cabling to determine the issue and then click Run Fix. At any time
and from any GUI window, you can directly go to this menu by using the Alerts icon at the
top of the GUI (Figure 13-44).
If an error is reported, always use the fix procedures from the management GUI to resolve the
problem. Always use the fix procedures for both software configuration problems and
hardware failures. The fix procedures analyze the system to ensure that the required changes
will not cause volumes to become inaccessible to the hosts. The fix procedures automatically
perform configuration changes that are required to return the system to its optimum state.
The fix procedure displays information that is relevant to the problem, and provides various
options to correct the problem. Where possible, the fix procedure runs the commands that are
required to reconfigure the system.
Note: After V7.4, you are no longer required to run the fix procedure for a failed internal
enclosure drive. Hot plugging of a replacement drive will automatically trigger the validation
processes.
The fix procedure also checks that any other existing problem will not result in volume access
being lost. For example, if a power supply unit in a node enclosure must be replaced, the fix
procedure checks and warns you if the integrated battery in the other power supply unit is not
sufficiently charged to protect the system.
Hint: Always use the Run Fix button, which resolves the most serious issues first. Often,
other alerts will be corrected automatically because they were the result of a more serious
issue.
Figure 13-45 Initiate Run Fix procedure from the management GUI
2. The pop-up window prompts you to indicate whether the issue was caused by a planned
change or maintenance task, or whether it appeared in an unexpected manner
(Figure 13-46).
3. If you answer Yes, the fix procedure finishes, assuming the changes in the system were
done on purpose and no other action is necessary. However, our example simulates a
failed SFP in the SAN switch and we continue the fix procedure. Select No and click Next.
4. In the next window (Figure 13-47), the IBM Spectrum Virtualize GUI lists suggested
actions and which components must be checked to fix and resolve the error. When you
are sure that all possible technical requirements are met (in our case, we replaced a failed
SFP in the SAN switch), click Next.
5. An event has been marked as fixed, and you can safely finish the fix procedure. Click
Close and the event is removed from the list of events (Figure 13-50).
13.6.3 Resolve alerts in a timely manner
To minimize any impact to your host systems, always perform the recommended actions as
quickly as possible after a problem is reported. Your system is designed to be resilient to most
single hardware failures. However, if it operates for any period with a hardware failure, the
possibility increases that a second hardware failure can result in volume data that is
unavailable. If several unfixed alerts exist, fixing any one alert might become more difficult
because of the effects of the others.
Select or remove columns as needed. You can also extend or shrink the width of a column to fit your screen resolution and size. You can manipulate most grids in the management GUI of IBM Spectrum Virtualize in this way, not just the events pane.
Every field of the event log is available as a column in the event log grid. Several fields are useful when you work with IBM Support, notably the sequence number, event count, and fixed state. The preferred method in this case is to use the Show All filter, with events sorted by time stamp. Using Restore Default View sets the grid back to the defaults.
You might want to see more details about each critical event. Some details are not shown in
the main grid. To access properties and sense data of a specific event, double-click the
specific event anywhere in its row.
For more information about troubleshooting options, see the IBM SAN Volume Controller
Troubleshooting section in IBM Knowledge Center, which is available at:
https://ibm.biz/Bdjmgz
13.7 Monitoring
An important step is to correct any issues that are reported by your IBM SAN Volume
Controller as soon as possible. Configure your system to send automatic notifications when a
new event is reported. To avoid having to monitor the management GUI for new events, select
the type of event for which you want to be notified. For example, restrict notifications to just
events that require action. Several event notification mechanisms exist:
Email An event notification can be sent to one or more email addresses. This
mechanism notifies individuals of problems. Individuals can receive
notifications wherever they have email access, including mobile devices.
SNMP An SNMP trap report can be sent to a data center management system,
such as IBM Systems Director, that consolidates SNMP reports from
multiple systems. With this mechanism, you can monitor your data center
from a single workstation.
Syslog A syslog report can be sent to a data center management system that
consolidates syslog reports from multiple systems. With this option, you
can monitor your data center from a single location.
If your system is within warranty or if you have a hardware maintenance agreement, configure
your IBM SAN Volume Controller cluster to send email events directly to IBM if an issue that
requires hardware replacement is detected. This mechanism is known as Call Home. When
this event is received, IBM automatically opens a problem ticket and, if appropriate, contacts
you to help resolve the reported problem.
Important: If you set up Call Home to IBM, ensure that the contact details that you
configure are correct and kept up to date. Personnel changes can cause delays in IBM
making contact.
3. After clicking Next on the welcome window, provide the information about the location of the system (Figure 13-55) and the contact information of the IBM SAN Volume Controller administrator (Figure 13-56 on page 731) so that IBM Support can contact them. Always keep this information current.
Figure 13-56 shows the contact information of the owner.
4. Configure the IP address of your company SMTP server, as shown in Figure 13-57. When
the correct SMTP server is provided, you can test the connectivity using Ping to its IP
address. You can configure additional SMTP servers by clicking the Plus sign (+) at the
end of the entry line. When you are done, click Apply and Next.
6. After completing the configuration wizard, test the email function. To do so, enter Edit
mode, as illustrated in Figure 13-59. In the same window, you can define additional email
recipients or alter any contact and location details as needed.
We strongly suggest that you keep the option to send inventory reports to IBM Support enabled. Inventory content might not be of interest to local users, although it can serve as a basis for inventory and asset management.
7. In Edit mode, you can change any of the previously configured settings. After you are
finished editing these parameters, adding more recipients, or just testing the connection,
save the configuration to make the changes take effect (Figure 13-60).
Note: The Test button will appear for new email users after first saving and then editing
again.
Note: Clients who have purchased Enterprise Class Support (ECS) are entitled to IBM
support using Remote Support Assistance to quickly connect and diagnose problems.
However, IBM Support might choose to use this feature on non-ECS systems at their
discretion. Therefore, configure and test the connection on all systems.
If you are enabling Remote Support Assistance, then ensure that the following prerequisites
are met:
1. Ensure that call home is configured with a valid email server.
2. Ensure that a valid service IP address is configured on each node on IBM Spectrum
Virtualize.
3. If your SAN Volume Controller is behind a firewall or if you want to route traffic from
multiple storage systems to the same place, you must configure a Remote Support Proxy
server. Before you configure remote support assistance, the proxy server must be
installed and configured separately. During the setup for support assistance, specify the IP
address and the port number for the proxy server on the Remote Support Centers window.
4. If you do not have firewall restrictions and the SAN Volume Controller nodes are directly
connected to the internet, request your network administrator to allow connections to
129.33.206.139 and 204.146.30.139 on Port 22.
5. Both uploading support packages and downloading software require direct connections to
the internet. A DNS server must be defined on your SAN Volume Controller for both of
these functions to work.
6. To ensure that support packages are uploaded correctly, configure the firewall to allow
connections to the following IP addresses on port 443: 129.42.56.189, 129.42.54.189, and
129.42.60.189.
7. To ensure that software is downloaded correctly, configure the firewall to allow connections
to the following IP addresses on port 22: 170.225.15.105,170.225.15.104,
170.225.15.107, 129.35.224.105, 129.35.224.104, and 129.35.224.107.
Figure 13-62 shows a pop-up window that appears in the GUI after updating to V8.1. It prompts you to configure your SAN Volume Controller for Remote Support. You can choose not to enable it, to open a tunnel when needed, or to open a permanent tunnel to IBM.
You can choose to configure SAN Volume Controller, learn some more about the feature, or
just close the window by clicking the X. Figure 13-63 shows how you can find the Setup
Remote Support Assistance if you have closed the window.
Note: Selecting I want support personnel to work on-site only does not entitle you to expect IBM Support to attend on-site for all issues. Most maintenance contracts are for customer-replaceable unit (CRU) support, where IBM diagnoses your problem and sends a replacement component for you to replace if required. If you prefer to have IBM perform replacement tasks for you, contact your local sales representative to investigate an upgrade to your current maintenance contract.
2. The next window, shown in Figure 13-65, lists the IBM Support center's IP addresses and the SSH port that need to be open in your firewall. You can also define a Remote Support Assistance Proxy if you have multiple Storwize V7000 or SAN Volume Controller systems in the data center, so that firewall configuration is required only for the proxy server rather than for every storage system. We do not have a proxy server, so we leave the field blank and click Next.
3. The next window asks whether you want to open a tunnel to IBM permanently, allowing IBM to connect to your SAN Volume Controller At Any Time, or On Permission Only, as shown in Figure 13-66. On Permission Only requires a storage administrator to log on to the GUI and enable the tunnel when required. Click Finish.
5. A pop-up window asks how long you would like the tunnel to remain open if there is no
activity by setting a timeout value. As shown in Figure 13-68, the connection is established
and waits for IBM Support to connect.
You can configure an SNMP server to receive various informational, error, or warning
notifications by entering the following information (Figure 13-69 on page 740):
IP Address
The address for the SNMP server.
Server Port
The remote port number for the SNMP server. The remote port number must be a value of
1 - 65535 where the default is port 162 for SNMP.
Community
The SNMP community is the name of the group to which devices and management
stations that run SNMP belong. Typically, the default of public is used.
Event Notifications
Consider the following points about event notifications:
– Select Error if you want the user to receive messages about problems, such as
hardware failures, that must be resolved immediately.
– Select Warning if you want the user to receive messages about problems and
unexpected conditions. Investigate the cause immediately to determine any corrective
action, such as a space-efficient volume running out of space.
– Select Info if you want the user to receive messages about expected events. No action
is required for these events.
To add an SNMP server, click Actions → Add and fill out the Add SNMP Server window, as
shown in Figure 13-70. To remove an SNMP server, click the line with the server you want to
remove, and select Actions → Remove.
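If you prefer the CLI, an SNMP server can be defined in a similar way. A minimal sketch, assuming the default public community and the notification levels described above (the IP address is a placeholder):
IBM_2145:ITSO_DH8:superuser>mksnmpserver -ip <snmp_server_ip> -community public -error on -warning on -info off
IBM_2145:ITSO_DH8:superuser>lssnmpserver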
13.7.5 Syslog notifications
The syslog protocol is a standard protocol for forwarding log messages from a sender to a
receiver on an IP network. The IP network can be IPv4 or IPv6. The system can send syslog
messages that notify personnel about an event.
You can configure a syslog server to receive log messages from various systems and store
them in a central repository by entering the following information (Figure 13-71):
IP Address
The IP address for the syslog server.
Facility
The facility determines the format for the syslog messages. The facility can be used to
determine the source of the message.
Message Format
The message format depends on the facility. The system can transmit syslog messages in
the following formats:
– The concise message format provides standard detail about the event.
– The expanded format provides more details about the event.
Event Notifications
Consider the following points about event notifications:
– Select Error if you want the user to receive messages about problems, such as
hardware failures, that must be resolved immediately.
– Select Warning if you want the user to receive messages about problems and
unexpected conditions. Investigate the cause immediately to determine whether any
corrective action is necessary.
– Select Info if you want the user to receive messages about expected events. No action
is required for these events.
The syslog messages are sent in concise message format or expanded message format
depending on the Facility level chosen.
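A syslog server can likewise be defined from the CLI. A minimal sketch with placeholder values for the server address and facility:
IBM_2145:ITSO_DH8:superuser>mksyslogserver -ip <syslog_server_ip> -facility 0 -error on -warning on -info on
IBM_2145:ITSO_DH8:superuser>lssyslogserver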
The audit log tracks action commands that are issued through a Secure Shell (SSH) session,
through the management GUI or Remote Support Assistance. It provides the following
entries:
Identity of the user who issued the action command
Name of the actionable command
Time stamp of when the actionable command was issued on the configuration node
Parameters that were issued with the actionable command
Several specific service commands are not included in the audit log:
dumpconfig
cpdumps
cleardumps
finderr
dumperrlog
dumpintervallog
svcservicetask dumperrlog
svcservicetask finderr
Figure 13-72 shows the access to the audit log. Click Audit Log in the left menu to see which configuration CLI commands have been run on the IBM SAN Volume Controller system.
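The audit log can also be listed from the CLI; for example, the following sketch shows the most recent five entries (output omitted):
IBM_2145:ITSO_DH8:superuser>catauditlog -first 5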
Figure 13-73 shows an example of the audit log after creating a FlashCopy volume, with a
command highlighted. The Running Tasks button is available at the top of the window in the
status pane. If you click that button, the progress of the currently running tasks can be
displayed by clicking the associated View button.
13.9 Collecting support information using the GUI and the CLI
Occasionally, if you have a problem and call the IBM Support Center, they will most likely ask you to provide a support package. You can collect and upload this package from the Settings → Support menu.
13.9.1 Collecting information using the GUI
To collect information using the GUI, complete the following steps:
1. Click Settings → Support and the Support Package tab (Figure 13-75).
2. Click the Upload Support Package button.
Assuming that the problem encountered was an unexpected node restart that has logged
a 2030 error, collect the default logs plus the most recent statesave from each node to
capture the most relevant data for support.
Note: When a node unexpectedly reboots, it first dumps its current statesave
information before it restarts to recover from an error condition. This statesave is critical
for support to analyze what happened. Collecting a snap type 4 creates new statesaves
at the time of the collection, which is not useful for understanding the restart event.
4. The procedure to create the snap on an IBM SAN Volume Controller system, including the
latest statesave from each node, begins. This process might take a few minutes
(Figure 13-77).
13.9.2 Collecting logs using the CLI
The CLI can be used to collect and upload a support package as requested by IBM Support
by performing the following steps:
1. Log in to the CLI and issue the svc_snap command that matches the type of snap
requested by IBM Support:
– Standard logs (type 1):
svc_snap upload pmr=ppppp,bbb,ccc gui1
– Standard logs plus one existing statesave (type 2):
svc_snap upload pmr=ppppp,bbb,ccc gui2
– Standard logs plus most recent statesave from each node (type 3):
svc_snap upload pmr=ppppp,bbb,ccc gui3
– Standard logs plus new statesaves:
svc_livedump -nodes all -yes
svc_snap upload pmr=ppppp,bbb,ccc gui3
2. We collect the type 3 (option 3) and have it automatically upload to the PMR number
provided by IBM Support, as shown in Example 13-6.
3. If you do not want to automatically upload the snap to IBM, do not specify the upload
pmr=ppppp,bbb,ccc part of the commands. When the snap creation completes, it creates a
file named using this format:
/dumps/snap.<panel_id>.YYMMDD.hhmmss.tgz
It takes a few minutes for the snap file to complete, and longer if including statesaves.
4. The generated file can then be retrieved from the GUI under Settings → Support →
Manual Upload Instructions twisty → Download Support Package and then click
Download Existing Package, as shown in Figure 13-78.
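Alternatively, the snap file can be copied off the configuration node with scp or pscp. A sketch with placeholder values that mirrors the file name format shown above:
C:\>pscp superuser@<cluster_ip>:/dumps/snap.<panel_id>.YYMMDD.hhmmss.tgz C:\SVCbackups\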
To upload information, use the following procedure:
1. Using a web browser, navigate to ECuRep:
https://www.secure.ecurep.ibm.com/app/upload
This link takes you to the Secure Upload page (Figure 13-80).
4. Select one or more files, click Upload to continue, and follow the directions.
Typically, an IBM Spectrum Virtualize cluster is initially configured with the following IP
addresses:
One service IP address for each IBM SAN Volume Controller node.
One cluster management IP address, which is set when the cluster is created.
The Service Assistant Tool (SAT) is available even when the management GUI is not accessible. The following information and tasks can be accomplished with the SAT:
Status information about the connections and the IBM SAN Volume Controller nodes
Basic configuration information, such as configuring IP addresses
Service tasks, such as restarting the Common Information Model (CIM) object manager
(CIMOM) and updating the WWNN
Details about node error codes
Details about the hardware such as IP address and Media Access Control (MAC)
addresses
The SAT GUI is available by using a service assistant IP address that is configured on each
SAN Volume Controller node. It can also be accessed through the cluster IP addresses by
appending /service to the cluster management IP.
If the clustered system is down, the only method of communicating with the nodes is through
the SAT IP address directly. Each node can have a single service IP address on Ethernet
port 1 and should be configured on all nodes of the cluster, including any Hot Spare Nodes.
To open the SAT GUI, enter one of the following URLs into any web browser:
http(s)://<cluster IP address of your cluster>/service
http(s)://<service IP address of a node>/service
3. The current selected SAN Volume Controller node is displayed in the upper left corner of
the GUI. In Figure 13-83, this is node ID 1. Select the node that you want in the Change
Node section of the window. You see the details in the upper left change to reflect the
selected node.
Note: The SAT GUI provides access to service procedures and shows the status of the
nodes. It is advised that these procedures should only be carried out if directed to do so by
IBM Support.
For more information about how to use the SAT, see the following website:
https://ibm.biz/BdjKXu
Appendix A. Performance data and statistics gathering
To ensure that the performance levels of your system are maintained, monitor performance
periodically to provide visibility to potential problems that exist or are developing so that they
can be addressed in a timely manner.
Performance considerations
When you are designing the IBM Spectrum Virtualize infrastructure or maintaining an existing infrastructure, you must consider many factors in terms of their potential effect on performance. These factors include, but are not limited to, dissimilar workloads competing for the same resources, overloaded resources, insufficient available resources, poorly performing resources, and similar performance constraints.
Remember the following high-level rules when you are designing your storage area network
(SAN) and IBM Spectrum Virtualize layout:
Host-to-SVC inter-switch link (ISL) oversubscription
This area is the most significant input/output (I/O) load across ISLs. The recommendation
is to maintain a maximum of 7-to-1 oversubscription. A higher ratio is possible, but it tends
to lead to I/O bottlenecks. This suggestion also assumes a core-edge design, where the
hosts are on the edges and the SVC is the core.
Storage-to-SVC ISL oversubscription
This area is the second most significant I/O load across ISLs. The maximum
oversubscription is 7-to-1. A higher ratio is not supported. Again, this suggestion assumes
a multiple-switch SAN fabric design.
Node-to-node ISL oversubscription
This area is the least significant load of the three possible oversubscription bottlenecks. In
standard setups, this load can be ignored. Although this area is not entirely negligible, it
does not contribute significantly to the ISL load. However, node-to-node ISL
oversubscription is mentioned here in relation to the split-cluster capability that was made
available since V6.3 (Stretched Cluster and HyperSwap).
When the system is running in this manner, the number of ISL links becomes more
important. As with the storage-to-SVC ISL oversubscription, this load also requires a
maximum of 7-to-1 oversubscription. Exercise caution and careful planning when you
determine the number of ISLs to implement. If you need assistance, contact your IBM
representative and request technical assistance.
ISL trunking/port channeling
For the best performance and availability, use ISL trunking or port channeling.
Independent ISL links can easily become overloaded and turn into performance
bottlenecks. Bonded or trunked ISLs automatically share load and provide better
redundancy in a failure.
Number of paths per host multipath device
The maximum supported number of paths per multipath device that is visible on the host is eight. Although the IBM Subsystem Device Driver Path Control Module (SDDPCM), related products, and most vendor multipathing software can support more paths, the SVC expects a maximum of eight paths. In general, using more than eight paths only affects performance negatively. Although IBM Spectrum Virtualize can work with more than eight paths, such a design is technically unsupported.
Do not intermix dissimilar array types or sizes
Although IBM Spectrum Virtualize supports an intermix of differing storage within storage pools, it is best to always use the same array model, Redundant Array of Independent Disks (RAID) mode, RAID size (RAID 5 6+P+S does not mix well with RAID 6 14+2), and drive speeds.
Rules and guidelines are no substitution for monitoring performance. Monitoring performance
can provide a validation that design expectations are met, and identify opportunities for
improvement.
This capability provides great investment value because the nodes are relatively inexpensive and a node swap can be done online. This capability provides an instant performance boost with no license changes. Newer nodes, such as the 2145-SV1 models, with a dramatically increased cache of 32 - 64 gigabytes (GB) per node, provide an extra benefit on top of the typical refresh cycle.
To set the Fibre Channel port mapping for the 2145-SV1, you can use following application
link. This link only supports upgrades from 2145-CF8, 2145-CG8, and 2145-DH8:
https://ports.eu-gb.mybluemix.net
The performance is near linear when nodes are added into the cluster until performance
eventually becomes limited by the attached components. Although virtualization provides
significant flexibility in terms of the components that are used, it does not diminish the
necessity of designing the system around the components so that it can deliver the level of
performance that you want.
The key item for planning is your SAN layout. Switch vendors have slightly different planning
requirements, but the end goal is that you always want to maximize the bandwidth that is
available to the SVC ports. The SVC is one of the few devices that can drive ports to their
limits on average, so it is imperative that you put significant thought into planning the SAN
layout.
The statistics files (Volume, managed disk (MDisk), and Node) are saved at the end of the
sampling interval. A maximum of 16 files (each) are stored before they are overlaid in a
rotating log fashion. This design then provides statistics for the most recent 80-minute period
if the default 5-minute sampling interval is used. IBM Spectrum Virtualize supports
user-defined sampling intervals of 1 - 60 minutes.
The maximum space that is required for a performance statistics file is around 1 MB
(1,153,482 bytes). Up to 128 (16 per each of the three types across eight nodes) different files
can exist across eight SVC nodes. This design makes the total space requirement a
maximum of a bit more than 147 MB (147,645,694 bytes) for all performance statistics from
all nodes in an 8-node SVC cluster.
Note: Remember this maximum of 147,645,694 bytes for all performance statistics from all
nodes in an SVC cluster when you are in time-critical situations. The required size is not
otherwise important because SVC node hardware can map the space.
You can define the sampling interval by using the startstats -interval 2 command to
collect statistics at, in this example, 2-minute intervals.
Statistics are collected at the end of each sampling period (as specified by the -interval
parameter). These statistics are written to a file. A file is created at the end of each sampling
period. Separate files are created for MDisks, volumes, and node statistics.
Use the startstats command to start the collection of statistics, as shown in Example A-1.
This command starts statistics collection and gathers data at 4-minute intervals.
To verify the statistics collection interval, display the system properties again, as shown in Example A-2.
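A minimal sketch of these two steps on the CLI (the output is trimmed to the relevant fields and the values are illustrative):
IBM_2145:ITSO_DH8:superuser>startstats -interval 4
IBM_2145:ITSO_DH8:superuser>lssystem
...
statistics_status on
statistics_frequency 4
...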
Starting with V8.1, it is no longer possible to stop statistics collection with the stopstats command.
Collection intervals: Although more frequent collection intervals provide a more detailed
view of what happens within IBM Spectrum Virtualize and SVC, they shorten the amount of
time that the historical data is available on the IBM Spectrum Virtualize. For example,
rather than an 80-minute period of data with the default five-minute interval, if you adjust to
2-minute intervals, you have a 32-minute period instead.
Statistics are collected per node. The sampling of the internal performance counters is
coordinated across the cluster so that when a sample is taken, all nodes sample their internal
counters at the same time. It is important to collect all files from all nodes for a complete
analysis. Tools, such as IBM Spectrum Control, perform this intensive data collection for you.
The node_frontpanel_id is that of the node on which the statistics were collected. The date is in the form <yymmdd> and the time is in the form <hhmmss>. The following example shows an MDisk statistics file name:
Nm_stats_113986_161024_151832
Example A-3 shows typical MDisk, volume, node, and disk drive statistics file names.
Use the -load parameter to specify the session that is defined in PuTTY.
With system-level statistics, you can quickly view the CPU usage and the bandwidth of
volumes, interfaces, and MDisks. Each graph displays the current bandwidth in megabytes
per second (MBps) or I/O operations per second (IOPS), and a view of bandwidth over time.
Each node collects various performance statistics, mostly at 5-second intervals, and the statistics are available from the config node in a clustered environment. This information
can help you determine the performance effect of a specific node. As with system statistics,
node statistics help you to evaluate whether the node is operating within normal performance
metrics.
The lsnodestats command provides performance statistics for the nodes that are part of a
clustered system, as shown in Example A-4. This output is truncated and shows only part of
the available statistics. You can also specify a node name in the command to limit the output
for a specific node.
1 node_75ACXP0 fc_io 219 315 171004174202
1 node_75ACXP0 sas_mb 0 0 171004174607
1 node_75ACXP0 sas_io 0 0 171004174607
1 node_75ACXP0 iscsi_mb 0 0 171004174607
1 node_75ACXP0 iscsi_io 0 0 171004174607
1 node_75ACXP0 write_cache_pc 0 0 171004174607
1 node_75ACXP0 total_cache_pc 0 0 171004174607
1 node_75ACXP0 vdisk_mb 0 0 171004174607
1 node_75ACXP0 vdisk_io 0 0 171004174607
1 node_75ACXP0 vdisk_ms 0 0 171004174607
1 node_75ACXP0 mdisk_mb 0 13 171004174202
1 node_75ACXP0 mdisk_io 5 96 171004174202
1 node_75ACXP0 mdisk_ms 0 12 171004174202
1 node_75ACXP0 drive_mb 0 0 171004174607
1 node_75ACXP0 drive_io 0 0 171004174607
1 node_75ACXP0 drive_ms 0 0 171004174607
1 node_75ACXP0 vdisk_r_mb 0 0 171004174607
1 node_75ACXP0 vdisk_r_io 0 0 171004174607
1 node_75ACXP0 vdisk_r_ms 0 0 171004174607
1 node_75ACXP0 vdisk_w_mb 0 0 171004174607
...
2 node_75ACXF0 mdisk_w_ms 0 0 171004174607
2 node_75ACXF0 drive_r_mb 0 0 171004174607
2 node_75ACXF0 drive_r_io 0 0 171004174607
2 node_75ACXF0 drive_r_ms 0 0 171004174607
2 node_75ACXF0 drive_w_mb 0 0 171004174607
2 node_75ACXF0 drive_w_io 0 0 171004174607
2 node_75ACXF0 drive_w_ms 0 0 171004174607
2 node_75ACXF0 iplink_mb 0 0 171004174607
2 node_75ACXF0 iplink_io 0 0 171004174607
2 node_75ACXF0 iplink_comp_mb 0 0 171004174607
2 node_75ACXF0 cloud_up_mb 0 0 171004174607
2 node_75ACXF0 cloud_up_ms 0 0 171004174607
2 node_75ACXF0 cloud_down_mb 0 0 171004174607
2 node_75ACXF0 cloud_down_ms 0 0 171004174607
Example A-4 on page 758 shows statistics for the two nodes that are members of cluster ITSO_DH8.
For each node, the following columns are displayed:
stat_name: The name of the statistic field
stat_current: The current value of the statistic field
stat_peak: The peak value of the statistic field in the last 5 minutes
stat_peak_time: The time that the peak occurred
The lsnodestats command can also be used with a node name or ID as an argument. For
example, you can enter the following command to display the statistics of node with ID 1 only:
lsnodestats 1
The lssystemstats command lists the same set of statistics that is listed with the
lsnodestats command, but representing all nodes in the cluster. The values for these
statistics are calculated from the node statistics values in the following way:
Bandwidth: Sum of bandwidth of all nodes
Latency: Average latency for the cluster, which is calculated by using data from the whole
cluster, not an average of the single node values
Table A-1 gives the description of the different counters that are presented by the
lssystemstats and lsnodestats commands.
compression_cpu_pc Displays the percentage of allocated CPU capacity that is used for
compression.
cpu_pc Displays the percentage of allocated CPU capacity that is used for the
system.
fc_mb Displays the total number of megabytes transferred per second for Fibre
Channel traffic on the system. This value includes host I/O and any
bandwidth that is used for communication within the system.
fc_io Displays the total I/O operations that are transferred per second for Fibre
Channel traffic on the system. This value includes host I/O and any
bandwidth that is used for communication within the system.
sas_mb Displays the total number of megabytes transferred per second for
serial-attached SCSI (SAS) traffic on the system. This value includes
host I/O and bandwidth that is used for background RAID activity.
sas_io Displays the total I/O operations that are transferred per second for SAS
traffic on the system. This value includes host I/O and bandwidth that is
used for background RAID activity.
iscsi_mb Displays the total number of megabytes transferred per second for iSCSI
traffic on the system.
iscsi_io Displays the total I/O operations that are transferred per second for
iSCSI traffic on the system.
write_cache_pc Displays the percentage of the write cache usage for the node.
total_cache_pc Displays the total percentage for both the write and read cache usage for
the node.
vdisk_mb Displays the average number of megabytes transferred per second for
read and write operations to volumes during the sample period.
vdisk_io Displays the average number of I/O operations that are transferred per
second for read and write operations to volumes during the sample
period.
vdisk_ms Displays the average amount of time in milliseconds that the system
takes to respond to read and write requests to volumes over the sample
period.
mdisk_mb Displays the average number of megabytes transferred per second for
read and write operations to MDisks during the sample period.
mdisk_io Displays the average number of I/O operations that are transferred per
second for read and write operations to MDisks during the sample
period.
mdisk_ms Displays the average amount of time in milliseconds that the system
takes to respond to read and write requests to MDisks over the sample
period.
drive_mb Displays the average number of megabytes transferred per second for
read and write operations to drives during the sample period
drive_io Displays the average number of I/O operations that are transferred per
second for read and write operations to drives during the sample period.
drive_ms Displays the average amount of time in milliseconds that the system
takes to respond to read and write requests to drives over the sample
period.
vdisk_w_mb Displays the average number of megabytes transferred per second for
write operations to volumes during the sample period.
vdisk_w_io Displays the average number of I/O operations that are transferred per
second for write operations to volumes during the sample period.
vdisk_w_ms Displays the average amount of time in milliseconds that the system
takes to respond to write requests to volumes over the sample period.
mdisk_w_mb Displays the average number of megabytes transferred per second for
write operations to MDisks during the sample period.
mdisk_w_io Displays the average number of I/O operations that are transferred per
second for write operations to MDisks during the sample period.
mdisk_w_ms Displays the average amount of time in milliseconds that the system
takes to respond to write requests to MDisks over the sample period.
drive_w_mb Displays the average number of megabytes transferred per second for
write operations to drives during the sample period.
drive_w_io Displays the average number of I/O operations that are transferred per
second for write operations to drives during the sample period.
drive_w_ms Displays the average amount of time in milliseconds that the system
takes to respond to write requests to drives over the sample period.
vdisk_r_mb Displays the average number of megabytes transferred per second for
read operations to volumes during the sample period.
vdisk_r_io Displays the average number of I/O operations that are transferred per
second for read operations to volumes during the sample period.
vdisk_r_ms Displays the average amount of time in milliseconds that the system
takes to respond to read requests to volumes over the sample period.
mdisk_r_mb Displays the average number of megabytes transferred per second for
read operations to MDisks during the sample period.
mdisk_r_io Displays the average number of I/O operations that are transferred per
second for read operations to MDisks during the sample period.
mdisk_r_ms Displays the average amount of time in milliseconds that the system
takes to respond to read requests to MDisks over the sample period.
drive_r_mb Displays the average number of megabytes transferred per second for
read operations to drives during the sample period
drive_r_io Displays the average number of I/O operations that are transferred per
second for read operations to drives during the sample period.
drive_r_ms Displays the average amount of time in milliseconds that the system
takes to respond to read requests to drives over the sample period.
iplink_mb     The total number of megabytes transferred per second for Internet Protocol (IP) replication traffic on the system. This value does not include iSCSI host I/O operations.
iplink_io     The total I/O operations that are transferred per second for IP partnership traffic on the system. This value does not include Internet Small Computer System Interface (iSCSI) host I/O operations.
cloud_up_mb   Displays the average number of Mbps for upload operations to a cloud account during the sample period.
cloud_up_ms   Displays the average amount of time (in milliseconds) it takes for the system to respond to upload requests to a cloud account during the sample period.
cloud_down_mb Displays the average number of Mbps for download operations to a cloud account during the sample period.
cloud_down_ms Displays the average amount of time (in milliseconds) that it takes for the system to respond to download requests to a cloud account during the sample period.
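The same values can also be queried from the CLI. The following commands are a minimal sketch that assumes an established CLI session: lssystemstats reports the most recent samples for the system as a whole, lsnodestats reports per-node values, and the -history parameter (shown here for a single statistic) returns the recent sample history. The node name node1 and the statistic that is chosen are illustrative only:

   lssystemstats
   lsnodestats node1
   lssystemstats -history mdisk_ms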
Figure A-1 IBM Spectrum Virtualize Dashboard displaying System performance overview
Figure A-2 IBM Spectrum Virtualize Dashboard displaying Nodes performance overview
You can also use real-time statistics to monitor CPU utilization, volume, interface, and MDisk
bandwidth of your system and nodes. Each graph represents five minutes of collected
statistics and provides a means of assessing the overall performance of your system.
The real-time statistics are available from the IBM Spectrum Virtualize GUI. Click
Monitoring → Performance (as shown in Figure A-3) to open the Performance Monitoring
window.
As shown in Figure A-4 on page 765, the Performance monitoring pane is divided into sections that provide utilization views for the following resources:
CPU Utilization: The CPU utilization graph shows the current percentage of CPU usage
and peaks in utilization. It can also display compression CPU usage for systems with
compressed volumes.
Volumes: Shows the following four metrics on the overall volume utilization graph:
– Read
– Write
– Read latency
– Write latency
Interfaces: The Interfaces graph displays data points for the following interfaces. You can use this information to help determine connectivity issues that might affect performance.
– Fibre Channel (FC)
– iSCSI
– Serial-attached SCSI (SAS)
– IP Remote Copy
MDisks: Also shows the following four metrics on the overall MDisk utilization graph:
– Read
– Write
– Read latency
– Write latency
You can use these metrics to help determine the overall performance health of the volumes
and MDisks on your system. Consistent unexpected results can indicate errors in
configuration, system faults, or connectivity issues.
The system’s performance is also always visible at the bottom of the IBM Spectrum Virtualize window, as shown in Figure A-4.
Note: The values that are indicated in the graphs are averages that are based on 1-second samples.
Figure A-5 View statistics per node or for the entire system
You can also change the metric between MBps and IOPS, as shown in Figure A-6.
On any of these views, you can select any point with your cursor to see the exact value and when it occurred. When you place your cursor over the timeline, it becomes a dotted line that shows the values that were gathered at that moment, as shown in Figure A-7.
For each of the resources, you can view various metrics by selecting them. For example, as shown in Figure A-8, the MDisks view offers four fields: Read, Write, Read latency, and Write latency. In our example, Read and Write MBps are not selected.
IBM Spectrum Control is installed separately on a dedicated system, and is not part of the
IBM Spectrum Virtualize bundle.
A Software as a Service (SaaS) version of IBM Spectrum Control, called IBM Spectrum
Control Storage Insights, allows you to use the solution as a service (no installation) in
minutes and offers a free trial for 30 days.
For more information about the use of IBM Spectrum Control to monitor your storage
subsystem, see:
https://www.ibm.com/systems/storage/spectrum/control/
Detailed CLI information is available on the IBM SAN Volume Controller web page under the
command-line section, which is at:
https://ibm.biz/Bdjnjs
Note: If a task completes in the GUI, the associated CLI command is always displayed in
the details, as shown throughout this book.
Basic setup
In the IBM Spectrum Virtualize GUI, authentication is performed by supplying a user name and password. The CLI uses SSH to connect from the host to the IBM Spectrum Virtualize system. Either a private and public key pair or a user name and password is necessary. The following steps are required to enable CLI access with SSH keys (a command-line sketch of the same flow follows these steps):
1. A public key and a private key are generated together as a pair.
2. A public key is uploaded to the IBM Spectrum Virtualize system through the GUI.
3. A client SSH tool must be configured to authenticate with the private key.
4. A secure connection can be established between the client and SVC system.
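If your management workstation uses an OpenSSH client (for example, on Linux, AIX, or macOS) instead of PuTTY, the same flow can be performed from a terminal. The following commands are a sketch only; the key file path, the user name ITSO_admin, and the cluster address are illustrative values, and the .pub file that is produced in the first step is the public key that you upload to the system through the GUI:

   # Generate the key pair (private key itso_admin_key, public key itso_admin_key.pub)
   ssh-keygen -t rsa -b 2048 -f ~/.ssh/itso_admin_key

   # After the public key is uploaded to the system, connect and run a command
   ssh -i ~/.ssh/itso_admin_key ITSO_admin@<cluster_ip> lssystem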
SSH is the communication vehicle between the management workstation and the IBM
Spectrum Virtualize system. The SSH client provides a secure environment from which to
connect to a remote machine. It uses the principles of public and private keys for
authentication.
SSH keys are generated by the SSH client software. The SSH keys include a public key,
which is uploaded and maintained by the clustered system, and a private key, which is kept
private on the workstation that is running the SSH client. These keys authorize specific users
to access the administration and service functions on the system.
Each key pair is associated with a user-defined ID string that can consist of up to 40
characters. Up to 100 keys can be stored on the system. New IDs and keys can be added,
and unwanted IDs and keys can be deleted. To use the CLI, an SSH client must be installed on the client system, the SSH key pair must be generated on that system, and the client’s SSH public key must be stored on the IBM Spectrum Virtualize system.
The SSH client that is used in this book is PuTTY. A PuTTY key generator can also be used to
generate the private and public key pair. The PuTTY client can be downloaded from the
following address at no cost:
http://www.putty.org/
Generating a public and private key pair
To generate a public and private key pair, complete the following steps:
1. Start the PuTTY key generator to generate the public and private key pair (Figure B-1).
2. Click Generate, and move the mouse pointer over the blank area to generate the keys.
To generate keys: The blank area that is indicated by the message is the large blank rectangle in the section of the GUI that is labeled Key. Continue to move the mouse pointer over the blank area until the progress bar reaches the far right. This action generates random characters to create a unique key pair.
3. After the keys are generated, save them for later use. Click Save public key (Figure B-3).
4. You are prompted for a name (for example, pubkey) and a location for the public key (for
example, C:\Keys\ITSO_admin.pub). Click Save.
Ensure that you record the name and location of this SSH public key because they must be specified later.
Public key extension: By default, the PuTTY key generator saves the public key with
no extension. Use the string pub for naming the public key, for example, pubkey, to easily
differentiate the SSH public key from the SSH private key.
5. Click Save private key. You are prompted with a warning message (Figure B-4). Click Yes
to save the private key without a passphrase.
Key generator: The PuTTY key generator saves the PuTTY private key (PPK) with the
.ppk extension.
Uploading the SSH public key to the IBM Spectrum Virtualize system
After you create your SSH key pair, upload your SSH public key onto the IBM Spectrum
Virtualize system. Complete the following steps:
1. Open the user section in the GUI (Figure B-5).
2. Right-click the user name for which you want to upload the key and click Properties
(Figure B-6).
3. To upload the public key, click Browse, and select the folder where you stored the public
SSH key (Figure B-7).
2. In the right pane, select SSH as the connection type. In the Close window on exit section,
select Only on clean exit, which ensures that if any connection errors occur, they are
displayed on the user’s window.
3. In the Category pane, on the left side of the PuTTY Configuration window (Figure B-11),
click Connection → SSH to open the PuTTY SSH Configuration window.
7. In the Category pane, click Session to return to the Basic options for your PuTTY session
view.
8. Enter the following information in these fields (Figure B-13) in the right pane:
– Host Name. Specify the host name or system IP address of the IBM Spectrum
Virtualize system.
– Saved Sessions. Enter a session name.
11. PuTTY now connects to the system and prompts you for a user name to log in as. Enter ITSO Admin as the user name (Example B-1) and press Enter.
You have now completed the tasks to configure the CLI for IBM Spectrum Virtualize system
administration.
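After the session is configured and the key is uploaded, you can also run individual commands non-interactively from a Windows command prompt by using plink, the command-line connection tool that is included with the PuTTY package. The following line is a sketch only; the private key path, the user name, and the cluster address must match your environment:

   plink -i C:\Keys\ITSO_admin.ppk -l "ITSO Admin" <cluster_ip> lssystem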
Appendix C. Terminology
This appendix summarizes the IBM Spectrum Virtualize and IBM SAN Volume Controller
(SVC) terms that are commonly used in this book.
To see the complete set of terms that relate to the IBM SAN Volume Controller, see the
Glossary section of IBM Knowledge Center for SVC, which is available at:
http://www.ibm.com/support/knowledgecenter/STPVGU/landing/SVC_welcome.html
Array
An ordered collection, or group, of physical devices (disk drive modules) that are
used to define logical volumes or devices. An array is a group of drives
designated to be managed with a Redundant Array of Independent Disks (RAID).
Asymmetric virtualization
Asymmetric virtualization is a virtualization technique in which the virtualization
engine is outside the data path and performs a metadata-style service. The
metadata server contains all the mapping and locking tables, and the storage
devices contain only data. See also “Symmetric virtualization” on page 797.
Asynchronous replication
Asynchronous replication is a type of replication in which control is given back to
the application as soon as the write operation is made to the source volume.
Later, the write operation is made to the target volume. See also “Synchronous
replication” on page 797.
Back end
See “Front end and back end” on page 788.
Call home
Call home is a communication link that is established between a product and a
service provider. The product can use this link to call IBM or another service
provider when the product requires service. With access to the machine, service
personnel can perform service tasks, such as viewing error and problem logs or
initiating trace and dump retrievals.
Canister
A canister is a single processing unit within a storage system.
Capacity licensing
Capacity licensing is a licensing model that licenses features with a
price-per-terabyte model. Licensed features are FlashCopy, Metro Mirror, Global
Mirror, and virtualization. See also “FlashCopy” on page 787, “Metro Mirror” on
page 791, and “Virtualization” on page 798.
Chain
A set of enclosures that are attached to provide redundant access to the drives
inside the enclosures. Each control enclosure can have one or more chains.
Channel extender
A channel extender is a device that is used for long-distance communication that
connects other storage area network (SAN) fabric components. Generally,
channel extenders can involve protocol conversion to asynchronous transfer
mode (ATM), Internet Protocol (IP), or another long-distance communication
protocol.
Child pool
Administrators can use child pools to control capacity allocation for volumes that
are used for specific purposes. Rather than being created directly from managed
disks (MDisks), child pools are created from existing capacity that is allocated to
a parent pool. As with parent pools, volumes can be created that specifically use
the capacity that is allocated to the child pool. Child pools are similar to parent
pools and have similar properties. Child pools can be used for volume copy operations. See also “Parent pool” on page 792.
Cloud Container
A Cloud Container is a virtual object that includes all of the elements, components, or data that are common to a specific application or data.
Cloud Provider
A cloud provider is the company or organization that provides off-premises and on-premises cloud services, such as storage, server, and network services. IBM Spectrum Virtualize has built-in software capabilities to interact with cloud providers such as IBM Cloud, Amazon S3, and deployments of OpenStack Swift.
Cloud Tenant
A cloud tenant is a group or an instance that provides common access, with specific privileges, to an object, software, or data source.
Cold extent
A cold extent is an extent of a volume that does not get any performance benefit
if it is moved from a hard disk drive (HDD) to a Flash disk. A cold extent also
refers to an extent that needs to be migrated onto an HDD if it is on a Flash disk
drive.
Compression accelerator
A compression accelerator is hardware onto which the work of compression is
offloaded from the microprocessor.
Configuration node
While the cluster is operational, a single node in the cluster is appointed to
provide configuration and service functions over the network interface. This node
is termed the configuration node. This configuration node manages the data that
describes the clustered-system configuration and provides a focal point for
configuration commands. If the configuration node fails, another node in the
cluster transparently assumes that role.
Consistency Group
A Consistency Group is a group of copy relationships between virtual volumes or
data sets that are maintained with the same time reference so that all copies are
consistent in time. A Consistency Group can be managed as a single entity.
Container
A container is a software object that holds or organizes other software objects or
entities.
Contingency capacity
For thin-provisioned volumes that are configured to automatically expand, the
unused real capacity that is maintained. For thin-provisioned volumes that are not
configured to automatically expand, the difference between the used capacity
and the new real capacity.
Copied state
Copied is a FlashCopy state that indicates that a copy was triggered after the
copy relationship was created. The Copied state indicates that the copy process
is complete and the target disk has no further dependency on the source disk.
The time of the last trigger event is normally displayed with this status.
Counterpart SAN
A counterpart SAN is a non-redundant portion of a redundant SAN. A
counterpart SAN provides all of the connectivity of the redundant SAN, but
without the 100% redundancy. SVC nodes are typically connected to a
“redundant SAN” that is made up of two counterpart SANs. A counterpart SAN is
often called a SAN fabric.
Cross-volume consistency
A consistency group property that ensures consistency between volumes when
an application issues dependent write operations that span multiple volumes.
Data consistency
Data consistency is a characteristic of the data at the target site where the
dependent write order is maintained to ensure the recoverability of applications.
Data migration
Data migration is the movement of data from one physical location to another
physical location without the disruption of application I/O operations.
Discovery
The automatic detection of a network topology change, for example, new and
deleted nodes or links.
Disk tier
MDisks (logical unit numbers (LUNs)) that are presented to the SVC cluster likely
have different performance attributes because of the type of disk or RAID array
on which they are installed. The MDisks can be on 15,000 revolutions per minute
(RPM) Fibre Channel (FC) or serial-attached SCSI (SAS) disk, Nearline SAS, or
Serial Advanced Technology Attachment (SATA), or even Flash Disks. Therefore,
a storage tier attribute is assigned to each MDisk and the default is generic_hdd.
SVC 6.1 introduced a new disk tier attribute for Flash Disk, which is known as
generic_ssd.
Easy Tier
Easy Tier is a volume performance function within the SVC that provides
automatic data placement of a volume’s extents in a multitiered storage pool. The
pool normally contains a mix of Flash Disks and HDDs. Easy Tier measures host
I/O activity on the volume’s extents and migrates hot extents onto the Flash Disks
to ensure the maximum performance.
Enhanced Stretched System
Enhanced Stretched Systems intelligently route I/O traffic between nodes and
controllers to reduce the amount of I/O traffic between sites, and to minimize the
effect on host application I/O latency. Enhanced Stretched Systems include an
implementation of additional policing rules to ensure that the correct
configuration of a standard stretched system is used.
Evaluation mode
The evaluation mode is an Easy Tier operating mode in which the host activity on
all the volume extents in a pool are “measured” only. No automatic extent
migration is performed.
Event (error)
An event is an occurrence of significance to a task or system. Events can include
the completion or failure of an operation, user action, or a change in the state of a
process. Before SVC V6.1, this situation was known as an error.
Event code
An event code is a value that is used to identify an event condition to a user. This
value might map to one or more event IDs or to values that are presented on the
service window. This value is used to report error conditions to IBM and to
provide an entry point into the service guide.
Event ID
An event ID is a value that is used to identify a unique error condition that was
detected by the SVC. An event ID is used internally in the cluster to identify the
error.
Excluded condition
The excluded condition is a status condition. It describes an MDisk that the SVC
decided is no longer sufficiently reliable to be managed by the cluster. The user
must issue a command to include the MDisk in the cluster-managed storage.
Extent
An extent is a fixed-size unit of data that is used to manage the mapping of data
between MDisks and volumes. The size of the extent can range from 16 MB to 8 GB.
External storage
External storage refers to MDisks that are SCSI logical units that are presented
by storage systems that are attached to and managed by the clustered system.
Failback
Failback is the restoration of an appliance to its initial configuration after the
detection and repair of a failed network or component.
Failover
Failover is an automatic operation that switches to a redundant or standby
system or node in the event of a software, hardware, or network interruption. See also Failback.
Field-replaceable unit
Field-replaceable units (FRUs) are individual parts that are replaced entirely
when any one of the unit’s components fails. They are held as spares by the IBM
service organization.
FlashCopy
FlashCopy refers to a point-in-time copy where a virtual copy of a volume is
created. The target volume maintains the contents of the volume at the point in
time when the copy was established. Any subsequent write operations to the
source volume are not reflected on the target volume.
FlashCopy mapping
A FlashCopy mapping is the relationship that is defined between a source volume and a target volume for a point-in-time copy.
FlashCopy relationship
See FlashCopy mapping.
Flash drive
A data storage device that uses solid-state memory to store persistent data.
Flash module
A modular hardware unit that contains flash memory, one or more flash
controllers, and associated electronics.
Global Mirror
Global Mirror (GM) is a method of asynchronous replication that maintains data
consistency across multiple volumes within or across multiple systems. Global
Mirror is generally used where distances between the source site and target site
cause increased latency beyond what the application can accept.
Grain
A grain is the unit of data that is represented by a single bit in a FlashCopy
bitmap (64 KiB or 256 KiB) in the SVC. A grain is also the unit to extend the real
size of a thin-provisioned volume (32 KiB, 64 KiB, 128 KiB, or 256 KiB).
Hop
One segment of a transmission path between adjacent nodes in a routed
network.
Host ID
A host ID is a numeric identifier that is assigned to a group of host FC ports or
Internet Small Computer System Interface (iSCSI) host names for LUN mapping.
For each host ID, SCSI IDs are mapped to volumes separately. The intent is to
have a one-to-one relationship between hosts and host IDs, although this
relationship cannot be policed.
Host mapping
Host mapping refers to the process of controlling which hosts have access to
specific volumes within a cluster. Host mapping is equivalent to LUN masking.
Before SVC V6.1, this process was known as VDisk-to-host mapping.
Hot extent
A hot extent is a frequently accessed volume extent that gets a performance
benefit if it is moved from an HDD onto a Flash Disk.
HyperSwap
Pertaining to a function that provides continuous, transparent availability against
storage errors and site failures, and is based on synchronous replication.
Image mode
Image mode is an access mode that establishes a one-to-one mapping of extents
in the storage pool (existing LUN or (image mode) MDisk) with the extents in the
volume.
Image volume
An image volume is a volume in which a direct block-for-block translation exists
from the MDisk to the volume.
I/O Group
Each pair of SVC cluster nodes is known as an input/output (I/O) Group. An I/O
Group has a set of volumes that are associated with it that are presented to host
systems. Each SVC node is associated with exactly one I/O Group. The nodes in
an I/O Group provide a failover and failback function for each other.
Internal storage
Internal storage refers to an array of MDisks and drives that are held in
enclosures and in nodes that are part of the SVC cluster.
Input/output group
A collection of volumes and node relationships that present a common interface
to host systems. Each pair of nodes is known as an I/O group.
iSCSI initiator
An initiator functions as an iSCSI client. An initiator typically serves the same
purpose to a computer as a SCSI bus adapter would, except that, instead of
physically cabling SCSI devices (like hard drives and tape changers), an iSCSI
initiator sends SCSI commands over an IP network.
iSCSI session
The interaction (conversation) between an iSCSI Initiator and an iSCSI Target.
iSCSI target
An iSCSI target is a storage resource located on an iSCSI server.
Latency
The time interval between the initiation of a send operation by a source task and
the completion of the matching receive operation by the target task. More
generally, latency is the time between a task initiating data transfer and the time
that transfer is recognized as complete at the data destination.
Licensed capacity
The amount of capacity on a storage system that a user is entitled to configure.
License key
An alphanumeric code that activates a licensed function on a product.
Local and remote fabric interconnect
The local fabric interconnect and the remote fabric interconnect are the SAN
components that are used to connect the local and remote fabrics. Depending on
the distance between the two fabrics, they can be single-mode optical fibers that
are driven by long wave (LW) gigabit interface converters (GBICs) or Small
Form-factor Pluggables (SFPs), or more sophisticated components, such as
channel extenders or special SFP modules that are used to extend the distance
between SAN components.
Local fabric
The local fabric is composed of SAN components (switches, cables, and so on)
that connect the components (nodes, hosts, and switches) of the local cluster
together.
Machine signature
A string of characters that identifies a system. A machine signature might be
required to obtain a license key.
Managed disk
An MDisk is a SCSI disk that is presented by a RAID controller and managed by
the SVC. The MDisk is not visible to host systems on the SAN.
Metro Mirror
Metro Mirror (MM) is a method of synchronous replication that maintains data
consistency across multiple volumes within the system. Metro Mirror is generally
used when the write latency that is caused by the distance between the source
site and target site is acceptable to application performance.
Mirrored volume
A mirrored volume is a single virtual volume that has two physical volume copies.
The primary physical copy is known within the SVC as copy 0 and the secondary
copy is known within the SVC as copy 1.
Node canister
A node canister is a hardware unit that includes the node hardware, fabric and
service interfaces, and serial-attached SCSI (SAS) expansion ports. Node
canisters are specifically recognized on IBM Storwize products. In SVC, all of these components are contained within the system chassis, so the term node canister is not typically used for SVC; the unit is referred to simply as a node.
Node rescue
The process by which a node that has no valid software installed on its hard disk
drive can copy software from another node connected to the same Fibre Channel
fabric.
NPIV
N_Port ID Virtualization (NPIV) is a Fibre Channel feature whereby multiple Fibre
Channel node port (N_Port) IDs can share a single physical N_Port.
Object Storage
Object storage is a general term that refers to the way in which Cloud Object Storage organizes, manages, and stores data as discrete units of storage, which are called objects.
Oversubscription
Oversubscription refers to the ratio of the sum of the traffic on the initiator N-port
connections to the traffic on the most heavily loaded ISLs, where more than one
connection is used between these switches. Oversubscription assumes a
symmetrical network, and a specific workload that is applied equally from all
initiators and sent equally to all targets. A symmetrical network means that all the
initiators are connected at the same level, and all the controllers are connected at
the same level.
Parent pool
Parent pools receive their capacity from MDisks. All MDisks in a pool are split into
extents of the same size. Volumes are created from the extents that are available
in the pool. You can add MDisks to a pool at any time either to increase the
number of extents that are available for new volume copies or to expand existing
volume copies. The system automatically balances volume extents between the
MDisks to provide the best performance to the volumes.
Partnership
In Metro Mirror or Global Mirror operations, the relationship between two
clustered systems. In a clustered-system partnership, one system is defined as
the local system and the other system as the remote system.
Point-in-time copy
A point-in-time copy is an instantaneous copy that the FlashCopy service makes
of the source volume. See also “FlashCopy service” on page 788.
Preparing phase
Before you start the FlashCopy process, you must prepare a FlashCopy
mapping. The preparing phase flushes a volume’s data from cache in preparation
for the FlashCopy operation.
Primary volume
In a stand-alone Metro Mirror or Global Mirror relationship, the target of write
operations that are issued by the host application.
Private fabric
Configure one SAN per fabric so that it is dedicated for node-to-node
communication. This SAN is referred to as a private SAN.
Public fabric
Configure one SAN per fabric so that it is dedicated for host attachment, storage
system attachment, and remote copy operations. This SAN is referred to as a
public SAN. You can configure the public SAN to allow SVC node-to-node
communication also. You can optionally use the -localfcportmask parameter of the chsystem command to constrain the node-to-node communication to use only
the private SAN.
Quorum disk
A disk that contains a reserved area that is used exclusively for system
management. The quorum disk is accessed when it is necessary to determine
which half of the clustered system continues to read and write data. Quorum
disks can either be MDisks or drives.
Quorum index
The quorum index is the pointer that indicates the order that is used to resolve a
tie. Nodes attempt to lock the first quorum disk (index 0), followed by the next disk
(index 1), and finally the last disk (index 2). The tie is broken by the node that
locks them first.
RACE engine
The RACE engine compresses data on volumes in real time with minimal effect
on performance. See “Compression” on page 784 or “Real-time Compression”
on page 793.
Real capacity
Real capacity is the amount of storage that is allocated to a volume copy from a
storage pool.
Real-time Compression
Real-time Compression is an IBM integrated software function for storage space
efficiency. The RACE engine compresses data on volumes in real time with
minimal effect on performance.
RAID 0
RAID 0 is a data striping technique that is used across an array that provides no
data protection.
RAID 1
RAID 1 is a mirroring technique that is used on a storage array in which two or
more identical copies of data are maintained on separate mirrored disks.
RAID 10
RAID 10 is a combination of a RAID 0 stripe that is mirrored (RAID 1). Therefore,
two identical copies of striped data exist, with no parity.
RAID 5
RAID 5 is an array that has a data stripe, which includes a single logical parity
drive. The parity check data is distributed across all the disks of the array.
RAID 6
RAID 6 is a RAID level that has two logical parity drives per stripe, which are
calculated with different algorithms. Therefore, this level can continue to process
read and write requests to all of the array’s virtual disks in the presence of two
concurrent disk failures.
Rebuild area
Reserved capacity that is distributed across all drives in a redundant array of
drives. If a drive in the array fails, the lost array data is systematically restored
into the reserved capacity, returning redundancy to the array. The duration of the
restoration process is minimized because all drive members simultaneously
participate in restoring the data. See also “Distributed RAID or DRAID” on
page 785.
Relationship
In Metro Mirror or Global Mirror, a relationship is the association between a
master volume and an auxiliary volume. These volumes also have the attributes
of a primary or secondary volume.
Reliability, availability, and serviceability
Reliability is the degree to which the hardware remains free of faults. Availability
is the ability of the system to continue operating despite predicted or experienced
faults. Serviceability is how efficiently and nondisruptively broken hardware can
be fixed.
Remote fabric
The remote fabric is composed of SAN components (switches, cables, and so on)
that connect the components (nodes, hosts, and switches) of the remote cluster
together. Significant distances can exist between the components in the local
cluster and those components in the remote cluster.
Secondary volume
Pertinent to remote copy, the volume in a relationship that contains a copy of data
written by the host application to the primary volume. See also “Relationship” on
page 795.
Snapshot
A snapshot is an image backup type that consists of a point-in-time view of a
volume.
Solid-state disk
A solid-state disk (SSD) or Flash Disk is a disk that is made from solid-state
memory and therefore has no moving parts. Most SSDs use NAND-based flash
memory technology. It is defined to the SVC as a disk tier generic_ssd.
Space efficient
See “Thin provisioning” on page 798.
Spare
An extra storage component, such as a drive or tape, that is predesignated for
use as a replacement for a failed component.
Spare goal
The optimal number of spares that are needed to protect the drives in the array
from failures. The system logs a warning event when the number of spares that
protect the array drops below this number.
Stand-alone relationship
In FlashCopy, Metro Mirror, and Global Mirror, relationships that do not belong to
a consistency group and that have a null consistency-group attribute.
Statesave
Binary data collection that is used for problem determination by service support.
Storage area network
A SAN is a dedicated storage network that is tailored to a specific environment,
which combines servers, systems, storage products, networking products,
software, and services.
Stretched system
A stretched system is an extended high availability (HA) method that is supported
by SVC to enable I/O operations to continue after the loss of half of the system. A
stretched system is also sometimes referred to as a split system. One half of the
system and I/O Group is usually in a geographically distant location from the
other, often 10 kilometers (6.2 miles) or more. A third site is required to host a
storage system that provides a quorum disk.
Striped
Pertaining to a volume that is created from multiple MDisks that are in the storage
pool. Extents are allocated on the MDisks in the order specified.
Support Assistance
A function that is used to provide support personnel remote access to the system
to perform troubleshooting and maintenance tasks.
Symmetric virtualization
Symmetric virtualization is a virtualization technique in which the physical
storage, in the form of a RAID, is split into smaller chunks of storage known as
extents. These extents are then concatenated, by using various policies, to make
volumes. See also “Asymmetric virtualization” on page 782.
Synchronous replication
Synchronous replication is a type of replication in which the application write
operation is made to both the source volume and target volume before control is
given back to the application. See also “Asynchronous replication” on page 782.
Thin-provisioned volume
A thin-provisioned volume is a volume that allocates storage when data is written
to it.
Throttles
Throttling is a mechanism to control the amount of resources that are used when
the system is processing I/Os on supported objects. The system supports
throttles on hosts, host clusters, volumes, copy offload operations, and storage
pools. If a throttle limit is defined, the system either processes the I/O for that
object, or delays the processing of the I/O to free resources for more critical I/O
operations.
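As an illustration of this mechanism only, the following sketch creates a combined bandwidth and IOPS throttle on a single volume by using the throttle commands that are available in recent code levels, and then lists the defined throttles. The parameter names and the volume name VOL_TEST01 are assumptions for this example; verify them against the command-line reference for your code level:

   mkthrottle -type vdisk -bandwidth 200 -iops 10000 -vdisk VOL_TEST01
   lsthrottle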
T10 DIF
T10 DIF is a Data Integrity Field (DIF) extension to SCSI to enable end-to-end
protection of data from host application to physical media.
Unique identifier
A unique identifier (UID) is an identifier that is assigned to storage-system logical
units when they are created. It is used to identify the logical unit regardless of the
LUN, the status of the logical unit, or whether alternate paths exist to the same
device. Typically, a UID is used only once.
Virtualization
In the storage industry, virtualization is a concept in which a pool of storage is
created that contains several storage systems. Storage systems from various
vendors can be used. The pool can be split into volumes that are visible to the
host systems that use them. See also “Capacity licensing” on page 782.
Virtualized storage
Virtualized storage is physical storage that has virtualization techniques applied
to it by a virtualization engine.
Vital product data
Vital product data (VPD) is information that uniquely defines system, hardware,
hardware, software, and microcode elements of a processing system.
Volume
A volume is an SVC logical device that appears to host systems that are attached
to the SAN as a SCSI disk. Each volume is associated with exactly one I/O
Group. A volume has a preferred node within the I/O Group. Before SVC 6.1, this
volume was known as a VDisk.
Volume copy
A volume copy is a physical copy of the data that is stored on a volume. Mirrored
volumes have two copies. Non-mirrored volumes have one copy.
Volume protection
To prevent active volumes or host mappings from inadvertent deletion, the
system supports a global setting that prevents these objects from being deleted if
the system detects that they have recent I/O activity. When you delete a volume,
the system checks to verify whether it is part of a host mapping, FlashCopy
mapping, or remote-copy relationship. In these cases, the system fails to delete
the volume, unless the -force parameter is specified. Using the -force
parameter can lead to unintentional deletions of volumes that are still active.
Active means that the system detected recent I/O activity to the volume from any
host.
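For illustration only (the volume name VOL_TEST01 is hypothetical), an attempt to delete a recently active volume from the CLI is rejected while volume protection is enabled; adding the -force parameter overrides the check and must be used with care:

   rmvdisk VOL_TEST01
   rmvdisk -force VOL_TEST01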
Write-through mode
Write-through mode is a process in which data is written to a storage device at
the same time that the data is cached.
Related publications
The publications that are listed in this section are considered particularly suitable for a more detailed discussion of the topics that are covered in this book.
IBM Redbooks
The following IBM Redbooks publications provide additional information about the topic in
this document (note that some publications referenced in this list might be available in
softcopy only):
IBM b-type Gen 5 16 Gbps Switches and Network Advisor, SG24-8186
IBM System Storage SAN Volume Controller and Storwize V7000 Best Practices and
Performance Guidelines, SG24-7521
Implementing the IBM Storwize V5000 Gen2 (including the Storwize V5010, V5020, and
V5030), SG24-8162
Implementing the IBM Storwize V7000 and IBM Spectrum Virtualize V7.8, SG24-7938
You can search for, view, download, or order these documents and other Redbooks,
Redpapers, Web Docs, draft and additional materials, at the following website:
ibm.com/redbooks
The following Redbooks domains related to this book are also useful resources:
IBM Storage Networking Redbooks
http://www.redbooks.ibm.com/Redbooks.nsf/domains/san
IBM Flash Storage Redbooks
http://www.redbooks.ibm.com/Redbooks.nsf/domains/flash
IBM Software Defined Storage Redbooks
http://www.redbooks.ibm.com/Redbooks.nsf/domains/sds
IBM Disk Storage Redbooks
http://www.redbooks.ibm.com/Redbooks.nsf/domains/disk
IBM Storage Solutions Redbooks
http://www.redbooks.ibm.com/Redbooks.nsf/domains/storagesolutions
IBM Tape Storage Redbooks
http://www.redbooks.ibm.com/Redbooks.nsf/domains/tape
Other resources
These publications are also relevant as further information sources:
IBM System Storage Master Console: Installation and User’s Guide, GC30-4090
IBM System Storage Open Software Family SAN Volume Controller: CIM Agent
Developers Reference, SC26-7545
Referenced websites
These websites are also relevant as further information sources:
IBM Storage home page
http://www.ibm.com/systems/storage
SAN Volume Controller supported platform
http://ibm.co/1FNjddm
SAN Volume Controller at IBM Knowledge Center
http://www.ibm.com/support/knowledgecenter/STPVGU/welcome
Cygwin Linux-like environment for Windows
http://www.cygwin.com
Open source site for SSH for Windows and Mac
http://www.openssh.com/
Windows Sysinternals home page
http://www.sysinternals.com
Download site for Windows PuTTY SSH and Telnet client
http://www.chiark.greenend.org.uk/~sgtatham/putty
Help from IBM
IBM Support and downloads
ibm.com/support