IBM Spectrum Virtualize Implementation Guide
Redpaper
International Technical Support Organization
June 2019
REDP-5466-01
Note: Before using this information and the product it supports, read the information in “Notices” on
page vii.
This edition applies to IBM Spectrum Virtualize for Public Cloud Version 8.3 and IBM Storwize Version 7.8.
© Copyright International Business Machines Corporation 2017, 2019. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule
Contract with IBM Corp.
Contents
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Now you can become a published author, too . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Chapter 1. Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Introduction to IBM Spectrum Virtualize for Public Cloud . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.1 The evolution of IBM SAN Volume Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 IBM Spectrum Virtualize for Public Cloud . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.1 Primers of storage virtualization and software-defined-storage . . . . . . . . . . . . . . . 4
1.2.2 Benefits of IBM Spectrum Virtualize for Public Cloud . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.3 Features of IBM Spectrum Virtualize for Public Cloud . . . . . . . . . . . . . . . . . . . . . . 7
1.2.4 IBM Spectrum Virtualize on IBM Cloud. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.3 Use cases for IBM Spectrum Virtualize for Public Cloud . . . . . . . . . . . . . . . . . . . . . . . 11
1.3.1 Hybrid scenario: on-premises to IBM Cloud . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.3.2 Cloud-native scenario: Cloud to cloud . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.4 Licensing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Chapter 3. Planning and preparation for the IBM Spectrum Virtualize for Public Cloud
deployment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.1 Provisioning cloud resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.1.1 Ordering servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.2 Provisioning IBM Cloud Block Storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.2.1 Cloud Block Storage overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.2.2 Provisioning Block Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.3 IBM Spectrum Virtualize networking considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Chapter 4. Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
4.1 IBM Spectrum Virtualize for Public Cloud installation . . . . . . . . . . . . . . . . . . . . . . . . . . 72
4.1.1 Downloading the One-click installer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
4.1.2 Fully Automated installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
4.1.3 Semi Automated installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
4.2 Configuring Spectrum Virtualize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
4.2.1 Log in to cluster and complete the installation . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
4.2.2 Configure Cloud quorum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
4.2.3 Installing the IP quorum application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
4.2.4 Configure the back-end storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
4.2.5 Configuring Call Home with CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
4.2.6 Upgrading to second I/O group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
4.3 Configuring replication from on-prem IBM Spectrum Virtualize to IBM Spectrum Virtualize
for IBM Cloud . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
4.4 Configuring Remote Support Proxy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Chapter 5. Typical use cases for IBM Spectrum Virtualize for Public Cloud . . . . . . 119
5.1 Whole IT services deployed in the Public Cloud . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
5.1.1 Business justification. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
5.1.2 Highly available deployment models. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
5.2 Disaster recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
5.2.1 Business justification. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
5.2.2 Common DR scenarios with IBM Spectrum Virtualize for Public Cloud . . . . . . . 126
5.3 IBM FlashCopy in the Public Cloud. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
5.3.1 Business justification. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
5.3.2 FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
5.3.3 Consistency groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
5.3.4 Crash consistent copy and hosts considerations . . . . . . . . . . . . . . . . . . . . . . . . 130
5.4 Workload relocation into the Public Cloud . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
5.4.1 Business justification. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
5.4.2 Data migration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
5.4.3 Host provisioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
5.4.4 Implementation considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
Appendix A. Guidelines for disaster recovery solution in the Public Cloud. . . . . . . 157
Plan and design for the worst case scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
Recovery tiers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
Design for production . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
On-premise to DR cloud considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
Cloud to DR cloud considerations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
Cloud to DR on-premises considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
Common pitfalls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
Plan and design for the best scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
Single-points-of-failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
Only have a Plan A . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
Poor DR testing methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
Networking aspects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
Five networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
DR test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
Failover or emergency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
Fallback. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
Full versus partial failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
User-to-DR server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
Server-to-DR servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
Network virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
Network function virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
Bring Your Own IP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
Notices
This information was developed for products and services offered in the US. This material might be available
from IBM in other languages. However, you may be required to own a copy of the product or product version in
that language in order to access it.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user’s responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, MD-NC119, Armonk, NY 10504-1785, US
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you provide in any way it believes appropriate without
incurring any obligation to you.
The performance data and client examples cited are presented for illustrative purposes only. Actual
performance results may vary depending on specific configurations and operating conditions.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
Statements regarding IBM’s future direction or intent are subject to change or withdrawal without notice, and
represent goals and objectives only.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to actual people or business enterprises is entirely
coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are
provided “AS IS”, without warranty of any kind. IBM shall not be liable for any damages arising out of your use
of the sample programs.
The following terms are trademarks or registered trademarks of International Business Machines Corporation,
and might also be trademarks or registered trademarks in other countries.
AIX®, Aspera®, Bluemix®, DB2®, Easy Tier®, FlashCopy®, HyperSwap®, IBM®, IBM Cloud™, IBM FlashSystem®, IBM Resiliency Services®, IBM Spectrum™, IBM Spectrum Control™, IBM Spectrum Protect™, IBM Spectrum Storage™, IBM Spectrum Virtualize™, Passport Advantage®, PowerVM®, Real-time Compression™, Redbooks®, Redpaper™, Redbooks (logo)®, Storwize®, System Storage®
Intel, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel
Corporation or its subsidiaries in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.
Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its
affiliates.
Other company, product, or service names may be trademarks or service marks of others.
Preface
IBM® Spectrum Virtualize is a key member of the IBM Spectrum™ Storage portfolio. It is a
highly flexible storage solution that enables rapid deployment of block storage services for
new and traditional workloads, on-premises, off-premises and in a combination of both.
IBM Spectrum Virtualize™ for Public Cloud provides the IBM Spectrum Virtualize functionality
in IBM Cloud™. This new capability provides a monthly license to deploy and use Spectrum
Virtualize in IBM Cloud to enable hybrid cloud solutions, offering the ability to transfer data
between on-premises private clouds or data centers and the public cloud.
This IBM Redpaper™ publication gives a broad understanding of IBM Spectrum Virtualize for
Public Cloud architecture and provides planning and implementation details of the common
use cases for this product.
This publication helps storage and networking administrators plan, install, tailor, and configure the IBM Spectrum Virtualize for Public Cloud offering. It also provides a detailed description of troubleshooting tips.
IBM Spectrum Virtualize is also available on AWS. For more information, see Implementation
guide for IBM Spectrum Virtualize for Public Cloud on AWS, REDP-5534.
Authors
This paper was produced by a team of specialists from around the world working at the
International Technical Support Organization, Austin Center.
Pierluigi Buratti is an Executive IT Architect at IBM Italy –
Resiliency Services, with more than 20 years spent in
designing and implementing Disaster Recovery and Business
Continuity solutions for Italian and European clients, using the
latest technologies for data mirroring and data center
interconnectivity. Since 2011, he has been working in the Global
Development of IBM Resiliency Services®, leading the design
of DRaaS technologies and solutions.
Robin Findlay
IBM UK
Henry E Butterworth, Long Wen Lan, Yu Yan Chen, Xiaoyu Zhang, Qingyuan Hou
IBM China
The team would like to express thanks to IBM Gold Partner e-TechServices for providing
infrastructure as a service utilizing e-TechServices’ cloud systems as a contribution to the
development and test environment for the use cases covered in this book. Special thanks to
Javier Suarez, Senior Systems Engineer, Marc Spindler, CEO, and Mario Ariet, President.
Your efforts will help to increase product acceptance and customer satisfaction, as you
expand your network of technical contacts and relationships. Residencies run from two to six
weeks in length, and you can participate either in person or as a remote resident working
from your home base.
Find out more about the residency program, browse the residency index, and apply online:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us.
We want our papers to be as helpful as possible. Send us your comments about this paper or
other IBM Redbooks® publications in one of the following ways:
Use the online Contact us review Redbooks form:
ibm.com/redbooks
Send your comments in an email:
[email protected]
Mail your comments:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Chapter 1. Introduction
This chapter describes IBM Spectrum Virtualize implemented in a cloud environment and
referred to as IBM Spectrum Virtualize for Public Cloud. A brief overview of the technology
behind the product introduces the drivers and business values of using IBM Spectrum
Virtualize in the context of public cloud services. It also describes how the solution works from
a high-level perspective.
IBM Spectrum Virtualize Software only is available starting with IBM Spectrum Virtualize
V7.7.1. This publication describes IBM Spectrum Virtualize for Public Cloud V8.1.1.
Nevertheless, one of the challenges for organizations adopting public cloud services is how to integrate those capabilities with the existing back end. Organizations want to retain flexibility without introducing new complexity or requiring significant new capital investment.
In this sense, coming from the IBM Spectrum Storage™ family, IBM Spectrum Virtualize for Public Cloud supports clients in their IT architectural transformation and transition toward the cloud service model. It enables hybrid cloud strategies and, for cloud-native workloads, provides the benefits of familiar and sophisticated storage functionality in public cloud data centers, enhancing the existing cloud offering.
Running on-premises (on-prem), IBM Spectrum Virtualize software supports capacity built
into storage systems, and capacity in over 400 different storage systems from IBM and other
vendors. This wide range of storage support means that the solution can be used with
practically any storage in a data center today and integrated with its counterpart, IBM Spectrum Virtualize for Public Cloud, which supports the IBM Cloud block storage offering in its two variants: the Performance and Endurance storage options.
IBM SAN Volume Controller is based on an IBM project started in the second half of 1999 at
the IBM Almaden Research Center. The project was called COMmodity PArts Storage
System or COMPASS. However, most of the software has been developed at the IBM Hursley
Labs in the UK. One goal of this project was to create a system that was almost exclusively
composed of commercial off-the-shelf (COTS) standard parts. Yet, it had to deliver a level of
performance and availability that was comparable to the highly optimized storage controllers
of previous generations.
COMPASS also had to address a major challenge for the heterogeneous open systems
environment, namely to reduce the complexity of managing storage on block devices. The
first documentation that covered this project was released to the public in 2003 in the form of
the IBM Systems Journal, Vol. 42, No. 2, 2003, “The software architecture of a SAN storage
control system” by J. S. Glider, C. F. Fuente, and W. J. Scales.
The article is available at this website.
The first release of IBM System Storage SAN Volume Controller was announced in July 2003.
This SDS layer is designed to virtualize and optimize storage within the data center or
managed private cloud service. Whether in an on-premises private or managed cloud
service, this offering reduces the complexities and cost of managing SAN FC- or iSCSI-based
storage while improving availability and enhancing performance. For more information, see
Implementing IBM Spectrum Virtualize software only, REDP-5392.
Part of the IBM Spectrum family, IBM Spectrum Virtualize for Public Cloud (released in 2017)
is the solution adapted for public cloud implementations of IBM Spectrum Virtualize Software
only. At the time of the writing of this book, the software supports the deployment on any
Intel-based cloud bare-metal servers (not virtualized environment) and is backed by the
storage available on the public cloud catalog.
The license pricing aligns with the monthly consumption model of both servers and back-end
storage within IBM Cloud. IBM Spectrum Virtualize for Public Cloud provides a new solution
to combine on-premises and cloud storage for higher flexibility at lower cost for a
comprehensive selection of use cases complementing the existing implementation and
options for IBM Spectrum Virtualize and IBM SAN Volume Controller.
Table 1-1 shows the features of IBM Spectrum Virtualize for both on-premises and public
cloud products at a glance.
Table 1-1   IBM Spectrum Virtualize on-premises and public cloud products at a glance

Storage supported
  On-premises: Built-in and more than 400 different systems from IBM and others
  Public cloud: IBM Cloud Performance and Endurance storage

Licensing approach
  On-premises: Tiered cost per TB (IBM SAN Volume Controller) or per enclosure (Storwize family)
  Public cloud: Simple, flat cost per capacity; monthly licensing

Reliability, availability, and serviceability (RAS)
  On-premises: Integrated RAS capabilities
  Public cloud: Flexible RAS: IBM Cloud and software RAS capabilities

Service
  On-premises: IBM support for hardware and software
  Public cloud: IBM support for software in the IBM Cloud environment
1.2 IBM Spectrum Virtualize for Public Cloud
Designed for software-defined storage environments, IBM Spectrum Virtualize for Public
Cloud represents the solution for public cloud implementations and includes technologies that
both complement and enhance public cloud offering capabilities.
For example, traditional practices that provide data replication simply by copying storage at
one facility to largely identical storage at another facility aren’t an option where public cloud is
concerned. Also, using conventional software to replicate data imposes unnecessary loads
on application servers. More detailed use cases will be analyzed further in Chapter 5, “Typical
use cases for IBM Spectrum Virtualize for Public Cloud” on page 119.
IBM Spectrum Virtualize for Public Cloud delivers a powerful solution for the deployment of
IBM Spectrum Virtualize software in public clouds, starting with IBM Cloud. This new
capability provides a monthly license to deploy and use IBM Spectrum Virtualize in IBM Cloud
to enable hybrid cloud solutions, offering the ability to transfer data between on-premises data
centers using any IBM Spectrum Virtualize-based appliance and IBM Cloud.
With a deployment designed for the cloud, IBM Spectrum Virtualize for Public Cloud can be deployed in any of over 25 IBM Cloud data centers around the world. After the infrastructure is provisioned, an install script automatically installs the software.
The aggregation of volumes into storage pools makes it possible to better manage capacity, performance, and multiple tiers for the workloads. IBM Spectrum Virtualize for Public Cloud provides virtualization only at the disk layer (block-based) of the I/O stack, and for this reason it is referred to as block-level virtualization, or the block aggregation layer. For the sake of clarity, the block-level volumes provided by IBM Cloud are exposed as iSCSI target volumes, and are seen by IBM Spectrum Virtualize as managed disks (MDisks).
These MDisks are then aggregated into a storage pool, sometimes referred to as a managed
disk group (mdiskgrp). IBM Spectrum Virtualize then creates logical volumes (referred to as
volumes or VDisks) which are striped across all of the MDisks inside of their assigned pool.
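As an illustrative sketch only (the object names Pool0, vol0, and mdisk0 through mdisk2 are hypothetical, and the exact syntax should be verified against the CLI reference for your release), the grouping of MDisks into a pool and the creation of a striped volume follow this general pattern:

   # Sketch only: object names are hypothetical examples.
   mkmdiskgrp -name Pool0 -ext 1024            # create a storage pool with a 1024 MiB extent size
   addmdisk -mdisk mdisk0:mdisk1:mdisk2 Pool0  # add unmanaged MDisks (iSCSI back-end volumes) to the pool
   mkvdisk -name vol0 -mdiskgrp Pool0 -iogrp 0 -size 500 -unit gb   # volume striped across the pool's MDisks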
This virtualization terminology falls under the wider concept of software-defined storage (SDS), an approach to data storage in which the programming that controls storage-related tasks is decoupled from the physical storage hardware. This separation allows SDS solutions
to be placed over any existing storage systems or, more generally, installed on any
commodity x86 hardware and hypervisor.
Shifting to a higher level in the IT stack allows for a deeper integration and response to
application requirements for storage performance and capabilities. SDS solutions offer a full
suite of storage services (equivalent to traditional hardware systems) and federation of
multiple persistent storage resources: internal disk, cloud, other external storage systems, or
cloud/object platforms.
In general, SDS technology leverages the following concepts:
Shared nothing architecture (or in some cases a partial or fully shared architecture): with
no single point of failure and nondisruptive upgrade.
Scale-up or scale-out mode: adding building blocks for predictable increase in capacity,
performance and resiliency.
Multiple classes of service: file-based, object-based, block-based, and auxiliary/storage support services. SDS solutions may also be integrated together into a hybrid or composite SDS solution.
High availability (HA) and disaster recovery (DR): able to deliver the required levels of availability and durability through self-healing and self-adjustment.
Lower TCO: lowering the TCO for those workloads capable of using SDS.
In this sense, IBM Spectrum Virtualize for Public Cloud not only offers data replication
between Storwize family, FlashSystem V9000, FlashSystem 9100, IBM SAN Volume
Controller, or VersaStack and Public Cloud, but extends replication to all types of
supported virtualized storage on-premises. Working together, IBM Spectrum Virtualize
and IBM Spectrum Virtualize for Public Cloud support synchronous and asynchronous
mirroring between the cloud and on-premises for more than 400 different storage
systems from a wide variety of vendors. In addition, they support other services, such
as IBM FlashCopy® and IBM Easy Tier®.
– Disaster recovery strategies between on-premises and public cloud data centers as
alternative DR solutions
One of the reasons to replicate is to have a copy of the data from which to restart
operations in case of emergency. IBM Spectrum Virtualize for Public Cloud enables this for virtual and physical environments, thus adding new possibilities compared to the software replicators in use today, which handle virtual infrastructure only.
– Benefit from familiar, sophisticated storage functionality in the cloud to implement
reverse mirroring
IBM Spectrum Virtualize enables the possibility to reverse data replication to offload
from Cloud Provider back to on-premises or to another Cloud provider.
IBM Spectrum Virtualize, both on-premises and on cloud, provides a data strategy that is
independent of the choice of infrastructure, delivering tightly integrated functionality and
consistent management across heterogeneous storage and cloud storage. The software layer
provided by IBM Spectrum Virtualize on premises or in the cloud can provide a significant
business advantage by delivering more services faster and more efficiently, enabling real-time
business insights and supporting more customer interaction.
This software-only version of the established IBM Storwize family provides a compelling
solution to how SDS can be implemented in numerous types of solutions for storage
environments. IBM Spectrum Virtualize provides the following benefits of storage
virtualization and advanced storage capabilities:
Support for more than 400 different storage systems from a wide variety of vendors
Storage pooling and automated allocation with thin provisioning
Easy Tier automated tiering
IBM Real-time Compression™, enabling storing up to five times as much data for even the
most demanding applications
Software encryption to improve data security on existing storage (IBM Spectrum Virtualize
for Public Cloud uses cloud infrastructure encryption services)
FlashCopy and remote mirror for local and remote replication
Support for virtualized and containerized server environments including VMware (VVOL),
Microsoft Hyper-V, IBM PowerVM®, Docker, and Kubernetes
Figure 1-1 shows an overview of the IBM Spectrum Virtualize software-only solution.
Figure 1-1 IBM Spectrum Virtualize Software only with customer provided hardware
Table 1-2 summarizes IBM Spectrum Virtualize for Public Cloud features and benefits.
Table 1-2   IBM Spectrum Virtualize for Public Cloud features and benefits

Single point of control for cloud storage resources
  Designed to increase management efficiency
  Designed to help support application availability

Pools the capacity of multiple storage volumes
  Helps to overcome volume size limitations
  Helps to manage storage as a resource to meet business requirements, and not just as a set of independent volumes
  Helps administrators to better deploy storage as required beyond traditional “islands”
  Can help to increase the use of storage assets
  Insulates applications from maintenance or changes to the storage volume offering

Clustered pairs of Intel servers that are configured as IBM Spectrum Virtualize for Public Cloud data engines
  Use of cloud-catalog Intel servers as the foundation
  Designed to avoid single points of hardware failure

Manage tiered storage infrastructures
  Helps to balance performance needs against costs in a tiered storage environment
  Automated policy-driven control to put data in the right place at the right time among different storage tiers/classes

Easy-to-use IBM Storwize family management interface
  Single interface for storage configuration, management, and service tasks, regardless of the configuration available from the public cloud portal
  Helps administrators use storage assets/volumes more efficiently
  IBM Spectrum Control™ Insights and IBM Spectrum Protect™ for additional capabilities to manage capacity and performance

Advanced network-based copy services
  Copy data across multiple storage systems with IBM FlashCopy
  Copy data across metropolitan and global distances as needed to create high-availability storage solutions between multiple data centers

Thin provisioning and snapshot replication
  Reduce volume requirements by using storage only when data changes
  Improve storage administrator productivity through automated on-demand storage provisioning
  Snapshots available on lower-tier storage volumes

Third-party native integration
  Integration with VMware vRealize and Site Recovery Manager
Note: The following features are not supported in the first IBM Spectrum Virtualize for
Public Cloud release:
Stretched cluster
IBM HyperSwap®
Real-time Compression
Data deduplication
Encryption
Data reduction
Unmap
Cloud backup
Transparent cloud tiering
Hot spare node
Distributed RAID
N-Port ID Virtualization
Some of these features are already in the plan for future releases and will be prioritized for
implementation based on customer feedback.
IBM Cloud infrastructure (shown in Figure 1-2 on page 10) is a proven, established platform for today's computing needs. By deploying IBM Spectrum Virtualize on a cloud platform, the features of IBM SAN Volume Controller and IBM Spectrum Virtualize software only are further enhanced for changing environments. Customers can decide which configuration to start with: just like a regular IBM SAN Volume Controller or Storwize product, a two-node cluster can be upgraded dynamically all the way up to an eight-node cluster without impacting the production environment.
(Figure content: hosts attach over iSCSI/FC on premises and over iSCSI in the cloud; two IBM Spectrum Virtualize for Public Cloud nodes (node 1 and node 2) run on RHEL servers with IP replication from on premises and IP clustering between the nodes; IBM Cloud block storage, Endurance or Performance, is attached to the nodes over iSCSI.)
Figure 1-2 High-level architecture of IBM Spectrum Virtualize for Public Cloud
Among multiple infrastructure deployment models on IBM Cloud, IBM Spectrum Virtualize is
supported on bare-metal servers. IBM Cloud bare-metal servers provide the raw horsepower
for processor-intensive and disk I/O-intensive workloads. They’re also completely
customizable, down to the exact specifications, which enables unmatched control of the cloud
infrastructure.
IBM Cloud bare metal provides 10 Gbps network interfaces that sit on a Triple Network
Architecture with dedicated backbone network. IBM Cloud offers a wide range of data centers
and Network Points of Presence (PoPs) throughout the world. Connecting these data centers
and PoPs is a redundant, high-speed network. Therefore, no traffic between data centers or
PoPs is ever routed over the Internet but rather stays in IBM Cloud’s private network. Better
yet, all network traffic on the internal network is unmetered and therefore without incremental
cost.
This creates compelling deployment architecture opportunities, especially for failover and
disaster recovery where it is now possible to mirror data between data centers without having
to pay for the (sometimes significant) traffic between data centers. The Triple Network
Architecture provides three network interfaces to every server, regardless of whether it is a bare-metal server or a virtual compute instance. Each server is complemented with a five physical network interface card (NIC) configuration that provides the following types of access:
Public internet access
Private network
Management access
The available private interfaces are 2 x 10 Gbps, and currently it is not possible to move the public interfaces to the private network to increase the number of private NIC ports. IBM Cloud natively enforces a physical separation between the public and private network interfaces, and in the IBM Spectrum Virtualize deployment all the intra-node traffic is routed within the private network.
Networking for IBM Spectrum Virtualize for Public Cloud is all IP based with no Fibre Channel
(FC), which is not supported by IBM Cloud. This includes inter-node communication and
inter-cluster replication (remote replication, on-premises to cloud or cloud to cloud).
IBM Cloud provides IBM Spectrum Virtualize with flash-backed block storage on
high-performance iSCSI targets. The storage is presented as a block-level device that
customers can format to best fit their needs. The iSCSI storage resides on the private
network and does not count toward public and private bandwidth allotments. Options
available are either Performance, with granular IOPS (input/output operations per second) increments from 1,000 to 48,000, or Endurance, with predefined IOPS-per-GB tiers.
Both are available as volumes sized from 20 GB to 12 TB. All volumes are encrypted by default with
IBM Cloud-managed encryption.
Note: IBM Cloud Endurance and Performance volumes, when used as back-end storage for IBM Spectrum Virtualize on IBM Cloud, are no different from a technical standpoint. As long as the IOPS profile fits the application requirements, the two solutions are identical from an IBM Spectrum Virtualize perspective. The only notable advantage is the granularity of IBM Cloud Performance storage during the definition of the IOPS profile, which allows for a much more accurate estimation of the required capability and minimizes waste.
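As a purely hypothetical sizing illustration (the tier values are examples, not a statement of the IBM Cloud catalog or pricing): an application that needs roughly 8,000 IOPS on about 2 TB of back-end capacity could be served either by a 2,048 GB Endurance volume on a 4 IOPS/GB tier (2,048 GB x 4 IOPS/GB ≈ 8,192 IOPS) or by a 2,048 GB Performance volume provisioned directly at 8,000 IOPS; the Performance option matches the requirement exactly instead of rounding up to the next tier.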
An install script automatically installs the IBM Spectrum Virtualize for Public Cloud software. The installation is performed after first purchasing the bare-metal servers using an IBM Cloud IaaS account (formerly known as SoftLayer® and Bluemix).
For more information about architecture and networking, see Chapter 2, “Solution
architecture” on page 17. For more information about installation steps, see Chapter 4,
“Implementation” on page 71.
1.3 Use cases for IBM Spectrum Virtualize for Public Cloud
From a high-level perspective, IBM Spectrum Virtualize (on premises or in the cloud) delivers
leading benefits that improve how to use storage in three key ways:
Improving data value
IBM Spectrum Virtualize software helps reduce the cost of storing data by increasing
utilization and accelerating applications to speed business insights.
Increasing data security
IBM Spectrum Virtualize helps enable a high-availability strategy that includes protection for data and application mobility and disaster recovery.
Enhancing data simplicity
IBM Spectrum Virtualize provides a data strategy that is independent of infrastructure,
delivering tightly integrated functionality and consistent management across
heterogeneous storage.
Figure 1-3 shows the deployment models for IBM Spectrum Virtualize.
Figure 1-3 IBM Spectrum Virtualize in the Public Cloud deployment models
These three key benefits span over multiple use cases where IBM Spectrum Virtualize
applies. In fact, IBM Spectrum Virtualize for Public Cloud provides a new solution to combine
on-premises and cloud storage for higher flexibility at lower cost for a comprehensive
selection of use cases, both for hybrid cloud solutions and cloud-native architectures. This includes, but is not limited to:
Data migration and disaster recovery (DR) to Public Cloud
Data Center extension or consolidation to Public Cloud
Data migration and disaster recovery (DR) between Public Cloud data centers
Federation to/between multiple cloud providers (statement of direction)
As shown in Figure 1-4 on page 13, many of the existing on-premises environments are heterogeneous and composed of several different technologies, such as VMware, Hyper-V, KVM, Oracle, and IBM. To achieve data consistency when migrating or replicating, a storage-based replica is the preferred solution, rather than multiple specific tools working together, which introduce complexity into their steady-state management.
Figure 1-4 Heterogeneous environments are migrated or replicated to IBM Cloud
IBM Spectrum Virtualize on IBM Cloud enables hybrid cloud solutions, offering the ability to transfer data between IBM Cloud and on-premises data centers that use any IBM Spectrum Virtualize-based appliance, such as IBM SAN Volume Controller, Storwize family products, FlashSystem V9000, FlashSystem 9100, VersaStack with Storwize family or IBM SAN Volume Controller appliances, or IBM Spectrum Virtualize Software Only. Storage from other non-IBM vendors is also supported.
Figure 1-5 shows a typical scenario. Through IP-based replication with Global or Metro Mirror,
or Global Mirror with Change Volumes, users can create secondary copies of their
on-premises data in the public cloud for disaster recovery, workload redistribution, or
migration of data from on premises data centers to the public cloud.
(Figure content: on-premises IBM Storwize family, FlashSystem V9000, VersaStack, or SVC systems, virtualizing almost 400 different supported storage systems, replicate VMware replication groups to IBM Spectrum Virtualize for Public Cloud running on IBM Cloud bare-metal servers backed by IBM Cloud Endurance or Performance block storage tiers; on failover, VMware is deployed on VM servers in IBM Cloud, attached over iSCSI to the IBM Cloud host servers.)
Figure 1-5 Hybrid scenario: VMware on IBM Cloud with Site Recovery Manager solution
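The IP-based replication behind this scenario is configured with the standard IBM Spectrum Virtualize remote copy commands. The following sequence, run on the on-premises system, is a sketch only: the cluster name, IP address, bandwidth value, and volume names are hypothetical, and the documented procedure is in 4.3, “Configuring replication from on-prem IBM Spectrum Virtualize to IBM Spectrum Virtualize for IBM Cloud” on page 105.

   # Sketch only: CloudCluster, 10.0.1.10, vol_prod, and vol_dr are hypothetical names and values.
   mkippartnership -type ipv4 -clusterip 10.0.1.10 -linkbandwidthmbits 1000       # IP partnership to the cloud cluster
   mkrcrelationship -master vol_prod -aux vol_dr -cluster CloudCluster -global -name rel_prod_dr   # Global Mirror relationship
   startrcrelationship rel_prod_dr                                                # start the initial background copy
   # For Global Mirror with Change Volumes, change volumes are then attached with chrcrelationship.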
In this sense, IBM Spectrum Virtualize for Public Cloud represents the ideal target: by abstracting the storage layer, it avoids any dependency on a specific vendor (at both the storage and application layers), even if multiple storage technologies are involved.
You can create two-node to eight-node high-availability clusters on IBM Cloud, similar to on-premises IBM SAN Volume Controller appliances. IBM Cloud block storage can be easily
managed through IBM Spectrum Virtualize for Public Cloud for persistent data storage, and
as the target of remote copy services.
In this case, the replication scenario from multiple storage resources is not applicable but the
complex and heterogeneous environment still applies. IBM Spectrum Virtualize for Public
Cloud extends and enhances the capabilities of IBM Cloud block storage by using
enterprise-class features, such as local and remote copy services. Replication features are often available from cloud providers, but they are limited to specific RPOs and features that are not negotiable or adjustable because of the standardized nature of public clouds.
For this reason, IBM Spectrum Virtualize for Public Cloud overcomes some of the limitations of the public cloud catalog and is also a good fit for the cloud-to-cloud scenario. As shown in Figure 1-5, a VMware environment distributed over multiple IBM Cloud data centers is also a common use case.
Last but not least is the need for users to maintain their existing skills and tools and to access the cloud as if it were their own data center, for a smooth transition to off-premises environments.
1.4 Licensing
Within the public cloud model, servers and storage resources are provisioned and priced based on either monthly or hourly usage. To adapt to the public cloud flexibility, IBM
Spectrum Virtualize for Public Cloud has monthly licensing based on the number of terabytes
of IBM Cloud block storage that is managed by IBM Spectrum Virtualize for Public Cloud.
Additional options for metered usage are also available so you can purchase more capacity
as needed.
The IBM Cloud compute (bare-metal nodes) and back-end storage capacity are part of a separate purchase and, along with network equipment, are not included in the licensing. For IBM Spectrum Virtualize for Public Cloud, licenses are available on IBM Marketplace and Passport Advantage® under IBM Spectrum Virtualize for Public Cloud (5737-F08), with two licensing models (it is not available as shrink wrap). These are all-inclusive and flat $/TB:
IBM Spectrum Virtualize for Public Cloud 10 Terabytes Monthly Base License
IBM Spectrum Virtualize for Public Cloud 1 Terabyte Monthly License Incremental capacity
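For example (an illustration of the licensing model only, not a price quotation): a deployment in which IBM Spectrum Virtualize for Public Cloud manages 25 TB of IBM Cloud block storage would typically require one 10 Terabytes Monthly Base License plus fifteen 1 Terabyte Monthly incremental licenses, assuming the base license counts toward the total managed capacity.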
After the licenses are purchased, the deployment model follows a Bring Your Own License to
IBM Cloud. A semi-automated install script for IBM Spectrum Virtualize will install the
software according to the procedure described in 4.1.3, “Semi Automated installation” on
page 78. Automated monthly capacity metering is available through Call Home for additional capacity purchases.
Note: Additional capacity purchases are available through Passport Advantage according
to the license terms for this offering. Automated billing is not enabled at this time.
Chapter 2. Solution architecture
However, each environment is unique and as such, it is important to review the planning
considerations in Chapter 3, “Planning and preparation for the IBM Spectrum Virtualize for
Public Cloud deployment” on page 31 before designing your solution.
Each system provisioned in the IBM Cloud is connected to a public and private network in the
IBM Cloud PoD (point of delivery). The public network is internet routable and accessible by
default. This network is used to serve internet and web applications in the cloud and to allow
users to access the internet if needed.
The private network is internal to IBM Cloud and is used typically for services provided by IBM
in the cloud. This includes access to block, file, and Object Storage. This also includes
communication between servers that are provisioned in the cloud. Additionally, clients can
terminate a Multiprotocol Label Switching (MPLS) connection into the private network and allow
access between their on-premises data center resources and IBM Cloud.
When deployed in the cloud environment, a network gateway appliance as seen in Figure 2-1
controls access going to and from both the public and private networks provisioned to a
particular environment. This appliance can be used to secure cloud servers and applications.
This appliance can also be used to terminate IPSec VPN connections between sites over the
internet.
Both the public and the private networks converge at the IBM Cloud backbone network
environment and the POP (Point of Presence). These network connections serve as the
gateway for external connections coming into the IBM Cloud through MPLS, the termination
point for internet access, and as a network to link together multiple IBM Cloud data centers.
2.2 Storage virtualization
Storage virtualization is a term that is used extensively throughout the storage industry. It can
be applied to various technologies and underlying capabilities. In reality, most storage devices
technically can claim to be virtualized in one form or another. IBM describes storage
virtualization as a technology that makes one set of resources resemble another set of
resources, preferably with more desirable characteristics. It is a logical representation of
resources that is not constrained by physical limitations and hides part of the complexity. It
also adds or integrates new function with existing services and can be nested or applied to
multiple layers of a system.
The focus of this publication is virtualization at the disk layer, which is referred to as
block-level virtualization or the block aggregation layer. A description of file system
virtualization is beyond the intended scope of this book.
One of the nodes within the system is assigned the role of the configuration node. The
configuration node manages the configuration activity for the system and owns the cluster IP
address that is used to access the management Graphical User Interface (GUI) and
Command Line Interface (CLI) connections. If this node fails, the system chooses a new node
to become the configuration node.
Because the active nodes are installed in pairs, each node maintains cache coherence with
its partner to provide seamless failover functionality and fault tolerance, which is described
next.
A specific volume is always presented to a host server by a single I/O group in the system.
When a host server performs I/O to one of its volumes, all the I/Os for a specific volume are
directed to one specific I/O group in the system. Under normal conditions, the I/Os for that
specific volume are always processed by the same node within the I/O group. This node is
referred to as the preferred node for this specific volume. As soon as the preferred node
receives a write into its cache, that write is mirrored to the partner node before the write is
acknowledged back to the host. Reads are serviced by the preferred node. For more information, see 2.3.7, “Cache” on page 23.
Both nodes of an I/O group act as the preferred node for their own specific subset of the total
number of volumes that the I/O group presents to the host servers. However, both nodes also
act as failover nodes for their respective partner node within the I/O group. Therefore, a node
takes over the I/O workload from its partner node, if required. For this reason, it is mandatory
for servers that are connected to use multipath drivers to handle these failover situations.
If required, host servers can be mapped to more than one I/O group within the Spectrum
Virtualize system. Therefore, they can access volumes from separate I/O groups. You can
move volumes between I/O groups to redistribute the load between the I/O groups. Modifying
the I/O group that services the volume can be done concurrently with I/O operations if the
host supports nondisruptive volume moves and is zoned to support access to the target I/O
group.
It also requires a rescan at the host level to ensure that the multipathing driver is notified that
the allocation of the preferred node changed, and the ports by which the volume is accessed
changed. This modification can be done in the situation where one pair of nodes becomes
overused.
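As a sketch of this operation (volume and I/O group names are hypothetical, and the exact commands should be confirmed for your release and for host support of nondisruptive volume moves), the move can be driven from the CLI along these lines:

   # Sketch only: vol0, io_grp0, and io_grp1 are hypothetical names.
   addvdiskaccess -iogrp io_grp1 vol0   # allow access to the volume through the target I/O group
   movevdisk -iogrp io_grp1 vol0        # move caching and preferred-node ownership to the new I/O group
   rmvdiskaccess -iogrp io_grp0 vol0    # after the host rescan, remove access through the original I/O group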
2.3.3 System
The current IBM Cloud Spectrum Virtualize system or clustered system consists of 1 - 4 I/O
groups. Certain configuration limitations are then set for the individual system. For example,
at the time of writing, the maximum number of volumes that is supported per system is 10000,
or the maximum managed disk capacity that is supported is ~28 PiB (pebibytes) or 32 PB (petabytes)
per system.
All configuration, monitoring, and service tasks are performed at the system level.
Configuration settings are replicated to all nodes in the system. To facilitate these tasks, a
management IP address is set for the system.
Note: The management IP is also referred to as the system or cluster IP and is active on
the configuration node. Each node in the system is also assigned a service IP to allow for
individually interacting with the node directly.
A process is provided to back up the system configuration data onto disk so that it can be
restored if there is a disaster. This method does not back up application data. Only the
Spectrum Virtualize system configuration information is backed up.
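A minimal sketch of that process, assuming CLI access to the system (the generated file names and their location should be verified for your code level):

   svcconfig backup   # create the configuration backup files (svc.config.backup.xml and companions) on the configuration node
   # Copy the generated files off the system, for example with scp, and store them with your recovery documentation.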
For the purposes of remote data mirroring, two or more systems must form a partnership
before relationships between mirrored volumes are created.
For more information about the maximum configurations that apply to the system, I/O group,
and nodes, see the IBM Spectrum Virtualize 8.2.1 configuration limits web page.
2.3.4 MDisks
The IBM Spectrum Virtualize system and its I/O groups view the storage that is presented to
the LAN by the back-end controllers as several disks or LUNs, which are known as managed
disks or MDisks. Because Spectrum Virtualize does not attempt to provide recovery from
physical failures within the back-end controllers, an MDisk is typically provisioned from a
RAID array.
However, the application servers do not see the MDisks at all. Rather, they see several logical
disks, which are known as virtual disks or volumes, which are presented by the I/O groups
through the LAN (iSCSI) to the servers.The MDisks are placed into storage pools where they
are divided into several extents used to create the virtual disks or volumes.
For more information about the total storage capacity that is manageable per system
regarding the selection of extents, see the IBM Spectrum Virtualize 8.2.1 configuration limits
web page.
MDisks presented to Spectrum Virtualize can have the following modes of operation:
Unmanaged MDisk
An MDisk is reported as unmanaged when it is not a member of any storage pool. An
unmanaged MDisk is not associated with any volumes and has no metadata that is stored
on it. Spectrum Virtualize does not write to an MDisk that is in unmanaged mode, except
when it attempts to change the mode of the MDisk to one of the other modes.
Managed MDisk
Managed MDisks are members of a storage pool and they contribute extents to the
storage pool. This mode is the most common and normal mode for an MDisk.
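As a short sketch of how these modes appear in practice (output abbreviated and illustrative), newly mapped iSCSI LUNs are discovered and their mode checked from the CLI:

   detectmdisk   # scan the iSCSI back-end for newly mapped LUNs
   lsmdisk       # list MDisks; the mode column shows unmanaged or managed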
Each MDisk in the storage pool is divided into extents. The size of the extent is selected by
the administrator when the storage pool is created, and cannot be changed later. The size of
the extent can be 16 MiB (mebibyte) - 8192 MiB, with the default being 1024 MiB.
It is a preferred practice to use the same extent size for all storage pools in a system. This
approach is a prerequisite for supporting volume migration between two storage pools. If the
storage pool extent sizes are not the same, you must use volume mirroring to copy volumes
between pools.
2.3.6 Volumes
Volumes are logical disks that are presented to the host or application servers by the
Spectrum Virtualize. The hosts cannot see the MDisks; they can see only the logical volumes
that are created from combining extents from a storage pool or passed through Spectrum
Virtualize in the case of Image Mode objects.
Sequential
A sequential volume is where the extents are allocated from one MDisk. This is usually
only used when provisioning Storwize (V7000, V5000, and V3000) volumes as backend
storage to Spectrum Virtualize systems and requires specification of the MDisks from
which extents will be drawn. A second MDisk is specified if it is a mirrored sequential
volume.
Image mode
Image mode volumes are special volumes that have a direct relationship with one MDisk.
The most common use case of image volumes is a data migration from your old (typically
non-virtualized) storage to the Spectrum Virtualize-based virtualized infrastructure.
When the image mode volume is created, a direct mapping is made between extents that
are on the MDisk and the extents that are on the volume. The logical block address (LBA)
x on the MDisk is the same as the LBA x on the volume, which ensures that the data on
the MDisk is preserved as it is brought into the clustered system.
Some virtualization functions are not available for image mode volumes, so it is often useful to
migrate the volume into a new storage pool. After migration, the data then resides in a volume
that is backed by a fully managed pool.
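A sketch of that migration path follows (the MDisk, pool, and volume names are hypothetical, and the parameters should be checked against the CLI reference):

   # Sketch only: legacy_lun0, ImagePool, Pool0, and vol_legacy are hypothetical names.
   mkvdisk -name vol_legacy -mdiskgrp ImagePool -iogrp 0 -vtype image -mdisk legacy_lun0   # image-mode volume that preserves existing data
   migratevdisk -vdisk vol_legacy -mdiskgrp Pool0   # migrate the extents into a fully managed pool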
2.3.7 Cache
The primary benefit of storage cache is to improve I/O response time. Reads and writes to a
magnetic disk drive experience seek and latency time at the drive level, which can result in
1 ms - 10 ms of response time (for an enterprise-class disk).
IBM Spectrum Virtualize provides a flexible cache model, and the node’s memory can be
used as read or write cache. The cache management algorithms allow for improved
performance of many types of underlying disk technologies. IBM Spectrum Virtualize’s
capability to manage, in the background, the destaging operations that are incurred by writes
(in addition to still supporting full data integrity) assists with IBM Spectrum Virtualize’s
capability in achieving good database performance.
The cache is separated into two layers: upper cache and lower cache.
Figure 2-3 on page 24 shows the separation of the upper and lower cache.
The upper cache delivers fast write response times to the host by being as high up in the I/O
stack as possible. The lower cache works to help ensure that cache between nodes are in
sync, pre-fetches data for an increased read cache hit ratio on sequential workloads, and
optimizes the destaging of I/O to the backing storage controllers.
Combined, the two levels of cache also deliver the following functionality:
Pins data when the LUN goes offline
Provides enhanced statistics for IBM Spectrum Control or Storage Insights, and maintains
compatibility with an earlier version
Provides trace data for debugging
Reports media errors
Resynchronizes cache correctly and provides the atomic write functionality
Ensures that other partitions continue operation when one partition becomes 100% full of
pinned data
Supports fast-write (two-way and one-way), flush-through, and write-through
Integrates with T3 recovery procedures
Supports two-way operation
Supports none, read-only, and read/write as user-exposed caching policies
Supports flush-when-idle
Supports expanding cache as more memory becomes available to the platform
Supports credit throttling to avoid I/O skew and offer fairness/balanced I/O between the
two nodes of the I/O group
Enables switching of the preferred node without needing to move volumes between I/O
groups
2.3.8 Easy Tier
Easy Tier monitors the host I/O activity and latency on the extents of all volumes with the
Easy Tier function turned on in a multitier storage pool over a 24-hour period.
Next, it creates an extent migration plan that is based on this activity, and then dynamically
moves high-activity or hot extents to a higher disk tier within the storage pool. It also moves
extents whose activity dropped off or cooled down from the high-tier MDisks back to a
lower-tiered MDisk. The condition for hot extents is frequent small-block (64 KB or less) reads.
Easy Tier: The Easy Tier function can be turned on or off at the storage pool and volume
level.
The IBM Easy Tier function can make it more appropriate to use smaller storage pool extent
sizes. The usage statistics file can be offloaded from the Spectrum Virtualize nodes. Then,
you can use the IBM Storage Tier Advisor Tool (STAT) to create a summary report. STAT is
available at no additional cost at this website.
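The following sketch shows how these settings are typically applied from the CLI; the pool and volume names are illustrative, and the available -easytier values can vary by code level.

# Turn Easy Tier on for a storage pool (example name Pool0).
chmdiskgrp -easytier on Pool0
# Exclude a single volume (example name vol_logs) from Easy Tier migration.
chvdisk -easytier off vol_logs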
2.3.9 Hosts
Volumes can be mapped to a host to allow a specific server access to a set of volumes. A
host within Spectrum Virtualize is a collection of iSCSI qualified names (IQNs) that are
defined on the specific server.
The iSCSI software in IBM Spectrum Virtualize supports IP address failover when a node is
shut down or rebooted. As a result, a node failover (when a node is rebooted) can be handled
without a multipath driver installed on the iSCSI-attached server.
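As a brief example (the host name and IQN are placeholders, not values from this deployment), a host object and a volume mapping can be created from the CLI as follows.

# Define a host object from the iSCSI qualified name reported by the server's initiator.
mkhost -name appserver1 -iscsiname iqn.1994-05.com.redhat:appserver1
# Map an existing volume (example name vol0) to that host.
mkvdiskhostmap -host appserver1 vol0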
2.3.11 iSCSI
The iSCSI function is a software function that is provided by the IBM Spectrum Virtualize
code. IBM introduced software capabilities that allow the underlying virtualized storage to
attach to IBM Spectrum Virtualize by using the iSCSI protocol.
The major functions of iSCSI include encapsulation and the reliable delivery of Command
Descriptor Block (CDB) transactions between initiators and targets over an IP network, even
a potentially unreliable one.
Every iSCSI node in the network must have an iSCSI name and address. An iSCSI name is a
location-independent, permanent identifier for an iSCSI node. An iSCSI node has one iSCSI
name, which stays constant for the life of the node. The terms initiator name and target name
also refer to an iSCSI name.
An iSCSI address specifies not only the iSCSI name of an iSCSI node, but a location of that
node. The address consists of a host name or IP address, a TCP port number (for the target),
and the iSCSI name of the node. An iSCSI node can have any number of addresses, which
can change at any time, particularly if they are assigned by way of Dynamic Host
Configuration Protocol (DHCP). An IBM Spectrum Virtualize node represents an iSCSI node
and provides statically allocated IP addresses.
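For example, on a Linux host that uses the standard open-iscsi tools, the statically allocated target addresses can be discovered and logged in to roughly as follows; the target IP address is a placeholder, and multipath configuration is outside the scope of this sketch.

# Discover the iSCSI targets presented by a Spectrum Virtualize node port (example IP address).
iscsiadm -m discovery -t sendtargets -p 10.183.120.20
# Log in to the discovered targets.
iscsiadm -m node --login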
2.3.12 IP replication
IP replication allows data replication between IBM Spectrum Virtualize family members. IP
replication uses IP-based ports of the cluster nodes.
The configuration of the system is straightforward and IBM Storwize family systems normally
find each other in the network and can be selected from the GUI.
IP connections that are used for replication can have long latency (the time to transmit a
signal from one end to the other), which can be caused by distance or by many “hops”
between switches and other appliances in the network. Traditional replication solutions
transmit data, wait for a response, and then transmit more data, which can result in network
utilization as low as 20% (based on IBM measurements). This effect worsens as latency
increases.
Bridgeworks SANSlide technology, which is integrated with the IBM Storwize family, requires
no separate appliances and so requires no additional cost or configuration steps. It uses
artificial intelligence (AI) technology to transmit multiple data streams in parallel, adjusting
automatically to changing network environments and workloads.
SANSlide improves network bandwidth utilization up to 3x. Therefore, customers can deploy
a less costly network infrastructure, or take advantage of faster data transfer to speed
replication cycles, improve remote data currency, and enjoy faster recovery.
Synchronous remote copy ensures that updates are committed at both the primary and the
secondary volumes before the application considers the updates complete. Therefore, the
secondary volume is fully up to date if it is needed in a failover. However, the application is
fully exposed to the latency and bandwidth limitations of the communication link to the
secondary volume. In a truly remote situation, this extra latency can have a significant
adverse effect on application performance at the primary site.
Special configuration guidelines exist for SAN fabrics and IP networks that are used for data
replication. There must be considerations about the distance and available bandwidth of the
intersite links.
A function of Global Mirror designed for low bandwidth has been introduced in IBM Spectrum
Virtualize. It uses change volumes that are associated with the primary and secondary
volumes. These change volumes are used to record changes to the primary volume that are
transmitted to the remote volume on an interval specified by the cycle period. When a
successful transfer of changes from the master change volume to the auxiliary volume has
been achieved within a cycle period, a snapshot is taken at the remote site from the auxiliary
volume onto the auxiliary change volume to preserve a consistent state and a freeze time is
recorded. This function is enabled by setting the Global Mirror cycling mode.
Figure 2-4 shows an example of this function where you can see the association between
volumes and change volumes.
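The following CLI outline shows how such a cycling-mode relationship might be set up; the cluster IP, bandwidth, and object names are placeholders, and the exact options depend on the code level, so treat this as a sketch rather than the definitive procedure.

# Create an IP partnership with the remote cluster (example cluster IP and link bandwidth).
mkippartnership -type ipv4 -clusterip 10.200.0.10 -linkbandwidthmbits 1000
# Create a Global Mirror relationship for a volume (example names) with the remote system.
mkrcrelationship -master vol0 -aux vol0_dr -cluster REMOTE_CLUSTER -global -name rel_vol0
# Enable the low-bandwidth cycling mode that uses change volumes.
chrcrelationship -cyclingmode multi rel_vol0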
FlashCopy can be performed on multiple source and target volumes. FlashCopy permits the
management operations to be coordinated so that a common single point in time is chosen
for copying target volumes from their respective source volumes.
With IBM Spectrum Virtualize, multiple target volumes can undergo FlashCopy from the same
source volume. This capability can be used to create images from separate points in time for
the source volume, and to create multiple images from a source volume at a common point in
time. Source and target volumes can be thin-provisioned volumes.
Reverse FlashCopy enables target volumes to become restore points for the source volume
without breaking the FlashCopy relationship, and without waiting for the original copy
operation to complete. IBM Spectrum Virtualize supports multiple targets, and therefore
multiple rollback points.
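As a minimal sketch (the volume and mapping names are examples only), a FlashCopy mapping is created and started from the CLI as follows.

# Create a FlashCopy mapping from a source volume to an existing target volume of the same size.
mkfcmap -source vol0 -target vol0_snap -name fcmap_vol0 -copyrate 50
# Prepare and start the mapping to capture the point-in-time copy.
startfcmap -prep fcmap_vol0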
Most clients aim to integrate the FlashCopy feature for point-in-time copies and quick recovery
of their applications and databases. An IBM solution for this is provided by IBM Spectrum
Protect, which is described on this website.
In this environment, the on-premises data center is connected to the IBM Cloud by using an
IPsec VPN that terminates on a network gateway appliance in the cloud. In addition, we use
native IP replication between a Storwize system and IBM Spectrum Virtualize. This
storage-based replication provides data consistency between sites.
Note: At the time of this writing, the IBM Cloud had recently undergone multiple
rebrandings from IBM SoftLayer to IBM Bluemix to the current IBM Cloud. Some of the
screens illustrated continue to carry some SoftLayer and Bluemix branding. In the
document, all references call the portal the IBM Cloud Portal.
All the following sections presume that the installer has a login ID to the IBM Cloud Portal,
which can be accessed from https://control.bluemix.net.
Users ordering servers need privileges to provision servers, enable public network
connections, configure VLANs and subnets, and order storage on the IBM Cloud Portal
account.
Note: Because we used a special demo account for this configuration, the prices you see
in this section might be different from the prices you see when provisioning your own
resources on the IBM Cloud portal.
2. At the top of the Server List page, an input field labeled Select Data Center with a pull-down
selection list enables selection of the cloud data center where the servers are to be
provisioned. Select a data center before proceeding to the server selection. Take care in
selecting the data center to ensure that it has the appropriate resources that are needed
for Spectrum Virtualize, especially the network card configurations and back-end storage
options. Minimizing distance is an important consideration, but if the chosen data center
does not contain the appropriate resources, proximity is irrelevant (see Figure 3-2).
After the data center is selected, a list of servers is updated to display only those models
available in the selected data center.
Tip: If you choose a server from the server list before selecting the data center, your
selection is lost when the data center selection updates the list.
3. Scroll down to the Dual Processor Multi-Core Servers. IBM Spectrum Virtualize requires a
dual-processor server with a minimum of 12 cores. Servers in the E5-2600 model class
are suggested. The selected server must be at least V3, support 64 GB RAM, and have
slots for at least four disk drives. A server is selected by clicking the server monthly price,
usually referred to as the monthly recurring cost (MRC) (see Figure 3-3).
In Figure 3-4 on page 35, the E5-2620-V4 server is selected for configuration and ordering.
When the server configuration page is displayed, the selected server appears in a list of
related models, and allows a change of model for the final selection.
Tip: Spectrum Virtualize does not support Intel TXT verification capabilities so the TXT
option should be left cleared.
If the server allows multiple RAM configurations, it can be changed. Spectrum Virtualize
currently cannot utilize more than 64 GB of RAM so there is no benefit in selecting more
than the minimum 64 GB of RAM for the server. This might change in future releases, in
which case additional RAM could be selected.
Figure 3-4 System configuration
4. In the Operating System select section, select the category Red Hat, and click the radio
button for Red Hat Enterprise Linux 7.x, as shown in Figure 3-5 on page 36. The
annotation indicating that a third-party agreement applies means that you need to accept
the Terms and Conditions for Red Hat support on the order completion window.
Figure 3-5 Select Redhat
5. In the Hard Drive section, the server comes preconfigured with a single 1 TB SATA (Serial
Advanced Technology Attachment) drive, as shown in Figure 3-6. The standard
configuration of Spectrum Virtualize requires two independent physical disk partitions: one
for the system boot, operating system and application installation, and another for
application data. IBM Cloud preferred practices advise always protecting the boot
partition with a RAID-1 mirrored configuration. For this document, we are suggesting a
similar RAID-1 configuration for the second application data disk. Technically, the
installation works with only two JBOD (just a bunch of disks) disks in the system, but loss
of either disk would be the equivalent of a cluster node loss. We recommend the use of
RAID as defined in Figure 3-8 on page 37.
6. Click Add Disk three times to add three 1 TB drives. An icon for each added drive displays
in the frame as you add them. Do not change the selection in the drive list for other drive
types or sizes. If you have inadvertently selected and added another model drive, you can
remove it by clicking the drive icon and clicking Remove selected disks.
After adding the four 1 TB drives, click two of the drives as shown in Figure 3-7, then
select Create RAID Storage Group.
7. Select RAID-1 in the RAID Group configuration control selection as shown in Figure 3-8.
8. Leave the selection for the Linux Basic partition scheme set. Do not select the check box for
the Red Hat Logical Volume Manager to be installed on the disk partitions, as
indicated by the green box in Figure 3-9 on page 38. Select Done to complete the RAID
partition configuration.
Figure 3-9 Partition Template: Linux Basic
9. Repeat the process for the third and fourth drives: click the drive icons to select them, then
click Create RAID Partition. When the second partition appears in the Advanced Storage
Groups and Partitions control, select RAID-1 from the pull-down menu, leave the LVM
box cleared, and click Done to complete the process. For the second RAID partition, you
are not offered a partition template, but otherwise all the other choices remain the same for
the second pair of drives. The resulting configuration should show two 1000 GB RAID-1
partitions configured, as shown in Figure 3-10 on page 39.
Figure 3-10 Configuring the 2nd RAID partition
10. For the networking options, several different choices must be made. The first choice,
Public Bandwidth, is presented under each of three different categories: Limited,
Unlimited, and Private Network Only. In most circumstances, the nodes of an IBM
Spectrum Virtualize cluster are located in the data tier of a multitier virtual data center
architecture and, as a result, are not connected to the public network at all. See
Figure 3-11 on page 40.
Figure 3-11 Networking options
When a server is connected to the public network, outgoing network traffic is metered and
is subject to extra charges if the monthly data transfer exceeds the no-charge 500 GB
monthly transfer volume. Additional data volume can also be purchased for an extra
charge. The Unlimited section has an option for paying a fixed fee for unmetered
bandwidth. There are also more options in IBM Cloud networking for Bandwidth
Pooling: the bandwidth allocations for multiple servers can be pooled and used by a
single server.
The full discussion of this feature is beyond the scope of this section. However, if an internet
VPN (a persistent bidirectional IPsec tunnel, not simply a client VPN to access the
environment for management) is to be used for replication with a remote site, the
implementer should consider pooling the bandwidth of the Spectrum Virtualize servers
with the network gateway (that is, a Virtual Router Appliance) required for an internet VPN.
With bandwidth pooling, the 500 GB per month of each of the cluster servers and the
network gateway server can be used by the network gateway before any excess
bandwidth charges accrue.
For this section, we selected the no-charge 500 GB bandwidth option. As of this writing,
you select the Private Network Only option, and you only have options for the number and
speed of connections on the private network. Server network interfaces can be selected as
either single or dual NICs.
If the server is enabled for both public and private networks, the default selection is a single
1 Gbps network port on both the Public and Private networks, or just a single 1 Gbps port on
the Private network if Private Network Only is selected. Dual network connections are
available for both 1 Gbps and 10 Gbps Ethernet. The dual connections can be selected in
HA/Bonded mode or Dual Unbonded. IBM Spectrum Virtualize requires Dual Unbonded
10 Gbps connections.
At the time of initial publication of this document, the IBM Spectrum Virtualize for Public
Cloud installation procedure required access to the internet to install the software from an
internet-based install server. These interfaces then needed to be protected with a firewall
or disconnected from the public network.
11. Currently, it is recommended that the implementer select the Private Network Only option,
which requires a selection of 0 GB public bandwidth and provides a reduced set of
selection options for private network connections only (see Figure 3-12 on page 42).
This means that the installation command must be run from a machine with access to the
IBM Cloud private network. This can easily be accomplished by configuring the
implementer’s account with client VPN access and installing it on the implementer’s
notebook.
Alternatively, a bare-metal or virtual server can be provisioned in addition to the Spectrum
Virtualize nodes and the installation can be started from that host. Getting to that host can
be done again by using the implementer’s machine with the VPN client or the installation
host can have a public interface that is accessible from the internet.
Figure 3-12 Network Options
12. After the networking interface selection, in most cases no further selections are required.
In most data centers, all servers are deployed with redundant power supplies and there is
no option to select or deselect them. However, if a redundant power supply appears as an
option for your server, it should be selected, as shown in Figure 3-13.
13. After completing the disk configuration, none of the subsequent additional service options
are relevant for the IBM Spectrum Virtualize servers. However, in the service add-ons it is
important to modify the response to the automatically included Cloud monitoring.
Automated Notification should be selected rather than Automated Reboot as the
response to a failed monitor detection.
If public network interfaces are provisioned on your server, the monitoring is performed on
the public interface. The current recommendation is to configure the Spectrum Virtualize
nodes with private-only interfaces. However, this information might be useful for other
hosts that are configured in the IBM Cloud account.
Note: Disabling or firewalling the public interface could result in your server being
rebooted when the server stops responding to ping.
By default, the Cloud Account Master Account email is notified when the host fails to
respond to a ping. Cloud account IDs to be notified can be added or deleted under the
Portal Accounts →Manage →Subscriptions option after the server is provisioned (see
Figure 3-14).
14. When all options are selected, click ADD TO ORDER. An order validation window is briefly
displayed. If the order validates, a new page for order completion is opened. If the
validation finds missing or inconsistent server specifications, the order page is displayed
with a message identifying the problematic parameters. When the order verifies, the
Check Out window is displayed.
Some additional specifications are required before the order can finally be submitted. See
Figure 3-15.
Figure 3-15 Checkout window
15. The Checkout page provides cost elements broken out for the servers being ordered. On
the right side of the page, check boxes must be selected to accept the terms and
conditions for the IBM Cloud Master Services agreement and for any third-party licensed
components included in the Cloud billing.
In the case of the IBM Spectrum Virtualize servers, the licensing terms for the Red Hat
operating system must be accepted. This is shown in Figure 3-16. Before the order can be
completed and submitted for provisioning, some further specification is required.
16. In this example, four servers are being provisioned in a single order. The checkout page
requires specification of host and domain names, VLANs, and optional SSH keys for server
logon. In this case, the servers were ordered on an IBM Cloud account in a data center
where the account had not previously provisioned servers or requested pre-provisioning of
VLANs. Thus, fields for selecting the Front-End VLAN (Public Network) and Back-End
VLAN (Private Network) for each server were not offered.
If IBM Spectrum Virtualize was being ordered for an account where servers had already
been provisioned and placed on existing VLANs, input fields for specifying which of the
existing account VLANs the servers should be placed on would have been included on
the submission form.
For each server, a host name and domain name must be specified. The domain name does
not have to exist or have been previously registered. The host and domain names are
associated with the private network primary subnet IP addresses assigned to the hosts on
the IBM Cloud internal network, but they are not forwarded or made part of external name
resolution unless the user registers them (see Figure 3-17).
17. When all information has been completed, submit the order with the Submit button in the
right costs panel, as shown in Figure 3-18.
18.Following the order submission, a ticket is opened for the provisioning request. The status
of the server provisioning can be monitored through the account Devices menu as shown
in Figure 3-19, Figure 3-20, Figure 3-21, and Figure 3-22 on page 48.
3.2 Provisioning IBM Cloud Block Storage
This section covers provisioning of IBM Cloud Block Storage for our scenario.
With the Performance Storage offering, you can select the wanted storage volume size and
then separately select the number of IOPS entitled on the volume. IOPS are provisioned in
increments of 100 and can range from as low as 100 to as high as 48,000 per single volume.
However, the complete range of IOPS values is not available for all volume sizes.
Figure 3-22 shows the ranges of available IOPS for each of the defined LUN volume sizes.
Smaller volumes have a lower maximum IOPS that can be provisioned. Conversely,
larger volumes have a higher minimum IOPS that can be provisioned. The gray areas in
Figure 3-22 indicate unsupported volume size and IOPS combinations.
The green areas indicate volume size and IOPS ranges that are only available in the newer,
higher-performance storage data centers. The numbers in the cells indicate the equivalent I/O
density of the volume size/IOPS combinations. As a rule, any I/O density greater than or equal
to 2 is implemented on SSD and has a lower latency than storage with an I/O density lower than 2.
Endurance storage offers the same set of predefined volume sizes as Performance Storage.
With Endurance storage, volumes are provisioned in one of four Storage Tiers, which are
defined by their I/O density: 0.25, 2.0, 4.0, and 10.0 IOPS per GB, with the first tier on
spinning disk only and the remaining tiers on SSD.
With the storage tiers defined in IOPS per GB of capacity, the IOPS delivered on a given LUN
depends on the size of the LUN and storage tier in which it is provisioned.
Figure 3-23 shows the IOPS delivered for each of the standard LUN sizes and storage tiers.
As described previously, the green cells indicate combinations of LUN size and storage tier
that are only available in the high-performance storage IBM Cloud data centers.
In planning for your IBM Spectrum Virtualize deployment, the implementer should have a
target capacity and anticipated I/O storage density in mind from the customer requirements. In
a typical customer environment, the data center has multiple tiers of storage and a usage
profile defining the percentage of total capacity in each tier.
An example profile of storage classes and customer allocation is shown in Table 3-1.
Table 3-1   Example customer storage class profile
Class   Capacity   I/O density   Target latency
A       2 TB       8 IOPS/GB     < 1 ms
B       20 TB      4 IOPS/GB     < 2 ms
C       50 TB      2 IOPS/GB     < 5 ms
So, for the example above, four 500 GB LUNs, each delivering 4,000 IOPS and combined in
one disk group, provide 2 TB and 16,000 IOPS, which is the 8 IOPS per GB storage density
required for class A. Alternatively, eight 250 GB volumes, each delivering 2,000 IOPS, would
similarly meet the customer class A space and performance requirement. Where exact
multiples of four disks do not meet requirements, multiples of two disks can also be used. Disk
groups should not be provisioned with odd numbers of disks in the group.
Before provisioning the storage for your implementation, you should determine the appropriate
number, size, and IOPS for the volumes in your disk groups. From the tables of available LUN
sizes and IOPS provisioning listed above, it should be clear that Performance Storage provides
much greater flexibility for selecting both a capacity and IOPS per LUN.
However, if the customer only needs storage at a specific storage tier, Endurance storage can
provide a simpler solution. Both offerings can be used by IBM Spectrum Virtualize
interchangeably. Recognize, too, that IBM Spectrum Virtualize aggregates both the capacity
and the IOPS of the LUNs that are configured together in a disk group.
3.2.2 Provisioning Block Volumes
This section walks through the IBM Cloud Portal options and screens used to provision block
storage volumes.
1. From the IBM Cloud Portal home window, select and click Storage, as shown in
Figure 3-24.
2. On the Block Storage window, a list of all the storage volumes already allocated on the
account is shown. Click Order Block Storage, as shown in Figure 3-25.
3. In the Order Block Storage control, click the selection list in the Select Storage Type field.
For this example, we are using Performance storage. Endurance storage is also an
option. Portable Storage is not recommended and is not available in newer data centers.
Select Monthly billing unless you are planning to use the storage for a short time only.
Hourly billing allows storage to be deprovisioned without being charged through the end of the
month, but it is more expensive than monthly billing if the storage is used or provisioned for a
full month. There might be scenarios where this is desirable, but you should be aware of the
higher costs if you use it (see Figure 3-26).
4. When prompted, select the data center where the storage should be provisioned. This
must be the same data center where your IBM Spectrum Virtualize servers are
provisioned. Note the asterisks next to some data center names. These are the data
centers where the high-performance storage options are deployed: storage with the
capacity and performance options indicated in the green cells of the tables in the Cloud
Storage overview (see Figure 3-27).
Figure 3-27 Select the data center where the storage should be provisioned
5. Select the size of the volume to provision. Note the range of minimum and maximum IOPS
that are available for the size of the volume you have selected. Ensure the IOPS level
required is available for the size being selected. If the IOPS required is not available,
reconsider the number, size, and IOPS for the volumes intended for your disk group (see
Figure 3-28 on page 54).
Figure 3-28 Select Storage Size
6. Enter the requested IOPS for the volume. Any value between the minimum and maximum
IOPS allowed for the selected volume size can be entered, provided it is evenly divisible
by 100. The quantity of snapshot space should always be left at or set to zero. Snapshot
space is for Cloud Storage functionality that is not used here and is incompatible with IBM
Spectrum Virtualize. Select Linux formatting for the storage volume, as shown in Figure 3-29
on page 55.
Figure 3-29 Select IOPS
7. After entering all the specifications for the storage, an order confirmation window prompts
you to confirm your selections and acceptance of the Cloud Services terms and
conditions, as shown in Figure 3-30.
8. Upon placing the order, your provisioning begins. Usual provisioning time is approximately
5 minutes. When provisioning completes, the ordered volume displays in the device list
under the Storage option on the portal home window (see Figure 3-31).
You must complete the above procedure for each storage LUN to be incorporated in your
disk group. At this point, there is no way to request provisioning of multiple identical LUNs.
9. In Figure 3-32 on page 57, the specification window for Endurance storage is shown. In
the case of Endurance storage the location, size, storage tier, and format type are all
specified in a single control. By selecting a Storage Tier based on storage density, the
number of IOPS delivered on the LUN is a function of the size of the LUN, as shown in
Figure 3-23 on page 49.
Figure 3-32 Order Block Storage
3.3 IBM Spectrum Virtualize networking considerations
The automated installation procedure automatically creates the appropriate portable subnet
on the private VLAN on which the servers for the IBM Spectrum Virtualize cluster have been
provisioned. For each server in the cluster, five unique IP addresses are allocated and
configured onto the server on the portable private subnet. In addition, a sixth IP address is
configured on each server, but it is a cluster address with the same address value shared by
all the nodes in the cluster.
For reinstallations, the semi-automated or manual install must be used, unless the creation of
a new portable subnet is needed or wanted. If the IPsec tunnel between on-premises and IBM
Cloud was configured for the original installation and the portable private subnet that is
associated with that installation, a new portable private subnet requires reconfiguration of the
Virtual Router Appliance and the on-premises device that serves as the other end of the IPsec
tunnel.
If the semi-automated or manual installation procedures are used instead of the automated
install, the IT manager needs to request the portable subnet through the cloud portal (unless
this is a reinstallation) and allocate the needed IP addresses. In this case, “allocating” an
address simply means choosing which IP addresses in the subnet range are assigned to each
of the five addresses needed for each node and the single shared address for the cluster.
Address allocation and inventory keeping for portable subnets is not performed or maintained
by the cloud portal. The cloud network routers accept and route whichever addresses are
configured on the server NICs.
When VLANs are provisioned on the cloud account, they are initially set up with only a single
subnet, called the primary subnet. Addresses on the primary subnet can only be assigned
by the cloud provisioning engine. When servers are provisioned with multiple, unbonded
network interface cards (NICs), only the first NIC on each network (Public or Private) is
assigned an IP address. If additional IP addresses are needed for the server (for example, IP
addresses for a second NIC, or IP addresses for an HA cluster or VMs running on the host), a
portable or secondary subnet must be allocated on the VLAN.
For IBM Spectrum Virtualize, five IP addresses are needed for each host node in the IBM
Spectrum Virtualize cluster, plus a sixth address for the cluster that is used by all nodes.
The following section shows how to allocate the portable private subnet required for allocating
and assigning these addresses. This procedure is only required when the manual or
semi-automated installation procedure is used for IBM Spectrum Virtualize. The fully automated
procedure includes logic to allocate the subnet.
Complete the following steps:
1. From the Cloud Portal home window, select the Network option, as shown in Figure 3-33.
2. At the bottom of the Network options page, select Order under the Subnets/IPs section
(see Figure 3-34).
3. The additional IP addresses for the IBM Spectrum Virtualize nodes are allocated on the
private network. Therefore, a portable subnet is needed on the private network. Select
Portable Private, as shown in Figure 3-35.
4. For this example, we have allocated a 64-address subnet. The automated IBM Spectrum
Virtualize installation script also always creates a 64-address subnet by default. This
accommodates even an eight-node cluster, which requires 41 addresses. Additional
addresses could potentially be allocated for storage clients to access IBM Spectrum
Virtualize iSCSI targets on the same subnet, eliminating the need for routing between
clients and IBM Spectrum Virtualize (see Figure 3-36).
After the number of IP addresses is selected, the user is presented with a list of Private
Network VLANs already provisioned on the account. In this example, the demonstration
cloud account was no longer available. Therefore, the data center and VLAN number are
different from the other examples. However, you are presented with a list of all VLANs
available on the account, including those in other data centers.
5. Select the private VLAN where your IBM Spectrum Virtualize servers have been
provisioned, as shown in Figure 3-37.
6. A justification questionnaire is required for the subnet. Complete the form with information
appropriate to your intended use. Additional addresses are a semi-constrained resource
on the IBM Cloud, and the information allows automated planning processes to determine
when allocated address spaces can be deprovisioned (see Figure 3-38).
7. Accept the Cloud Agreement terms and complete the order. The subnet is available
immediately, as shown in Figure 3-39.
A Network Gateway, sometimes called a Vyatta, is only required if the IBM Spectrum
Virtualize is to be configured for replication through an internet VPN. If your IBM Spectrum
Virtualize is for a single site only, is replicating with an IBM Spectrum Virtualize in another IBM
Cloud data center, or a private Direct Link connection is being provisioned between the
customer network and the IBM Cloud, then a Network Gateway is not required.
To provision a network gateway, complete the following steps:
1. Select the Network menu from the portal home window and select Network Appliances,
as shown in Figure 3-40.
2. A list of already provisioned appliances is displayed. Normally one would not expect any to
be listed, but if the customer cloud account has been zoned into multiple firewalled
separate VLANs, there might be multiple already provisioned. Select Order Gateway from
the upper right corner of the page (see Figure 3-41).
3. The Network Gateway is just another instance of a cloud bare-metal server. The same list
of servers available for provisioning, as was provided for the IBM Spectrum Virtualize node
selection, is presented (see Figure 3-42).
4. For the gateway, a dual-processor server is advised when an IPsec VPN will be
terminated on the gateway. 64 GB of RAM is suggested for an IPsec VPN (see
Figure 3-43).
5. Only two drives are required, configured in RAID1 with Linux Basic partition map, as
shown in Figure 3-44 on page 67.
Figure 3-44 Disk Controller 1
6. A Network Gateway must be configured with both Public and Private networks. In
Figure 3-45, a 1 Gbps Redundant (Bonded) network connection is selected. Depending
on the intended replication data volume, a 10 Gbps connection might be wanted, but a
redundant connection should be selected. This differs from the configuration used with the
IBM Spectrum Virtualize cluster hosts.
7. Complete the server configuration and verify the order (see Figure 3-46).
8. On the order completion page, you are required to specify VLANs that the gateway will
manage. The pull-down selection for the back-end and front-end VLANs allows either
automatic assignment or selection from one of the existing VLANs on the account. The
Network Gateway should be provisioned after the IBM Spectrum Virtualize cluster hosts
have completed provisioning.
Select the back-end and front-end VLANs on which the IBM Spectrum Virtualize servers
were placed when they provisioned. This action makes the Network Gateway server the
default router for all subnets on those VLANs. This has several effects on the network
environment that are explained in the Network Gateway configuration section of this
document.
Assign a host and domain name to the Network Gateway. These names resolve in the IBM
Cloud internal network, but are not published to any externally visible domain name
servers. They are mainly used for naming within the IBM Cloud portal inventory and device
listing screens (see Figure 3-47).
9. Complete the order for the gateway, as shown in Figure 3-48.
Servers might take as long as four hours to complete provisioning; however, simple, small
servers, such as the Vyatta, often require less than one hour.
Chapter 4. Implementation
This chapter describes how to implement an IBM Spectrum Virtualize for Public Cloud
environment and provides detailed instructions about the following topics:
Downloading the One-click installer
Fully Automated installation
Semi Automated installation
Configuring Spectrum Virtualize
Configure Cloud quorum
Configure the back-end storage
Configuring Call Home with CLI
For more information about the Call Home configuration, see Chapter 6, “Supporting the
solution” on page 135.
When the bare-metal servers are ready, you can install IBM Spectrum Virtualize for Public
Cloud by using the One-click installation methods that are described in this section. One-click
cluster deployment is a tool that helps the user install IBM Spectrum Virtualize for Public
Cloud automatically.
Both modes install IBM Spectrum Virtualize for Public Cloud and create the cluster
automatically.
The fully automated mode automatically determines the initial configuration parameters for
configuration of IBM Spectrum Virtualize for Public Cloud, and automatically orders the
required IP addresses from IBM Cloud. The semi-automated mode requires that the user
orders the IP addresses (or uses an existing subnet) and generates and edits a configuration
file to provide the installer script the needed parameters.
For each installation method, both GUI and command-line interface (CLI) are shown for
comparison. Some common steps must be done, regardless of the installation method that is
used. Figure 4-1 shows some common steps that must be completed.
The first step for installing IBM Spectrum Virtualize for Public Cloud is to download the
One-click installer, as described next.
In addition to the license, you must download the One-click installer, which is an application
that runs on a local machine with internet access. This application facilitates the installation of
the IBM Spectrum Virtualize for Public Cloud software on multiple bare-metal servers to
create the system. You must download the One-click installer that is based on the operating
system of the machine that you are using to run the installation. The One-click installer is
available for Red Hat Enterprise Linux 7.x (RHEL 7.x), macOS, and Windows.
4. To decompress the package, use the Winzip application on your system.
IBM Cloud includes different types of API keys. Here, we use an infrastructure key.
The API Key is used during the installation to access the API for the following purposes:
Discover the passwords of the servers.
Allocate a range of IP addresses for the cluster.
Configure the storage at postinstallation.
Tip: It is best to generate the API key and perform the installation as a user without
purchasing power to protect the client from the small risk of the installation script making
purchases in error.
To create your API key, see this IBM Cloud web page.
Important: If your environment implements a VPN firewall that is supplied by IBM Cloud
IaaS (Vyatta), do not use this procedure. At the time of this writing, the Fully
Automated procedure cannot interact with the Vyatta to make the subnet that the script
automatically allocates by using the API accessible. Instead, use the Semi
Automated procedure that is described in 4.1.3, “Semi Automated installation” on page 78.
To run the Fully Automated installation, complete the following steps:
1. On your notebook or server:
a. For RHEL and MacOS hosts, change to the one-click-install* directory and run the
command, as shown in Example 4-1.
b. For Windows host, change to the one-click-install* directory, and run the command
as shown in Example 4-2.
Note: The bm_server_name1/2 that is shown in Example 4-1 or Example 4-2 is the
name that you gave to your IBM Spectrum Virtualize nodes when they were provisioned, as
described in Chapter 3, “Planning and preparation for the IBM Spectrum Virtualize for Public
Cloud deployment” on page 31. Use the host name only, not the fully qualified domain name
(FQDN).
2. During the installation process, the output shows your nodes’ nonce (number occurring
once). Follow the steps that are described in the “ACTION REQUIRED” section to activate
your nodes, as shown in Example 4-3.
The SV_Cloud cluster will be deployed on: itso-dal10-sv-n1 itso-dal10-sv-n2
server name: nonce
# itso-dal10-sv-n1: D455F4
# itso-dal10-sv-n2: D45D14
ACTION REQUIRED
1.Please use server's nonce to get USVNID from
https://www.ibm.com/support/home/spectrum-virtualize.
2.Put all USVNID files (such as D455F4.txt) into current working directory:
C:\Users\IBM_ADMIN\Desktop\one-click-install-WIN(2)
3. To complete the required action that is described in Figure 4-1 on page 72, see this page
of the IBM Support website (log in required). Complete the following steps:
a. Download the activation key for each node in the IBM Spectrum Virtualize cluster, as
shown in Figure 4-3 on page 77.
Figure 4-3 Downloading activation key example
Note: If you do not save the activation keys in this manner, the Fully Automated script
prompts you to confirm and does not continue until the keys are saved in the expected
directory.
4. After approximately 20 minutes, an IBM Spectrum Virtualize for Public Cloud cluster is
ready. The deployment script saves the cluster configuration report in the file report.json
in the One-click installation working directory.
5. You can skip ahead to the section Log in to Cluster to log in to the IBM Spectrum
Virtualize for Public Cloud cluster, and then proceed with Configure Backend Storage.
2. The user and password can be collected from the window by expanding the device in the
device list, as shown in Figure 4-5. You see the window that is shown in Figure 4-6 on
page 79. Click show password to make it visible.
Figure 4-6 User and password example
3. Allocate IP addresses for use by Spectrum Virtualize from the portable private subnet that
is in the same VLAN as the private network for the bare-metal servers. The required number
of IP addresses for the installation is five IPs per node, plus one cluster IP address. For
more information about ordering and provisioning a portable private subnet, see 3.3, “IBM
Spectrum Virtualize networking considerations” on page 57.
Example 4-5 on page 80 shows a new yaml file that was completed with your IP addresses.
This example is for an IBM Spectrum Virtualize cluster with only two nodes.
Note: The password and private key parameters are mutually exclusive.
Example 4-5 yaml file example
# version=8.1.3.0-180512_1402
cluster:
  ipAddress: 10.183.120.10        # cluster ip
  gateway: 10.183.120.1
  netmask: 255.255.255.192
site1:
  BareMetalServers:
  - servername: svcln1            # the name showed in cloud web portal
    publicIpAddress: 169.47.145.151
    privateIpAddress: 10.183.62.215
    user: root                    # username with root privilege
    password: G4BbwxXV            # login password for user.
    # privateKey: C:\Users\ADMIN\.ssh\bm01_private_key
    serial: SL01EBEO              # Bare Metal server serial number
    id: 1                         # select the SpecV node id for this node
    serviceIp:
      netmask: 255.255.255.192
      ipAddress: 10.183.120.11
      gateway: 10.183.120.1
    portIp:
    - netmask: 255.255.255.192
      ipAddress: 10.183.120.20
      gateway: 10.183.120.1
    - netmask: 255.255.255.192
      ipAddress: 10.183.120.21
      gateway: 10.183.120.1
    nodeIp:
    - netmask: 255.255.255.192
      ipAddress: 10.183.120.15
      gateway: 10.183.120.1
    - netmask: 255.255.255.192
      ipAddress: 10.183.120.16
      gateway: 10.183.120.1
  - servername: svcln2
    publicIpAddress: 169.47.145.154
    privateIpAddress: 10.183.62.194
    user: root                    # username with root privilege
    password: U8qp9t5w            # login password for user
    # privateKey: C:\Users\ADMIN\.ssh\bm02_private_key
    serial: SL019TYC              # Bare Metal server serial number
    id: 2
    serviceIp:
      netmask: 255.255.255.192
      ipAddress: 10.183.120.12
      gateway: 10.183.120.1
    portIp:
    - netmask: 255.255.255.192
      ipAddress: 10.183.120.22
      gateway: 10.183.120.1
    - netmask: 255.255.255.192
      ipAddress: 10.183.120.23
      gateway: 10.183.120.1
    nodeIp:
    - netmask: 255.255.255.192
      ipAddress: 10.183.120.17
      gateway: 10.183.120.1
    - netmask: 255.255.255.192
      ipAddress: 10.183.120.18
      gateway: 10.183.120.1
Note: If your configuration has four nodes, edit your yaml file and add two more server
sections for the additional nodes.
5. Save your yaml file by using a useful name; for example, sample.yaml.
6. Run the configuration file validation command from the directory where you saved your
installation files, as shown in Example 4-6.
7. After the validation process is complete, run the command to start the installation process
that is shown in Example 4-7.
Note: As part of the installation procedure, you are presented with activation codes for
each node. In this example, these codes are D2F9D8 and 3A0EC0. It is required as part of
the installation process to download the activation keys per the instructions that are
presented in the command output and store them in the same directory as the installation
script, as explained in 4.1.2, “Fully Automated installation” on page 74.
At the time of this writing, the web page to download the key files works in Internet Explorer
only. In Firefox and Chrome, pasting the link location in the address bar (and removing the
unsafe: prefix from the URL) allows you to view the text file in the browser. Pasting it into a
text file that is named NONCE.txt (replace NONCE with the string that is provided for the node)
is sufficient to proceed.
ACTION REQUIRED
1.Please use server's nonce to get USVNID from
https://www.ibm.com/support/home/spectrum-virtualize.
2.Put all USVNID files (such as D2F9D8.txt) into current working
directory:/Users/jfincher/Downloads/SV_Cloud_Installer.
Download completed, will start installing soon.
Installing:
Progress: |****************************************| 100% Complete
Figure 4-9 Logging in by using GUI
2. You are redirected to the Welcome window. Click Next, as shown in Figure 4-10.
3. After the License Agreement window, you are redirected to the change password window,
as shown in Figure 4-11.
4. You can change your cluster default name, as shown in Figure 4-12. Then, click Apply
and Next.
5. Enter your capacity license in accordance with your IBM agreement, as shown in
Figure 4-13. Then, click Apply and Next.
6. Set your date and time in accordance with your specific policy. In our example, we
configured the date and time manually by using our environment's time zone, as shown in
Figure 4-14. Then, click Apply and Next.
We suggest the use of Network Time Protocol (NTP), configuring it as shown in
Figure 4-14.
7. In the next windows, you are prompted to enter location information about your IBM
Spectrum Virtualize cluster and some contact information, as shown in Figure 4-15 and
Figure 4-16.
8. You are prompted to configure the Inventory Setting, as shown in Figure 4-17.
We suggest setting Inventory Reporting and Configuration Reporting to ON to help the IBM
Support Center better support you if issues occur and debugging is needed.
9. You are prompted to configure Call Home (SMTP server) alerts and Support Assistance,
as shown in Figure 4-18 and Figure 4-19 on page 90.
Figure 4-19 Support Assistance window
Note: For more information about how to configure a Remote Support Proxy, see 4.4,
“Configuring Remote Support Proxy” on page 115.
A summary of your configuration is shown. Your cluster setup completes and you are
redirected to your IBM Spectrum Virtualize GUI Dashboard, as shown in Figure 4-21,
Figure 4-22, and Figure 4-23 on page 92.
Figure 4-23 Spectrum Virtualize Dashboard example
Your IBM Spectrum Virtualize cluster is now complete. Configuring the Cloud quorum is
described next.
The IP quorum application is required for two- and four-node systems in IBM Spectrum
Virtualize for Public Cloud configurations. In two-node systems, the IP quorum application
maintains availability after a node failure. In systems with four nodes, an IP quorum
application is necessary to handle other failure scenarios. The IP quorum application is a
Java application that runs on a separate bare-metal or virtual server in IBM Cloud.
There are strict requirements on the IP network for the use of IP quorum applications. All IP
quorum applications must be reconfigured and redeployed to hosts when certain aspects of
the system configuration change. These aspects include adding or removing a node from the
system, when node service IP addresses are changed, changing the system certificate, or
when an Ethernet connectivity issue occurs.
An Ethernet connectivity issue prevents an IP quorum application from accessing a node that
is still online.
To view the state of an IP quorum application in the management GUI, select Settings →
System →IP Quorum, as shown in Figure 4-24 on page 93.
Figure 4-24 IP Quorum example from the GUI
Even with IP quorum applications on a bare-metal server, quorum disks are required on each
node in the system. In a cloud environment where IBM Spectrum Virtualize connectivity with
its back-end storage is iSCSI, the quorum disks cannot be on external storage or internal
disk as in IBM SAN Volume Controller or IBM Storwize. Therefore, they are automatically
allocated on the bare-metal server internal disks.
The use of the IBM Spectrum Virtualize command lsquorum shows only the IP Quorum.
For stable quorum resolutions, an IP network must meet the following requirements:
Provide connectivity from the servers that are running an IP quorum application to the
service IP addresses of all nodes.
The network must also deal with possible security implications of exposing the service IP
addresses because this connectivity can also be used to access the service assistant
interface if the IP Network security is configured incorrectly.
Port 1260 is used by IP quorum applications to communicate from the hosts to all nodes.
The maximum round-trip delay must not exceed 80 milliseconds (ms), which means 40 ms
in each direction.
A minimum bandwidth of 2 MB per second must be guaranteed for node-to-quorum traffic.
For more information about IP quorum configuration, see IBM Knowledge Center.
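As a rough outline of the deployment flow (the addresses and paths are placeholders, and details can vary by code level), the IP quorum application is generated on the cluster, copied to the quorum server, and started with Java:

# On the IBM Spectrum Virtualize cluster, generate the IP quorum application.
mkquorumapp
# Copy the generated application from the cluster to the quorum server (example addresses and path).
scp superuser@10.183.120.10:/dumps/ip_quorum.jar /opt/ipquorum/
# On the quorum server, start the application; it must remain running.
java -jar /opt/ipquorum/ip_quorum.jar &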
We assume that you ordered the back-end storage as described in Chapter 3, “Planning and
preparation for the IBM Spectrum Virtualize for Public Cloud deployment” on page 31.
You can obtain the target IP address of the storage that you purchased at this web page, as
shown in Figure 4-25.
To configure your IBM Spectrum Virtualize back-end storage with GUI, complete the following
steps:
1. In the management GUI, navigate to the Pools → External Storage menu, as
shown in Figure 4-26.
– Node port ID: Your IBM Spectrum Virtualize node port ID. Port IDs 1 and 2 are
available. It is recommended that both be used in a round-robin fashion to get better
workload balance and redundancy for each LUN (MDisk).
4. Select the storage you want to configure by right-clicking it and selecting Include, as
shown in Figure 4-29.
Now your back-end storage configuration is completed and you can create pools, volumes,
and hosts as you do with any Spectrum Virtualize installation. For more information, see IBM
Knowledge Center.
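For reference, the same external iSCSI storage discovery can be driven from the CLI. The following is only a sketch under assumed values: the target IP address, user name, and authentication secret are placeholders for the credentials shown on the IBM Cloud Block Storage details page, and the exact command options should be verified against your code level.

# Discover the IBM Cloud Block Storage iSCSI target from node port 1 (example values).
detectiscsistorageportcandidate -srcportid 1 -targetip 10.2.174.92 -username IBM01SEV1234567_1 -chapsecret myCHAPsecret
# List the discovered candidates, then add the storage port that was found (example candidate ID 0).
lsiscsistorageportcandidate
addiscsistorageport 0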
The following elements are shown in Example 4-8:
username: The name that you specified in the MDisk configuration (IBM Cloud user name)
key: The same as you specified in MDisk configuration (IBM Cloud API Key)
IP: IP address of IP quorum server
ibmcustomer: Specifies the customer number that is assigned when a software license is
automatically added to the entitlement database
ibmcountry: Specifies the country ID used for entitlement and Call Home system
This command uses IBM Cloud APIs to get most of the information that is needed to enable
the email functions, such as the contact information and detailed address of the machine
(software data center).
The IP quorum server is chosen here because it must have public network access, and we
suggest configuring an SMTP server on it. If necessary, the chemailserver or
mkemailserver commands can be used after running this command to update the
configuration with another SMTP server or to add a new one.
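A minimal sketch of those commands follows; the IP address and port shown are placeholders:

# List the email servers that are currently configured
IBM_Spectrum_Virtualize::superuser>lsemailserver
# Add another SMTP server (placeholder address), or use chemailserver to modify an existing one
IBM_Spectrum_Virtualize::superuser>mkemailserver -ip 10.0.0.25 -port 25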
For more information about the Call Home configuration, see Chapter 6, “Supporting the
solution” on page 135 in this book.
We are assuming that an IBM Spectrum Virtualize cluster with two nodes is up and running.
To add two nodes to your cluster, complete the following steps:
1. Gather all of the information that is required for the sample yaml file as though you are
performing the semi-automatic installation procedure.
2. Use secure copy (scp, part of the SSH/SFTP suite of tools, such as PuTTY) to copy the
deploy_one_node.sh script to the servers that will hold the Spectrum Virtualize instances
to be added to the running cluster.
3. Open an SSH session to the bare-metal server by using an SSH client of your choosing.
4. Run the deploy_one_node.sh script, as shown in Example 4-9 on page 98.
Note: The following command arguments are available for the script to run in the order
indicated:
Service IP address
Service IP default gateway
Service IP subnet mask
Node IP 1 address
Node IP 1 gateway
Node IP 1 mask
Node IP 2 address
Node IP 2 gateway
Node IP 2 mask
Port ID of node IP 2 (normally the value 2)
Serial number
Server name
Node ID
All of these parameters are required for the script to successfully run the sntask initnode
command.
Tip: If the script fails to successfully run sntask initnode and you are prompted to add
the force flag to the command, you can use a text editor to add -f to the last command
in the script and re-run it so that it looks like the following example:
/usr/bin/sntask initnode -f -sip ${1} -gw ${2} -mask ${3} -nodeip1 ${4}
-nodegw1 ${5} -nodemask1 ${6} -nodeport1 ${7} -nodeip2 ${8} -nodegw2 ${9}
-nodemask2 ${10} -nodeport2 ${11} -serial ${12} -name ${13} -id ${14} ${15}
7. As described in Step 3 of 4.1.2, “Fully Automated installation” on page 74, use the nonce
command to get the activation key.
8. Use secure copy (scp, part of the SSH/SFTP suite of tools, such as PuTTY) to copy the
activation key to the node’s service IP upgrade directory, and then activate the node
software, as shown in Example 4-11.
Example 4-11 Activating the node
jfincher$ scp NONCE0.txt superuser@10.183.120.3:/upgrade
The authenticity of host '10.183.120.3 (10.183.120.3)' can't be established.
ECDSA key fingerprint is SHA256:4KDS/hL1/tDUdtK78SxdUMjjdp2WWPKwaTEXcMPg4lA.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.183.120.3' (ECDSA) to the list of known hosts.
Password:
NONCE0.txt
100% 446 2.1KB/s 00:00
jfincher$ ssh -l superuser 10.183.120.3
Password:
IBM_Spectrum_Virtualize::superuser>satask chvpd -idfile /upgrade/NONCE0.txt
IBM_Spectrum_Virtualize::superuser>Connection to 10.183.120.3 closed by remote
host.
Connection to 10.183.120.3 closed.
Tip: The superuser password for the node at this point is passw0rd.
9. Repeat steps 1 - 8 for the second node to add into the cluster.
10.Log in to your running IBM Spectrum Virtualize cluster and run the lsnodecandidate
command. You see that the two new nodes were configured into candidate state and
made visible to the cluster through their private IP links, as shown in Example 4-12.
The same check can be done by using the GUI, as shown in Figure 4-30.
11.From the GUI (see Figure 4-30), click Click to add to add the two nodes. You are
redirected to the next window, as shown in Figure 4-31.
Figure 4-31 Adding nodes example
12.You can now check that your new nodes are added to your cluster, as shown in
Figure 4-33.
Figure 4-33 Two nodes added example
13.Change the node names according to your naming convention by using the GUI, as shown
in Figure 4-34.
14.Configure your new node’s port IPs by using the cfgportip command, as shown in
Example 4-13 and sketched next.
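As a reference, a minimal sketch of the command form follows; the node name, IP addresses, and port ID are placeholder values, and the -storage yes parameter is assumed to mark the port IP as usable for back-end iSCSI storage traffic:

# Configure port 1 on the new node and enable it for storage traffic (placeholder values)
IBM_Spectrum_Virtualize::superuser>svctask cfgportip -node node3 -ip 10.183.120.25 -mask 255.255.255.192 -gw 10.183.120.1 -storage yes 1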
15.Authorize your back-end storage to the new nodes’ port IP addresses, as shown in
Figure 4-35 and Figure 4-36 on page 102, by using the IBM Cloud IaaS web portal. Repeat
the steps for all of the MDisks that you want to make visible to the new nodes.
Note: Each block storage LUN must be authorized by IP address. Each LUN can be
authorized to a single IP address per node only. The IP addresses that are used must
correlate to the same port number on each node in the cluster (that is, LUN 1 is authorized
to the IP addresses that correspond to port 1 on each node in the cluster).
16.Validate that the port IP addresses are configured for each of the nodes and that the
Storage Port IPv4 is listed as enabled for all IP addresses to be used to access storage.
The Port IP address configuration is shown in Figure 4-37.
Figure 4-37 Port IP address configuration from GUI
17.Find the iSCSI Qualified Name (IQN) that the IBM Cloud assigned to the IP address when
you authorized the IP address to the LUN. This information can be found in the block
storage LUN details, as shown in Figure 4-38 on page 103.
18.When you expand the capacity of your IBM Spectrum Virtualize system from two nodes to
four nodes, you have IBM Cloud storage that is managed by the two existing nodes. The
two new nodes cannot access this storage until you synchronize your user name,
password, and IQN in the IBM Spectrum Virtualize for Public Cloud software on each of
the new nodes. You now must run through the CLI procedure as shown in Example 4-14.
The required credentials can be obtained from the IBM Cloud IaaS web portal. If you do
not run this command, your MDisks are not accessible by the new nodes and they are in
degraded state because a fundamental requirement (except in stretch clusters and
HyperSwap configurations) of IBM Spectrum Virtualize is that all MDisks be visible to all
nodes in the cluster.
IBM_Spectrum_Virtualize:Cluster_10.183.120.10:superuser>svctask chiscsiportauth
-src_ip 10.183.120.23 -iqn iqn.2018-04.com.ibm:ibm02su1541323-i105805909 -username
IBM02SU1541323-I105805909 -chapsecret MrSL5DDey5vavQaU
IBM_Spectrum_Virtualize:Cluster_10.183.120.10:superuser>svctask
detectiscsistorageportcandidate -srcportid 2 -targetip 10.3.174.137
IBM_Spectrum_Virtualize:Cluster_10.183.120.10:superuser>addiscsistorageport 0
IBM_Spectrum_Virtualize:Cluster_10.183.120.10:superuser>svctask
detectiscsistorageportcandidate -srcportid 2 -targetip 10.3.174.138
IBM_Spectrum_Virtualize:Cluster_10.183.120.10:superuser>lsiscsistorageportcandidate
IBM_Spectrum_Virtualize:Cluster_10.183.120.10:superuser>addiscsistorageport 0
19.To check your MDisk’s connectivity, run lsiscsistorageport and lsmdisk, as shown in
Example 4-15.
IBM_Spectrum_Virtualize:Cluster_10.183.120.10:superuser>lsmdisk -delim :
id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_nam
e:UID:tier:encrypt:site_id:site_name:enclosure_id:distributed:dedupe:over_provisio
ned:supports_unmap
0:mdisk0:online:unmanaged:::500.0GB:00000000000000AA:controller0:600a09803830372f4
12449776c455a5500000000000000000000000000000000:tier_enterprise:no::::no:no:no:no
1:mdisk1:online:unmanaged:::500.0GB:00000000000000A9:controller0:600a09803830372f3
23f496e5353517900000000000000000000000000000000:tier_enterprise:no::::no:no:no:no
2:mdisk2:online:unmanaged:::250.0GB:0000000000000000:controller1:600a09803830446d5
25d4b744a2f566a00000000000000000000000000000000:tier_enterprise:no::::no:no:no:no
4.3 Configuring replication from on-prem IBM Spectrum
Virtualize to IBM Spectrum Virtualize for IBM Cloud
In this section, we describe how to configure replication from an on-prem solution, which can
be a Storwize or IBM SAN Volume Controller system, to an IBM Spectrum Virtualize for IBM
Cloud solution.
Our example uses a Storwize system in the on-prem data center and a four-node IBM
Spectrum Virtualize for IBM Cloud cluster as the DR storage solution.
The scenario we are describing uses IBM Spectrum Virtualize Global Mirror with Change
Volume (GM-CV) to replicate the data from the on-prem data center to IBM Cloud.
This implementation starts with the assumption that the IP connectivity between on-prem and
IBM Cloud was established through an MPLS or VPN connection. Because several methods
are available to implement the IP connectivity, this book does not consider that specific
configuration. For more information, contact your IBM Cloud Technical Specialist.
2. You are prompted to choose which copy group to use, as shown in Figure 4-40.
Figure 4-41 IBM Spectrum Virtualize for IBM Cloud configuration completed
4. Run the same configuration for the on-prem Storwize Storage system or IBM SAN Volume
Controller, as shown in Figure 4-42, Figure 4-43 on page 107, and Figure 4-44 on
page 107.
Note: The on-prem solution has a different GUI because it is running on an older IBM
Spectrum Virtualize software version than the version that is installed on the IBM Spectrum
Virtualize in IBM Cloud. For more information about supported and interoperability
versions, see the IBM interoperability matrix at this web page.
Figure 4-43 On-prem Copy Group example
5. Create a Cluster partnership between on-prem and IBM Spectrum Virtualize for IBM
Cloud from the on-prem GUI, as shown in Figure 4-45.
As you can see in Figure 4-47, the partnership is partially completed. You must complete
the partnership on the IBM Spectrum Virtualize for IBM Cloud GUI, as shown in
Figure 4-48 and Figure 4-49 on page 109.
Figure 4-49 Partnership example
7. In our example, we have an on-prem 100 GiB volume with its Change Volume (CV) that
must be replicated to a 100 GiB volume in the IBM Cloud that is defined in our IBM
Spectrum Virtualize for Public Cloud. The on-prem volumes are thin-provisioned, but this
is not a specific requirement; instead, it is a choice. The CV can be thin-provisioned or fully
provisioned, regardless of whether the master or auxiliary volume is thinly provisioned or
space-efficient.
The CV only needs to store the changes that accumulate during the cycle period and should
therefore use as little real capacity as possible (see Figure 4-51).
Figure 4-53 GM CV
10.Select the remote system (as shown in Figure 4-54), and select the volumes that must be
in relationship (as shown in Figure 4-55 on page 111).
Figure 4-55 Master and auxiliary volumes example
In our example, we choose No, do not add a master change volume at this time, as
shown in Figure 4-56.
12.Add the CV volumes to your relationship on both sides, as shown in Figure 4-58,
Figure 4-59, and Figure 4-60 on page 113.
Figure 4-60 Add change volume to IBM Cloud site
13.Start your relationship from the on-prem site, as shown in Figure 4-61.
14.Create a GM consistency group and add your relationship to it, as shown in Figure 4-62 on
page 114 and Figure 4-63 on page 114.
Now you can see the status of your consistency group, as shown in Figure 4-64 and
Figure 4-65 on page 115.
Figure 4-65 Copying status
In our example, we show the status from IBM Spectrum Virtualize for IBM Cloud GUI.
When the copy approaches completion, the CV algorithm starts to prepare a freeze time in
accordance with the cycling windows as defined in Figure 4-57 on page 112. When your copy
reaches 100%, a FlashCopy is taken from the Auxiliary Volume to the Auxiliary-CV to be used
if a real disaster or DR test occurs. At 100%, the status is “Consistent Copying”, as shown in
Figure 4-66.
This example shows how to configure a GM-CV relationship from an on-prem solution to an
IBM Spectrum Virtualize for IBM Cloud solution.
The steps that were shown in this example used the GUI, but they can also be run with the
CLI, as sketched next.
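For reference, a minimal CLI sketch of the same GM-CV configuration follows. All system, volume, and relationship names and the IP address are placeholders, and the sketch assumes that the volumes and change volumes described earlier already exist. Run the partnership command on both systems and each change-volume command on the system that owns that volume:

# Create the IP partnership to the remote cluster (run on both systems, placeholder values)
svctask mkippartnership -type ipv4 -clusterip 10.1.1.10 -linkbandwidthmbits 100 -backgroundcopyrate 50
# On the on-prem (master) system: create the GM-CV relationship between the two 100 GiB volumes
svctask mkrcrelationship -master onprem_vol01 -aux cloud_vol01 -cluster SV_Cloud_Cluster -global -cyclingmode multi -name gmcv_rel01
# Attach the change volumes (-masterchange on the master system, -auxchange on the auxiliary system)
svctask chrcrelationship -masterchange onprem_vol01_cv gmcv_rel01
svctask chrcrelationship -auxchange cloud_vol01_cv gmcv_rel01
# Create a consistency group, add the relationship to it, and start the replication
svctask mkrcconsistgrp -name gmcv_cg01 -cluster SV_Cloud_Cluster
svctask chrcrelationship -consistgrp gmcv_cg01 gmcv_rel01
svctask startrcconsistgrp gmcv_cg01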
For more information about how to manage Storwize, IBM SAN Volume Controller, or IBM
Spectrum Virtualize copy functions, see the following publications:
Implementing the IBM Storwize V7000 with IBM Spectrum Virtualize V8.1, SG24-7938
IBM System Storage SAN Volume Controller and Storwize V7000 Best Practices and
Performance Guidelines, SG24-7521
Implementing the IBM System Storage SAN Volume Controller with IBM Spectrum
Virtualize V8.1, SG24-7933
The first step is to obtain the remote support proxy software from your product support page.
At the time of this writing, this code is under the Others category, as shown in Figure 4-67.
After the code is downloaded to the administrator’s notebook, you must upload the file to the
server on which the proxy is to be installed. This process can be done by using the scp
command. You also must install the redhat-lsb package if it is not installed.
When the file is uploaded to the server and all pre-requisite packages are installed, you can
proceed with the installation, as shown in Example 4-16.
Tip: For the installation to succeed, ensure that the required packages are installed. On
Red Hat systems, install the redhat-lsb package. On SUSE systems, install the insserv
package. In both cases, install bzip2.
When the installer is started, you are presented with the International License Agreement for
Non-Warranted Programs. To complete the installation, enter 1 to accept the license
agreement and complete the installation.
When the installation completes, you must configure the proxy server to listen for
connections. You can do this by editing the configuration file supportcenter/proxy.conf,
which is in the /etc directory. The minimum modification required is to edit the fields
ListenInterface and ListenPort. By default, the file includes “?” as the value for both of
these fields.
To complete the configuration, specify the ListenInterface to be the interface name in Linux
that can access the IBM Spectrum Virtualize clusters. This can be determined by using the
ifconfig command, and identifying the interface that accesses the IBM Cloud private
network. Also, set the ListenPort to the TCP port number to listen on for remote support
requests. A sample configuration file is shown in Example 4-17.
# Mandatory configuration
# Network interface and port that the storage system will connect to
ListenInterface eth0
ListenPort 8988
#Remote support for SVC and Storwize systems on the following front servers
ServerAddress1 129.33.206.139
ServerPort1 443
ServerAddress2 204.146.30.139
ServerPort2 443
# Optional configuration
# Restricted user
# User nobody
# Log file
# LogFile /var/log/supportcenter_proxy.log
When the service is configured, it must be started so that the server begins listening for
requests. Optionally, you can also configure the service to start at system startup.
To start the service, you can use the service or systemctl command.
To have the service start at system startup, you can use the chkconfig command. Both of these
processes are shown in Example 4-18.
When the service is started, you are ready to configure IBM Spectrum Virtualize to use the
proxy to initiate remote support requests.
Chapter 5. Typical use cases for IBM Spectrum Virtualize for Public Cloud
Figure 5-1 The two major deployment models for Public Cloud
Cloud-native implementations (that is, whole IT services deployed in the Public Cloud) are
composed of several use cases, all with the lowest common denominator of having a full
application deployment in the Public Cloud data centers. The technical details and final
architecture, along with roles and responsibilities, depend on whether SaaS, PaaS, or IaaS is
used. Within the IaaS domain, the transparency of cloud services is the highest because the
user’s visibility (and responsibility) into the application stack is much deeper compared to the
other delivery models. On the other hand, the burden of deployment is higher because all the
components must be designed from the server up. At the time of writing, IBM Spectrum
Virtualize for Public Cloud is framed only within the IaaS cloud delivery model, allowing users
to interact with their storage environment as they did on-prem, which provides more granular
control over performance.
The drivers that motivate businesses toward cloud-native deployment range from capital and
operating expense reduction, better resource management, and controls against shadow IT,
to more flexibility and scalability, along with a drastically improved reach in delivering IT
services because of the global footprint of the cloud data centers.
The cloud environment is highly focused on standardization and automation at its core.
Because of this focus, the full spectrum of features and customization that are available in a
typical on-premise or outsourcing deployment might not be natively available in the cloud
catalog.
Nevertheless, the client does not lose performance and capabilities when deploying a
cloud-native application. In this sense, storage virtualization with IBM Spectrum Virtualize
for Public Cloud allows the IT staff to maintain the technical capabilities and skills to deploy,
run, and manage highly available and highly reliable cloud-native applications in a Public
Cloud.
In this context, the IBM Spectrum Virtualize for Public Cloud acts as a bridge between the
standardized cloud delivery model and the enterprise assets the client leverages in their
traditional IT environment.
Cloud deployment does not guarantee 100% uptime, that backups are available by
default, or even that the application is automatically replicated between different sites. These
security, availability, and recovery features are likely not the client’s responsibility if the
service is delivered in the SaaS model. They are partially the user’s responsibility in PaaS,
but they are entirely the client’s design responsibility in the IaaS model.
Having a reliable cloud deployment means meeting the required Service Level Agreement
(SLA): a guaranteed service availability and uptime. Companies that use Public Cloud IaaS
can meet the required SLAs by implementing highly available solutions and duplicating the
infrastructure in the same data center, or in two or more in-campus data centers (for example,
IBM Dallas10 and Dallas09), to maintain business continuity in case of failures. If business
continuity is not enough to reach the desired SLA, a disaster recovery (DR) implementation
that splits the application across multiple cloud data centers (usually with a distance of at
least 300 km [186.4 miles]) protects against a major disaster in the campus area.
The following highly available deployment models are available for an application that is fully
deployed on Public Cloud:
On a single primary site
All of the solution’s components are duplicated (or more) within the same data center. This
solution tolerates only the failure of single components, not the unavailability of the
data center.
On multi-site
The architecture is split among multiple cloud data centers within the same campus to
tolerate the failure of an entire data center, or is spread globally to recover the solution in
case of a major disaster affecting the campus area.
Highly available cloud deployment on a single primary site
When fully moving an application to cloud IaaS as a primary site for service delivery, a
reasonable approach is to implement at least a highly available architecture, with each
component (servers, network components, and storage) redundant and without any single
point of failure (SPOF).
Within the single-site deployment, storage is usually deployed as native cloud storage. By
leveraging the Public Cloud storage catalog, users benefit from the intrinsic availability (and
SLAs) of the storage service, whether it is object storage (for example, IBM Cloud Object
Storage), file storage, or block storage. IBM Cloud block storage (delivered in Endurance or
Performance format) is natively highly available (with multiple 9s of availability).
A typical use case of the IBM Cloud highly available architecture is a VMware environment
where physical hosts are N+1 with datastores that are hosted on the cloud block storage and
shared simultaneously among multiple hosts.
IBM Spectrum Virtualize for Public Cloud, when deployed as a clustered pair of Intel bare-metal
servers, mediates the cloud block storage to the workload hosts. In the specific context of a
single-site deployment, IBM Spectrum Virtualize for Public Cloud adds features that enhance
the Public Cloud block-storage offering. At the storage level, it resolves some limitations that
result from the standardized model of the Public Cloud providers: a maximum number of
LUNs per host, a maximum volume size, and poor granularity in the choice of tiers for storage
snapshots.
IBM Spectrum Virtualize for Public Cloud also provides a view of storage management other
than the cloud portal. The cloud portal gives a high-level view of the storage infrastructure
and some limited operations at the volume level (such as volume size, IOPS tuning, and
snapshot space increase), but it does not provide a holistic view of the storage from the
application perspective. More detailed reasons are highlighted in Table 5-1.
Table 5-1 Benefits of IBM Spectrum Virtualize for Public Cloud on single site deployment

Feature: Single point of control for cloud storage resources
Benefits:
– Designed to increase management efficiency and to help support application availability

Feature: Pools the capacity of multiple storage volumes
Benefits:
– Helps to overcome volume size limitations
– Helps to manage storage as a resource to meet business requirements, and not just as a set of independent volumes
– Helps administrators to better deploy storage as required beyond traditional “islands”
– Can help to increase the use of storage assets
– Insulates applications from maintenance or changes to the storage volume offering

Feature: Easy-to-use IBM Storwize family management interface
Benefits:
– Single interface for storage configuration, management, and service tasks, regardless of the configuration available from the Public Cloud portal
– Helps administrators use storage assets and volumes more efficiently
– IBM Spectrum Control Insights and IBM Spectrum Protect for additional capabilities to manage capacity and performance

Feature: Advanced network-based copy services
Benefits:
– Copy data across multiple storage systems with IBM FlashCopy
– Copy data across metropolitan and global distances as needed to create high-availability storage solutions between multiple data centers

Feature: Thin provisioning and snapshot replication
Benefits:
– Reduce volume requirements by using storage only when data changes
– Improve storage administrator productivity through automated on-demand storage provisioning
– Snapshots available on lower tier storage volumes
The active-passive model is usually the best fit for many cloud use cases, including DR, as
shown in 5.2, “Disaster recovery” on page 124. The ability to provision compute resources
on demand in a few minutes, with only the storage always provisioned and kept aligned with a
specific RPO, is a major driver for a cost-effective DR infrastructure and lowers the TCO.
The replication among multiple cloud data centers is no different from the traditional
approach, except for the number of available tools in cloud. The considerations that are
described in 1.3.1, “Hybrid scenario: on-premises to IBM Cloud” on page 12 for a hybrid
environment are still applicable.
Solutions that are based on hypervisor or application-layer replication, such as VMware,
Veeam, and Zerto, are available in the Public Cloud. However, if the environment is
heterogeneous (for example, virtual servers, bare-metal servers, and multiple hypervisors),
storage-based replication is still the preferable approach.
Storage-based replication is available from almost every cloud provider. IBM Cloud, for
example, allows block-level replication on IBM Endurance storage: a volume can be
replicated to another cloud site with a minimum recovery point objective (RPO) of 60 minutes.
The replication features are specific to the cloud offering and are neither editable nor tunable.
For example, the remote copy of Endurance Storage is not accessible until unavailability of
the primary is declared, and its replica is limited to specific data center pairs.
For this reason, this model does not fit all clients’ requirements. Some of these gaps can be
covered by an application-level or hypervisor-level replica, which is limited to a specific
environment and a specific infrastructure to replicate (for example, VMware with vReplicator
and DR startup managed by SRM).
However, asynchronous mirroring that uses Global Mirror with Change Volumes (GMCV)
allows for a minimum RPO of 2 minutes (the change volume cycle period ranges from
1 minute to 1 day, and we recommend setting the cycle period to half of the RPO) and is
capable of replicating a heterogeneous environment.
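As an illustration of that guideline: for a target RPO of 10 minutes, the cycle period should be set to about 300 seconds. A minimal sketch of the command, with a placeholder relationship name, is:

svctask chrcrelationship -cycleperiodseconds 300 gmcv_rel01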
Also, Spectrum Virtualize supports several third-party integrations, such as VMware Site
Recovery Manager (SRM), to automate failover at the application layer while the storage
replica is used. SRM also automates the taking of storage snapshots with FlashCopy for
testing purposes.
Technology is just one crucial piece of a disaster recovery (DR) solution, and not the one that
dictates the overall approach.
In this section, we talk about the IBM Spectrum Virtualize for IBM Cloud DR approach and its
benefits. In addition, in Appendix A, “Guidelines for disaster recovery solution in the Public
Cloud” on page 157, we cover the suggested practices and some considerations that you
should take into account when creating a DR solution.
The disaster recovery strategy is the predominant aspect of the overall resiliency solution
because it determines which classes of physical events the solution can address, sets the
requirements in terms of distance, and sets constraints on technology.
Considering the cloud space, most cloud providers offer a redundant infrastructure with
several layers of resiliency:
Local: A physical and logical segregation zone (for example, availability zone or availability
set) within a Cloud Service Provider (CSP) location (physical datacenter) that is
independent from other zones for what pertains to power supply, cooling, and networking.
Site: CSPs group multiple sites in a so-called region. Using different sites within the same
region offers a better level of protection in cases of limited natural disaster, compared to
two different zones on a single site because sites in the same region are usually in close
proximity (tens of kilometers or miles).
Region: Using two sites in two regions in the same geography represents the top level of
protection against natural disasters because sites are generally over 400 km (248 miles)
apart.
Geography: Selecting two sites in different geographies extends the level of protection
against natural disasters because geographies are generally over 1500 km (1000 miles)
apart, which represents the widest protection option.
IBM Cloud operates over 60 datacenters in six regions and 18 availability zones around the
world, as shown in Figure 5-2.
Figure 5-2 IBM Cloud data centers, with more being added all the time
For more information about data cloud centers, see this IBM Cloud web page.
Table 5-2 Drivers, challenges, and capabilities that IBM Spectrum Virtualize for Public Cloud provides

Adoption driver: The promise of reduced Opex and Capex
Challenges:
– Hidden costs
– Availability of data when needed
Spectrum Virtualize for IBM Public Cloud capabilities:
– Optimized for Cloud Block Storage
– Easy Tier solution to optimize the most valuable storage usage, maximizing Cloud Block Storage performance
– Thin provisioning to control the storage provisioning
– Snapshots feature for backup and DR solutions
– High availability cluster architecture

Adoption driver: Leveraging the cloud for Backup and Disaster Recovery
Challenges:
– Covering virtual and physical environments
– Solutions to meet a range of RPO/RTO needs
Spectrum Virtualize for IBM Public Cloud capabilities: A storage-based, serverless replication with options for low RPO/RTO:
– Global Mirror for asynchronous replication with an RPO close to “0”
– Metro Mirror for synchronous replication
– Global Mirror with Change Volumes for asynchronous replication with a tunable RPO
– Supports virtualized and bare-metal applications (unlike VM-based solutions)
At the time of this writing, IBM Spectrum Virtualize for Public Cloud includes the following
DR-related features:
Can be implemented in over 60 data centers in 19 countries. For more information, see
this web page.
Was first available on IBM Cloud; Amazon Web Services Marketplace availability
announced for June 25, 2019.
Is deployed on IBM Cloud Infrastructure.
Offers data replication with Storwize family, V9000, IBM SAN Volume Controller,
FlashSystem 9100 or VersaStack and Public Cloud.
Supports 2, 4, 6, or 8 node clusters in IBM Cloud.
Offers data services for IBM Cloud Block Storage.
Offers common management with IBM Spectrum Virtualize GUI with full admin access
and dedicated instance.
No incoming data transfer cost.
No bandwidth cost within IBM Cloud.
Replicates between IBM Cloud data centers.
5.2.2 Common DR scenarios with IBM Spectrum Virtualize for Public Cloud
The following most common scenarios can be implemented with IBM Spectrum Virtualize for
Public Cloud:
IBM Spectrum Virtualize Hybrid Cloud disaster recovery for “Any to Any”, Physical and
Virtualized applications as shown in Figure 5-3 on page 127.
Figure 5-3 Hybrid cloud DR: applications and virtual machines (for example, Oracle and SAP) in the client data center fail over to, and fail back from, IBM Spectrum Virtualize on IBM Cloud bare metal servers, with real-time IP replication through Global Mirror from heterogeneous on-premises storage to cloud block storage
IBM Spectrum Virtualize for Public Cloud DR solution with VMware Site Recovery
Manager (SRM) as shown in Figure 5-4.
As shown in Figure 5-4, a customer can deploy a storage replication infrastructure in a Public
Cloud with the IBM Spectrum Virtualize for Public Cloud.
The following are the details of this scenario:
Primary storage sits in the customer’s physical data center, where the customer has an
on-premises SVC cluster or IBM Storwize solution installed.
Secondary storage sits on the DR site which includes a virtual IBM Spectrum Virtualize
cluster running in the Public Cloud.
The virtual IBM Spectrum Virtualize cluster manages the storage provided by Cloud
Service Provider (CSP).
A replication partnership that uses Global Mirror with Changed Volumes is established
between on-premises IBM Spectrum Virtualize cluster or Storwize solution and the virtual
IBM Spectrum Virtualize cluster to provide disaster recovery.
When talking about disaster recovery, it is important to mention that IBM Spectrum Virtualize
for Public Cloud is one piece of a more complex solution that has prerequisites,
considerations, and recommended practices that need to be applied.
Note: Refer to Appendix A, “Guidelines for disaster recovery solution in the Public Cloud”
on page 157, where we cover preferred practices for designing a resiliency solution and
considerations for using the cloud space as a possible alternative site.
Regardless of your business needs, FlashCopy within the IBM Spectrum Virtualize is flexible
and offers a broad feature set, which makes it applicable to many scenarios.
5.3.2 FlashCopy mapping
The association between the source volume and the target volume is defined by a FlashCopy
mapping. A FlashCopy mapping can have three different types, four property attributes
(clean rate, copy rate, autodelete, and incremental), and seven different states, all of which
are described later in this chapter. The actions that users can perform on a FlashCopy
mapping are as follows (a CLI sketch follows the list):
Create: Define a source, a target and set the properties of the mapping
Prepare: The system needs to be prepared before a FlashCopy copy starts. It basically
flushes the cache and makes it “transparent” for a short time, so no data is lost.
Start: The FlashCopy mapping is started and the copy begins immediately. The target
volume is immediately accessible.
Stop: The FlashCopy mapping is stopped (either by the system or by the user). Depending
on the state of the mapping, the target volume is usable or not.
Modify: Some properties of the FlashCopy mapping can be modified after creation.
Delete: Delete the FlashCopy mapping. This does not delete any of the volumes (source
or target) from the mapping.
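The following is a minimal CLI sketch of this lifecycle; the volume names, mapping name, and copy rate are placeholders, not values used elsewhere in this book:

# Create: define the mapping between source and target and set its properties
svctask mkfcmap -source app_vol01 -target app_vol01_copy -copyrate 50 -name fcmap01
# Prepare: flush the cache so that the point-in-time image is consistent
svctask prestartfcmap fcmap01
# Start: trigger the copy; the target volume is immediately accessible
svctask startfcmap fcmap01
# Stop: stop the mapping (whether the target remains usable depends on the mapping state)
svctask stopfcmap fcmap01
# Delete: remove the mapping; neither the source nor the target volume is deleted
svctask rmfcmap fcmap01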
The source and target volumes must be the same size. The minimum granularity that IBM
Spectrum Virtualize supports for FlashCopy is an entire volume. It is not possible to use
FlashCopy to copy only part of a volume.
Important: As with any point-in-time copy technology, you are bound by operating system
and application requirements for interdependent data and the restriction to an entire
volume.
The source and target volumes must belong to the same IBM Spectrum Virtualize system, but
they do not have to be in the same I/O group or storage pool.
Volumes that are members of a FlashCopy mapping cannot have their size increased or
decreased while they are members of the FlashCopy mapping.
All FlashCopy operations occur on FlashCopy mappings. FlashCopy does not alter the
source volumes. However, multiple operations can occur at the same time on multiple
FlashCopy mappings because of the use of consistency groups.
FlashCopy mappings can be part of a consistency group, even if there is only one mapping in
the consistency group. If a FlashCopy mapping is not part of any consistency group, it is
referred to as stand-alone.
Both of these layers have various levels and methods of caching data to provide better speed.
Because the IBM SAN Volume Controller and, therefore, FlashCopy sit below these layers,
they are unaware of the cache at the application or operating system layers.
To ensure the integrity of the copy that is made, it is necessary to flush the host operating
system and application cache for any outstanding reads or writes before the FlashCopy
operation is performed. Failing to flush the host operating system and application cache
produces what is referred to as a crash consistent copy.
The resulting copy requires the same type of recovery procedure, such as log replay and file
system checks, that is required following a host crash. FlashCopies that are crash consistent
often can be used following file system and application recovery procedures.
Various operating systems and applications provide facilities to stop I/O operations and
ensure that all data is flushed from host cache. If these facilities are available, they can be
used to prepare for a FlashCopy operation. When this type of facility is unavailable, the host
cache must be flushed manually by quiescing the application and unmounting the file system
or drives.
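On a Linux host, for example, a minimal sketch of manually quiescing a file system before the FlashCopy mapping is started might look like the following; the mount point is a placeholder, and fsfreeze requires a file system that supports freezing:

# Flush dirty buffers and freeze (or unmount) the file system that holds the application data
sync
fsfreeze -f /mnt/appdata
# ... start the FlashCopy mapping on the IBM Spectrum Virtualize system ...
fsfreeze -u /mnt/appdata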
The target volumes are overwritten with a complete image of the source volumes. Before the
FlashCopy mappings are started, it is important that any data that is held on the host
operating system (or application) caches for the target volumes is discarded. The easiest way
to ensure that no data is held in these caches is to unmount the target volumes before the
FlashCopy operation starts.
Preferred practice: From a practical standpoint, when you have an application that is
backed by a database and you want to make a FlashCopy of that application’s data, it is
sufficient in most cases to use the write-suspend method that is available in most modern
databases, because the database maintains strict control over I/O.
This method is in contrast to flushing data from both the application and the backing
database, which is always the suggested method because it is safer. However, the
write-suspend method can be used when such facilities do not exist or when your
environment is time sensitive.
IBM Spectrum Protect Snapshot protects data with integrated, application-aware snapshot
backup and restore capabilities using FlashCopy technologies in the IBM Spectrum
Virtualize.
You can protect data that is stored by IBM DB2, SAP, Oracle, Microsoft Exchange, and
Microsoft SQL Server applications. You can also create and manage volume-level snapshots
for file systems and custom applications.
An ideal case with regard to a hybrid cloud solution is the relocation of a specific, particularly
well-suited segment of the environment, such as development. Another might be a specific
application group that requires neither regulatory isolation nor low response time integration
with on-premises applications.
While performance might or might not be a factor, it should not be assumed that cloud
deployments automatically provide diminished performance. Depending on the location of
the cloud service data center and the intended audience for the migrated service, the
performance could conceivably be superior to the on-premises performance before migration.
In summary, moving a workload into the cloud may provide similar functionality with better
economies because of the scaling of physical resources at the cloud provider. Moreover, the
cost of services in the cloud is structured, easily measurable, and predictable.
The first method was discussed in previous sections and is essentially the same process as
disaster recovery. The only difference is that instead of a persistent replication, after the
initial synchronization is completed, the goal is to schedule the cutover of the application onto
the compute nodes in the cloud environment that are attached to the IBM Spectrum Virtualize
storage. This method is likely the preferred method for bare-metal Linux or Microsoft Windows
environments.
Host-side mirroring would require the server to have concurrent access to both local and
remote storage, which is not feasible. Also, because the objective is to relocate the workload
(both compute and storage) into the cloud environment, that is more easily accomplished by
replicating the storage and, once synchronized, bringing up the server in the cloud
environment and making the appropriate adjustments to the server for use in the cloud.
The second method is largely impractical because it requires the host to be able to access
both source and target simultaneously, and the practical impediments to creating an iSCSI
connection (the only connection method currently available for IBM Spectrum Virtualize in
the Public Cloud) from on-premises host systems into the cloud are beyond the scope of this
use case. Traditional VMware Storage vMotion is similar to this approach but, again, it would
require the target storage to be visible via iSCSI to the existing hosts.
The third method entails the use of third party software or hardware to move the data from
one environment to another. The general idea is that the target system would have an
operating system and some empty storage provisioned to it that would act as a landing pad
for data that is on the source system. Going into detail about these methods is also outside
the scope of this document, but suffice it to say that the process would be no different
between an on-premises to cloud migration as it would be to an on-premises to on-premises
migration.
VMware environments, however, do have some interesting options that use either a
combination of the first two methods or something similar to the second and the third
methods. The first of the two options is a creative migration method that involves setting up a
pair (or multiple pairs) of transit datastores that remain in a remote copy relationship (see
Figure 5-5 on page 133). After these are in sync (or, if they are set up from scratch, they can
be created with the sync flag and then assigned to the VMware clusters), selected VMware
guests can be storage vMotioned into these datastores.
After that process is complete, a batch of guests can be scheduled for cutover with Site
Recovery Manager. After cutover, the guests can be storage vMotioned out of the cloud
transit datastore into a permanent datastore in the IBM Cloud environment.
For ESX clusters on vSphere 5.1 or higher, there are circumstances under which vMotion
without shared storage is possible with the appropriate licensing. As long as the conditions
are met for the two vSphere clusters, a guest can be moved from an ESX host in one cluster
to another, and the data moves to a datastore that is visible to the target ESX host.
This approach falls somewhere between the second and the third migration methods because
it employs something similar to mirroring but really leverages VMware as a migration
appliance. For more information, see the VMware documentation.
5.4.3 Host provisioning
In addition to the replication of data, it is necessary for compute nodes and networking to be
provisioned within the Cloud provider upon which to run the relocated workload. Currently, in
the IBM Cloud, bare-metal and virtual servers are available. Within the bare-metal options,
Intel processor based machines and Power8 machines with OpenPOWER provide high
performance on Linux-based platforms. As Spectrum Virtualize in the Public Cloud matures
and expands into other Cloud Service Providers, other platforms might become available.
Chapter 6. Supporting the solution
Figure 6-1 IBM Spectrum Virtualize for Public Cloud solution components
In this solution, the cloud provider is responsible for providing the infrastructure, network
components, and storage, and for support and assistance for this portion. The cloud user or
any involved third party is responsible for deploying and configuring the solution from the
network layer up to the OS and the installed software. IBM Systems support is responsible for
providing support and assistance with the IBM Spectrum Virtualize application.
In its current state, the solution consists of multiple parties with different roles and
responsibilities. For this reason, it is good practice in such cross-functional projects and
processes to clarify roles and responsibilities with a workflow definition for handling tasks and
problems when they arise. In this sense, a responsibility assignment matrix, also known as a
RACI matrix, describes the participation by various roles in completing tasks or deliverables
for a project or business process, split into (R) Responsible, (A) Accountable, (C)
Consulted, and (I) Informed.
The RACI matrix is specific to each solution deployment: how the cloud service is
provided, who the final user is, which parties are involved, and so forth. To assist in
creating a workflow for handling problems when they arise, we created Table 6-1 as an
example.
Table 6-1 Example responsibility assignment (columns: Situation, Client, Cloud Provider, Spectrum Virtualize)
In the situations where the cloud provider is responsible or accountable, the client should
collect as much detail about the problem that is known and open a ticket with the cloud
provider. In the situations where IBM Spectrum Virtualize is responsible or accountable, the
client should collect as much detail about the problem and diagnostic data surrounding the
event and raise a PMR with IBM. In the situations where the client is responsible, it is up to
the client to be as detailed as possible in any requests or questions raised to the cloud
provider or IBM Spectrum Virtualize or any other third party involved in the support.
On this page, you can review support documentation, review tickets, and create tickets, as
shown in Figure 6-3.
After a ticket is generated, a representative from IBM Cloud support reviews and updates the
ticket. An email is sent to the master account and to all the customer representative accounts
that are assigned to the ticket or entitled to receive it, as shown in Figure 6-3. The accuracy of
the email addresses must be verified in advance for the correct functionality of email notifications.
After you receive a Problem Management Record (PMR) or ticket number, you can begin
working with support to troubleshoot the problem. You might be asked to collect diagnostic
data or to open a remote support session for an IBM Support representative to dial in to the
system and investigate.
Complete the following steps to configure email notifications; the procedure emphasizes what
is specific to Call Home:
1. Prepare your contact information that you want to use for the email notification and verify
the accuracy of the data. From the GUI’s left menu, select Settings →Notifications (see
Figure 6-4).
2. Select Email and then click Enable Notifications (see Figure 6-5 on page 139).
For the correct functionality of email notifications, ask your network administrator to confirm
that Simple Mail Transfer Protocol (SMTP) is enabled on the network, that it is not blocked by
firewalls, and that the foreign destination “@de.ibm.com” is not blocked.
Be sure to test the accessibility to the SMTP server by using the telnet command (port
25 for a non-secured connection, port 465 for Secure Sockets Layer (SSL)-encrypted
communication) from any server in the same network segment.
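For example (the SMTP server address shown is a placeholder; replace it with your company SMTP server):

telnet 10.0.0.25 25
telnet 10.0.0.25 465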
3. After clicking Next on the welcome panel, enter the information about the location of the
system (see Figure 6-6) and the contact information of the IBM Spectrum Virtualize
administrator (see Figure 6-7 on page 140) so that IBM Support can make contact. Always
keep this information current.
4. Configure the IP address of your company SMTP server, as shown in Figure 6-8. When
the correct SMTP server is provided, you can test the connectivity by using the Ping
option to its IP address. You can configure additional SMTP servers by clicking the + at the
end of the entry line.
5. The summary window opens. Verify it and click Finish. You are returned to the Email
Settings window, where you can verify the email address of IBM Support
([email protected]) and optionally add local users who must also receive notifications
(see Figure 6-9).
The default support email address [email protected] is predefined by the system to
receive Error Events and Inventory; we recommend not changing these settings.
You can modify or add local users by using Edit mode after the initial configuration was
saved.
The Inventory Reporting function is enabled by default for Call Home. Rather than
reporting a problem, an email is sent to IBM that describes your system hardware and
critical configuration information. Object names and other information, such as IP
addresses, are not included. By default the inventory email is sent on a weekly basis,
allowing an IBM Cloud service to analyze and inform you if the hardware or software that
you are using requires an update because of any known issues.
Figure 6-9 shows the configured email notification and Call Home settings.
6. After completing the configuration wizard we can test the email function. To do so, enter
Edit mode, as shown in Figure 6-10. In the same window, email recipients can be defined
or any contact and location information can be changed as needed.
Note: The Test button appears for new email users only after the configuration is first saved and then edited again.
Figure 6-12 Disabling or enabling email notifications
Assuming the problem encountered was an unexpected node restart that has logged a
2030 error, we collect the default logs plus the most recent statesave from each node to
capture the most relevant data for support.
Note: When a node unexpectedly restarts, it first dumps its current statesave
information before it restarts to recover from an error condition. This statesave is critical
for support to analyze what occurred. Collecting a snap type 4 creates statesaves at the
time of the collection, which is not useful for understanding the restart event.
The procedure to create the snap on an IBM Spectrum Virtualize system, including the
latest statesave from each node, starts. This process might take a few minutes (see
Figure 6-15).
Collecting logs by using the CLI
Complete the following steps to use the CLI to collect and upload a support package as
requested by IBM Support:
1. Log in to the CLI and run the svc_snap command that matches the type of snap
requested by IBM Support:
– Standard logs (type 1):
svc_snap upload pmr=ppppp,bbb,ccc gui1
– Standard logs plus one existing statesave (type 2):
svc_snap upload pmr=ppppp,bbb,ccc gui2
– Standard logs plus most recent statesave from each node (type 3):
svc_snap upload pmr=ppppp,bbb,ccc gui3
– Standard logs plus new statesaves:
svc_livedump -nodes all -yes
svc_snap upload pmr=ppppp,bbb,ccc gui3
2. We collect the type 3 (option 3) and have it automatically uploaded to the PMR number
that is provided by IBM Support, as shown in Example 6-1.
3. If you do not want to automatically upload the snap to IBM, do not specify the ‘upload
pmr=ppppp,bbb,ccc’ part of the commands. In this case, when the snap creation
completes, it creates a file named in the following format:
/dumps/snap.<panel_id>.YYMMDD.hhmmss.tgz
It takes a few minutes for the snap file to complete (longer if statesaves are included).
4. The generated file can then be retrieved from the GUI under the Settings →Support →
Manual Upload Instructions twisty →Download Support Package. Click Download
Existing Package, as shown in Figure 6-16.
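Alternatively, the package can be listed and copied off the system from the command line. The following is a minimal sketch; the cluster IP address and the panel ID and timestamp in the file name are placeholders:

# List the dump files on the configuration node, then copy the snap to a local workstation
IBM_Spectrum_Virtualize::superuser>lsdumps
jfincher$ scp superuser@10.183.120.10:/dumps/snap.78E1234-1.190615.093000.tgz .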
6.3.4 Uploading files to the Support Center
If you chose not to have IBM Spectrum Virtualize upload the support package automatically, it
might still be uploaded for analysis by using the Enhanced Customer Data Repository
(ECuRep). Any uploads are associated with a specific problem management report (PMR).
The PMR is also known as a service request and is a mandatory requirement when
uploading.
4. After the files are selected, click Upload to continue, and follow the directions.
Typically, an IBM Spectrum Virtualize cluster is initially configured with the following IP
addresses:
One service IP address for each IBM node.
One cluster management IP address, which is set when the cluster is created.
The SAT is available even when the management GUI is not accessible. The following
information and tasks can be accomplished with the Service Assistance Tool:
Status information about the connections and the nodes
Basic configuration information, such as configuring IP addresses
Service tasks, such as restarting the Common Information Model (CIM) object manager
(CIMOM) and updating the WWNN
Details about node error codes
Details about the hardware such as IP address and Media Access Control (MAC)
addresses.
The SAT GUI is available by using a service assistant IP address that is configured on each
node. It can also be accessed through the cluster IP addresses by appending /service to the
cluster management IP.
If the clustered system is down, the only method of communicating with the nodes is through
the SAT IP address directly. Each node can have a single service IP address on Ethernet
port 1 and should be configured for all nodes of the cluster, including any Hot Spare Nodes.
To open the SAT GUI, enter one of the following URLs into any web browser:
http(s)://<cluster IP address of your cluster>/service
http(s)://<service IP address of a node>/service
3. The currently selected Spectrum Virtualize node is displayed in the upper left corner of the
GUI. In Figure 6-21, this is node ID 1. Select the desired node in the Change Node section
of the window. The details in the upper left change to reflect the selected node.
Note: The SAT GUI provides access to service procedures and shows the status of the
nodes. It is advised that these procedures should only be carried out if directed to do so by
IBM Support.
For more information about how to use the SA Tool, see this website.
Note: Clients who have purchased Enterprise Class Support (ECS) are entitled to IBM
Support using Remote Support Assistance to quickly connect and diagnose problems.
However, because IBM Support might choose to use this feature on non-ECS systems at
their discretion, we recommend configuring and testing the connection on all systems.
If you are enabling Remote Support Assistance, then ensure that the following prerequisites
are met:
Ensure that call home is configured with a valid email server.
Ensure that a valid service IP address is configured on each node on the IBM Spectrum
Virtualize.
If your IBM Spectrum Virtualize is behind a firewall or if you want to route traffic from
multiple storage systems to the same place, you must configure a Remote Support Proxy
server. Before you configure remote support assistance, the proxy server must be
installed and configured separately. During the set-up for support assistance, specify the
IP address and the port number for the proxy server on the remote support centers panel.
If you do not have firewall restrictions and the IBM Spectrum Virtualize nodes are directly
connected to the Internet, request that your network administrator allow connections to
129.33.206.139 and 204.146.30.139 on port 22 (a connectivity check sketch follows this list).
Both uploading support packages and downloading software require direct connections to
the Internet. A DNS server must be defined on your IBM Spectrum Virtualize for both of
these functions to work.
To ensure that support packages are uploaded correctly, configure the firewall to allow
connections to the following IP addresses on port 443: 129.42.56.189, 129.42.54.189, and
129.42.60.189.
To ensure that software is downloaded correctly, configure the firewall to allow connections
to the following IP addresses on port 22: 170.225.15.105,170.225.15.104,
170.225.15.107, 129.35.224.105, 129.35.224.104, and 129.35.224.107.
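A minimal sketch of checking these firewall rules from a host in the same network follows; any tool that can test a TCP connection works, and the nc utility is used here as an example:

# Remote Support Assistance front servers (SSH, port 22)
nc -vz 129.33.206.139 22
nc -vz 204.146.30.139 22
# Support package upload servers (HTTPS, port 443)
nc -vz 129.42.56.189 443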
Figure 6-22 shows a pop-up that appears in the GUI after updating to V8.1 to prompt you to
configure your IBM Spectrum Virtualize for Remote Support. You can choose not to enable it,
to open a tunnel when needed, or to open a permanent tunnel to IBM.
Choosing to set up support assistance opens a wizard to guide us through the configuration.
Figure 6-24 on page 153 shows the first wizard panel, where we can choose not to enable
remote assistance by selecting I want support personnel to work on-site only or enable
remote assistance by choosing I want support personnel to access my system both
on-site and remotely. We chose to enable remote assistance and click Next.
Figure 6-24 Remote Support wizard enable or disable
The next panel, shown in Figure 6-25, lists the IBM Support center IP addresses and the SSH
port that must be open in your firewall. Here we can also define a Remote Support
Assistance Proxy if we have multiple IBM Spectrum Virtualize clusters in the same cloud, so
that firewall configuration is required only for the proxy server and not for every storage
system. We do not have a proxy server, so we leave the field blank and click Next.
When we have completed the remote support set-up, we can view the current status of any
remote connection, start a new session, test the connection to IBM, and reconfigure the
setup. As shown in Figure 6-27, we successfully tested the connection. Now, click Start New
Session to open a tunnel for IBM Support to connect.
Appendix A. Guidelines for disaster recovery solution in the Public Cloud
Your design should not consider only a single plan, but several possible alternatives, so that it
can adapt and continue to work in a degraded or impacted ecosystem.
Costs are, of course, always an important consideration. Nevertheless, you should have a
clear view of the minimal acceptable conditions and of the more optional elements that can
be eliminated when cost is a challenge, together with a full appreciation of the consequences
of omitting those elements.
Your design should not be limited to IT only. Your IT depends on the following factors, which
can be impacted as well:
Key personnel availability
External network services
Dependencies on critical providers
Road conditions (for example, when planning physical transfers of personnel or backups)
Availability of disaster recovery (DR) resources when required
Recovery tiers
The DR solution can have recovery tiers with a different set of appropriate Recovery Time
Objective (RTO) and Recovery Point Objective (RPO) requirements for each wave or tier.
P1 Class: A near-immediate restart (4-12 hours), which can be implemented with
dedicated stand-by resources at the DR site associated with technology-based data
replication. Although this RTO window might appear to be too wide, consider that your
restart takes place in an emergency; in this condition, the best you can achieve is the
equivalent of restarting from a power failure at your on-premises site. P1 might use
asynchronous replication with a possible delay of up to 5 minutes (GMCV with a
150-second cycle period); see the worked example after this list.
P2 Class: With a restart within one day, you could leverage shared or re-usable assets at
the alternate site. Since the time is short, usually this spare capacity must be already there
and cannot be acquired at the time of the disaster (ATOD).
P3 Class: With a restart after two days, you can use additional compute power that can be
freed-up or provisioned on-demand at the alternate site.
Note: The P2 and P3 classes could have either the same RPO as P1 or longer RPOs,
depending on the business process and needs. For instance, if the data is back-office
documentation that is refreshed only on a quarterly basis, or web servers, then an RPO
of a day or a week might be acceptable and might lend itself to tape restoration instead
of storage replication.
The previous list is just a general definition; you might have more tiers or different
combinations of requirements.
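To make the relationship between the tiers and the replication technology more concrete, the following Python sketch models the classes above as simple data and applies the rule of thumb that the worst-case RPO of GMCV is roughly twice the cycle period, which matches the 150-second cycle and 5-minute figure quoted for the P1 class. The RTO ranges used for P2 and P3 are illustrative assumptions, not product values.

"""Illustrative model of the recovery tiers described above (hypothetical values)."""
from dataclasses import dataclass

@dataclass
class RecoveryTier:
    name: str
    rto_hours_min: int
    rto_hours_max: int
    replication: str

def gmcv_worst_case_rpo_seconds(cycle_period_s: int) -> int:
    # Worst case: data at the DR site lags the source by up to two cycle periods.
    return 2 * cycle_period_s

# RTO ranges for P2 and P3 are assumptions used only for illustration.
TIERS = [
    RecoveryTier("P1", 4, 12, "GMCV asynchronous replication"),
    RecoveryTier("P2", 12, 24, "shared or reusable assets already at the DR site"),
    RecoveryTier("P3", 48, 72, "compute freed up or provisioned on demand"),
]

if __name__ == "__main__":
    rpo = gmcv_worst_case_rpo_seconds(150)
    print(f"P1 worst-case RPO with a 150 s GMCV cycle: {rpo} s (~{rpo // 60} min)")
    for t in TIERS:
        print(f"{t.name}: restart within {t.rto_hours_min}-{t.rto_hours_max} h, {t.replication}")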
Whatever the recovery tier is, four basic things are essential to provide a usable DR solution:
A copy of your production data in the DR site according to the RPO requirement.
Alternate IT resources (compute) to restart the IT operation in an emergency according to
the RTO requirement.
An alternate connection to your production network (WAN).
Ability to operate and run production at the alternate DR location.
The bigger the emergency, the more services (hence resources) you need to operate during
the crisis.
Compute power could be acquired on-demand leveraging the cloud service provider’s (CSP)
ability to scale the infrastructure.
On the storage side, you can opt to replicate and run the DR test simulation with low-cost,
low-performance storage, but that performance might not be enough to sustain actual
production. Thus, you might need to move your production data to a different tier of
storage. You have to evaluate what this means with your cloud provider, which means having
a clear picture of the effort involved in migrating the data to the new tier and the associated
monetary and time costs (the latter might impact your RTO).
Another thing to remember is that cloud is a different business from disaster recovery (DR).
Most DR providers actively use resource syndication to contain the costs. Resource
syndication means the same resource is used to provide service to different customers that
might not be exposed to the same concurrent event.
In other words, the compute resource that you use for your DR solution might also be used by
other customers of the DR provider, not sharing the same risk situation with you.
DR providers have consolidated this in years of experience in planning and running the DR
business, so their site planning is such that it offers a reasonable guarantee that their
customers in the same risk zone will have their contract honored with the availability of the
agreed upon resources.
Cloud providers do not syndicate, because this way of operating is not within their business
model. CSP planning algorithms do not take concurrent disaster recovery requests into
consideration when planning for a site; thus, in the case of concurrent on-demand
provisioning requests, they provide resources simply based on the timeline of requests until
they have exhausted the resources in that specific site. At this point, the CSP cannot
fulfill requests at that particular site in a reasonable time.
Tip: Be sure to check the CSP's SLA on provisioning and read all the caveats.
DR test
Disaster recovery testing is what you have to do to ensure your ability to be resilient, hoping
not to have to do it in real life.
There are two types of DR testing you can execute to verify your resiliency capability:
DR simulation
Switch-over
DR simulation
DR simulations are mainly done to verify and audit the emergency runbooks and to check the
RTO and RPO provided by the solution in an environment that resembles a real emergency
as closely as possible.
This means introducing disruptions on the replication network connections before interrupting
the communications among the sites, simulating the sudden loss of the primary site.
You can only do this if your data replication solution is resilient, and such a disruption does
not have a negative impact on the production, which would continue running on the primary
site. For instance, if you suddenly disrupted the partnership in an IBM Spectrum Virtualize
Metro Mirror relationship, any write activity to your master volumes would suspend. Similarly,
if you simulated a site failure with Enhanced Stretch Cluster by isolating the primary site from
the secondary site and the quorum, the primary site would go offline, by design, until
connectivity is restored. So, in IBM Spectrum Virtualize, the only likely scenario appropriate
for this test would be Global Mirror with Change Volumes (GMCV). Refer to section “Plan and
design for the worst case scenario” on page 158 for more information.
Thus, it is important that network streams flowing to the DR test environment are copies of
the real production. Usually these network flows are intercepted and duplicated at the primary
site (that receives the real flow), and sent to the DR site which operates on a separate
network from production.
Switch-over
In contrast to the DR simulation, in the switch-over test, production is moved from the primary
site to the DR site to verify and audit the ability to run and sustain production operations for an
extended period.
In this option we do not test the DR runbooks, as this would imply the emergency restart, and
would have an impact on the production.
Similarly, because this move impacts the production, you should be careful in performing this
switch to minimize all those impacts through scheduling, communication and planning.
Production operations are shut down at the primary site and started up at the DR site.
Replication is reversed once the switchover is complete and production is running at the DR
site.
Production runs in the DR site for whatever period you have chosen before returning to the
Primary site. During this period, your production site performs as the DR site until the next
switch-over.
Updates to the production running at the DR site are kept and replicated back to the primary
site.
DR test frequency
DR test frequency depends on many factors, but in general you should run a DR test at least
once per year. This is the bare minimum frequency accepted by auditors to prove your ability
to recover.
In setting the frequency of your cloud DR testing, you should strive for more than once a
year, depending on multiple elements specific to the cloud, such as the following:
How dynamic is your production environment? The more your production environment
changes, the more you need to exercise a DR test to verify that the changes introduced
have not affected your ability to recover.
How dynamic is your CSP infrastructure? As an example, if you have chosen to provision
some of the DR resources on demand, consider that you have no control over the type of
resources and technologies you will be able to find at the moment of a disaster.
In fact, the CSP may have changed the underlying technologies (for example, the servers)
or configurations (for example, network topologies) or service levels (for example, time to
provision new resources) since your last test. So, it is suggested to increase the testing
frequency in such a case, especially if you believe that your applications may be sensitive
to changes in the underlying infrastructure.
Of course, each test carries a cost associated with the use of the DR site, effort, and so on.
By leveraging automation, tools, and software products such as IBM Cloud Resiliency
Orchestrator, the operations required to run a DR test can be simplified, reducing the
associated costs and effort and allowing you to test more frequently. More automation will
also enhance overall recoverability and improve the recovery time objective (RTO).
On-premises to DR cloud considerations
There are some considerations when planning to use a cloud service provider to implement a
DR solution for a production running on-premises:
Resource pricing
On-demand resources are cheaper because they are usually billed by usage, which means you
pay only when you use the resource. It is important to evaluate all the caveats associated with
usage billing, and how they might impact your total cost of ownership (TCO) with hidden
and unplanned costs: in an emergency, the use of on-demand resources might incur higher
costs than you originally planned.
Cloud to DR cloud considerations
Here are some considerations on top of what has been listed for on-premises to DR cloud,
when planning to use a cloud service provider to implement a DR solution for a production
running on cloud:
Particular attention has to be paid to investigating the different methodologies, tools, and
interfaces of the two cloud providers. In fact, they might strongly impact the portability of
workloads among different providers and could create a true lock-in. In the following sections,
we cover some of the technical domains to which particular attention should be paid.
Monitoring
Often, native tools have been used to implement the monitoring of the cloud environment.
Hence, moving a workload to a different CSP may involve changing monitoring frameworks,
which may involve the installation of different clients on the host systems, or integration back
to on-premises notification systems or event correlation. However, the native monitoring tools
are often too limited to fully address enterprise-level monitoring requirements. For example,
they work quite well to monitor the cloud resources, but may be lacking in the areas of
middleware or databases deployed on the VMs. So, we suggest adopting additional tools that
augment the native monitoring tools. In our case, we have introduced a cloud monitoring tool
that natively integrates with the CSP's monitoring via APIs. There are also other tools with
similar capabilities. The introduction of such a tool reduces CSP dependency because the
tool can natively integrate with multiple cloud monitoring frameworks and facilitate DR
processes.
APIs
Any scripts or applications that interchange data with the CSP native APIs are impacted in
the case of a relocation of the workloads, and this has to be taken into account.
Network
Networking is one of the elements that varies the most between CSPs. Key features that
have to be taken into account include the possibility of bringing your original IP addresses
when moving workloads to a cloud. Other functionalities to look for are the ability to
interconnect different subscription environments and to interconnect virtual networks
between different cloud sites.
Provisioning
The interface (graphical or APIs) to provision resources on a cloud varies from CSP to CSP.
We suggest introducing a brokerage tool (such as the IBM Cloud Brokerage) to consistently
manage resources provisioning and cost management across CSPs at the DR site.
Backup
Generally, CSPs provide a mechanism for performing basic backups. To implement more
complete and sophisticated backup functionality, consider implementing an additional product
or framework. One example is Azure IaaS VM backup. On AWS, tools are available in the
marketplace. It is strongly suggested to evaluate the trade-off between using a CSP-native
backup capability and an independent backup software product that can be used on any CSP.
In fact, using the native tools may generate a lock-in, or at least make it difficult to manage
your workload on a different provider. This can be overcome by using a tool, such as IBM
Spectrum Protect, installed in the cloud and storing backup data on the CSP storage resources.
When implementing DR cloud to cloud on the same CSP, the solution is of course simplified,
because the same technology is leveraged in both the primary and the secondary site.
However, these considerations remain valid in a broader context to avoid lock-ins.
Using the same provider for DR is the easiest choice; however, it might not be possible in
all cases. Reasons include, for example, the availability of a secondary site in a given region
and the SLAs provided by the CSP to support the DR.
Reprovisioning operating system and subsystem (for example database management
system) licenses might also impact your recovery time objective and constitute a substantial
cost.
Common pitfalls
Based on our experience, here are some common pitfalls when implementing a DR solution
in the cloud:
Designing the solution to use low-cost or low-performance components in the DR site
(such as storage or servers). While these systems may be able to prove the concept of a
disaster recovery, they will likely not be able to run your full-volume production workload for
an extended period of time in case of an emergency.
Reusing your decommissioned production components (such as storage or servers) in the
DR solution after a technology change. If you have changed your production technology, it
was probably because these old components were no longer a good fit for your
requirements.
Single-points-of-failure
Single-points-of-failure (SPOF) can be anywhere in a solution, not only on the technology
side. Your solution might depend on people, vendors, providers, and other external
dependencies. It is essential that you clearly identify most of your SPOFs in advance, or at
least have a plan to mitigate your dependencies. Also, be prepared to discover SPOFs during
the first sessions of your DR test.
Among SPOFs, provider risk is a condition to consider in your DR plan. When you have both
production and DR on the same provider, your risk is increased and must be carefully
considered.
Having a Plan B with a different recovery time objective might help you to mitigate additional
risks without adding too much cost.
Your Plan B recovery time objective might not be the one expected, meaning you will need
more time to restart your operations, and thus you will suffer greater business impact from the
emergency, but restarting later is far better than not restarting at all.
A possible Plan B in a two-site topology might be to keep a periodic third-site backup of the
data, located at a distance sufficient to avoid the risk of being affected by the same
events that affect the primary and principal DR sites.
Testing is essential to guarantee that you have a valid solution, but the validity of the
solution is dependent on establishing the correct conditions under which you are testing.
Invalid conditions will invalidate the solution no matter how rigorously you have tested it.
If you plan to perform a DR test, by doing an orderly shutdown of operations at the primary
site and an orderly start at the DR site, you can be sure that your DR site environment is valid
and capable of supporting a workload that you are able to dynamically relocate. But is this
what will happen during an emergency? You must build in processes and budget time into
your RTO to account for resolving inconsistent application states and reconciling
interdependent applications.
You should design your tests to mimic the possible emergency conditions as closely as
possible, by simulating a so-called rolling disaster, where your IT service is impacted
progressively by the emergency. This is the best way to test your solution and to gain a
reasonable understanding of whether it is resilient (able to withstand stress conditions).
Networking aspects
In this section, we briefly illustrate key aspects in network design that might impact the DR
solution.
Five networks
In a DR solution we can basically classify the networks in five types:
Management and monitoring
Replication
DR test
Failover (or emergency)
Fallback
In the cloud, the concept of these networks continues to be valid, but they may need to be
implemented differently depending on the CSP-specific network services and policies.
You can normally create network tunnels within the replication network to sustain these
functions.
Replication
Replication is the connectivity required to duplicate data from the primary to the DR site. It is
mainly determined by the synchronization method:
Online full synchronization
In online synchronization, a full copy of the entire volume (including empty space) is
transmitted over the network to the DR site, together with delta updates that are captured
and sent. Online synchronization might not be a one-shot operation. Sometimes you need
to periodically perform a full refresh of the replicated data, or to perform a bulk transfer of
a good portion of your source data and iteratively replicate the deltas until they are
sufficiently small.
Offline synchronization
In offline synchronization, the larger part of the data is made available at the DR site
through alternative means (such as disk images), and thus only the delta updates that are
captured and sent traverse the network. This allows you to reduce the network requirement
for replication, but you should carefully and methodically determine the frequency of the
periodic refreshes, as this might impact your RTO/RPO.
The safest way to size your bandwidth requirement is to size for online synchronization,
and not just for daily updates.
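As a rough illustration of why you should size for online synchronization rather than only for daily deltas, the following Python sketch compares the two. The volume size, daily change rate, and link efficiency factor are hypothetical placeholders; replace them with your own figures.

"""Rough replication-bandwidth sizing sketch (hypothetical numbers)."""

def required_mbps(data_gb: float, window_hours: float, efficiency: float = 0.7) -> float:
    """Bandwidth (Mbit/s) needed to move data_gb within window_hours.

    efficiency approximates protocol overhead and achievable link utilization;
    0.7 is an assumption, not a measured value.
    """
    megabits = data_gb * 8 * 1000  # GB -> megabits (decimal units for simplicity)
    return megabits / (window_hours * 3600) / efficiency

if __name__ == "__main__":
    total_gb = 20_000        # 20 TB of source volumes (placeholder)
    daily_change_gb = 500    # daily delta (placeholder)

    full_sync = required_mbps(total_gb, window_hours=72)   # finish full copy in 3 days
    deltas = required_mbps(daily_change_gb, window_hours=24)

    print(f"Full synchronization in 72 h : ~{full_sync:,.0f} Mbit/s")
    print(f"Daily deltas only            : ~{deltas:,.0f} Mbit/s")

With these placeholder figures, the link needed to complete the initial full copy within the window is an order of magnitude larger than the link needed for daily deltas alone, which is exactly why sizing only for daily updates is risky.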
In the cloud, the replication network is generally provided by the CSP. For simplicity let us
assume that both the production and DR sites are in the same CSP’s cloud. In such a case
the replication network is part of the site to site connectivity that the CSP makes available to
its clients. It is important to verify the costs (if any) and performance of such a connectivity
while planning for a DR solution.
DR test
As we have seen previously, two types of DR testing exist, and their network requirements
are different:
DR simulation: Run while the production operations continue on-premises. DR test users
access the DR simulation environment by pointing to the servers reprovisioned at the
DR site. If the original IP addresses of the recovered servers are needed, they have to be
NATed to different IP addresses to avoid duplicate IP addresses in production or,
even worse, real transactions being executed toward the DR servers instead of the real
production servers.
Another option could be to perform policy-based routing based on the source address of
the test users.
Switch-over: Run by moving the production to the DR site. From a networking perspective,
this is equivalent to a real emergency: the entire production network has to be
routed to the DR site, while the replication network flow is reversed (from the DR site to
on-premises) to bring updated data back to the on-premises site, which now works as the
DR site.
In the cloud, the test network is provisioned as part of the DR cloud resources and is often
subject to limitations as mentioned before. If the DR is implemented on the same CSP, it
would be easier to configure a network to simulate or switch the production environment.
Failover or emergency
The entire production network has to be routed to the DR site, while the replication network
flow is reversed (from the DR site to on-premises) to bring updated data back to the
on-premises site.
During the emergency, the network functions (routing and security) are also transferred from
on-premises to the DR site; if the primary site runs on physical on-premises equipment, these
functions might need to be virtualized to adapt to the target CSP requirements. A pre-check
and maintenance of these physical-to-virtual network functions is a key success factor during
the emergency and has the same importance as the data replication or server reprovisioning
technique.
Apply all the considerations for cloud to DR cloud to this failover (or emergency) network as
well.
Fallback
Fallback presents the same challenges as the switch-over test seen before: production keeps
running at the DR site, while updates are intercepted and sent back to the fallback site.
The quantity of data that flows back to the fallback site depends on the type of emergency
that occurred.
For short-term emergencies, where the original site is unavailable for a period of time but
the servers and storage are left intact, a delta resynchronization might be sufficient to bring
the operation back on-premises.
Other emergencies that have forced a change to the servers or storage at the original
on-premises site might require a full resynchronization of data, so the fallback happens in an
orderly fashion at the most convenient time after the synchronization point has been achieved.
This requires the extension of the production site network(s) to the DR site. That can happen
at Layer 2 (for example, L2TPv3 and Cisco Overlay Transport Virtualization) or at Layer 3
(for example, Cisco Locator/ID Separation Protocol (LISP), Virtual Private LAN Services
(VPLS), or software-defined network techniques). Some options require full control of the
landing DR hypervisor, such as the Virtual Extensible LAN (VXLAN) overlays included in
solutions like VMware NSX (Network Virtualization and Security Platform).
From a network perspective, this extension requires additional planning because of possible
impacts on security and performance, not least because the servers that were previously in
the local LAN are now reached over a longer distance (WAN).
This impact must be evaluated in two directions: from a server to user perspective and from a
server to server perspective, especially for latency-sensitive applications or services.
User-to-DR server
Consider the scenario where only one server (server A) has failed over and is currently
running at the DR data center, in a partial failover situation.
The user-to-server A session will reach the server A DR-VLAN through the existing customer
routing and security running on-premises. From the customer-managed routing perspective,
server A is still on the source VLAN; that is, the fact that the source VLAN is extended into
the DR data center is transparent to the customer-managed router.
So, the customer-managed router will place an Address Resolution Protocol (ARP) frame for
server A on the source VLAN. This broadcast frame will be seen and picked up by the
network extension device or function at the customer location, encapsulated in the L2TPv3
tunnel, and sent to the network extension device or function at the DR site. The frame is then
placed on the DR-VLAN where server A is running.
Note: If the customer managed router has a static ARP table for server A, this will not work
and the static entry needs to be replaced.
While this is applicable to both internet and directly connected customers, the network design
must consider the additional latency due to the processing of the L2TPv3 tunnel, and the
distance.
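To put the distance component into perspective, the following Python sketch estimates only the propagation delay added by fiber distance (roughly 5 microseconds per kilometer one way); L2TPv3 tunnel processing, routing hops, and queuing are not included and must be measured on the real path. The distances used are placeholders.

"""Back-of-the-envelope estimate of the distance component of added latency."""

def fiber_rtt_ms(distance_km: float) -> float:
    # Light in fiber travels at roughly 200,000 km/s (~5 microseconds per km one way).
    one_way_ms = distance_km / 200_000 * 1000
    return 2 * one_way_ms

if __name__ == "__main__":
    for km in (100, 500, 1500):
        print(f"{km:>5} km between primary and DR site: ~{fiber_rtt_ms(km):.1f} ms extra RTT")

Even this simplified estimate shows that every 100 km of separation adds roughly 1 ms of round-trip time before any tunnel or internet variability is taken into account.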
Whereas the additional latency may be predictable within a range for directly connected
clients with network latency SLAs (the overall latency does not come only from the link
latency), internet-connected customers might experience a different situation due to the
unpredictable latency of the public internet.
The network design must also consider possible maximum transmission unit (MTU)
adjustments and additional packet-loss situations against the requirements of the
applications running at the production site.
Server-to-DR servers
Consider the scenario where two or more servers have been failed over and are currently
running at the DR data center in a partial failover scenario (server A on DR-VLAN1 and
server B on DR-VLAN2), while the rest of the servers keep on running on-premises.
In this case, the main concepts are similar to those in the previous example, because the
customer routing and security services are still running on-premises. Each frame has
additional latency to be considered, as all the routing and security functions are
performed on-premises.
The more servers that are in partial failover (F/O) mode, the greater the impact of latency on
the customer routing and security functions.
Network virtualization
Network virtualization (NV) is the ability to create logical, virtual networks that are decoupled
from the network hardware. This ensures that the network can better support virtual
environments.
NV abstracts networking connectivity and services that have traditionally been delivered via
hardware into a logical virtual network that is decoupled from and runs independently on top
of a physical network in a hypervisor.
NV can deliver a virtual network within a virtualized infrastructure that is separate from other
network resources.
Some NV functions require the full control of the landing DR hypervisor, so techniques like
VXLAN (or Network Overlays) may not be applicable or be subject to restrictions.
The virtualization of those functions could be the only way to exploit cloud service providers,
where it is not possible to recreate them using traditional appliances or specialized
hardware.
Another challenge might be on the performance side, as the network function virtualization
runs on standardized compute resources and not on specialized hardware like an
on-premises appliance.
You might want to exploit network virtualization to bring your own IP addresses (BYOIP), that
is, to maintain your on-premises IP addresses in the DR site. When doing so, consider that
pros and cons exist.
Pros
BYOIP includes the following advantages:
System administrators are more comfortable operating in an environment that mimics
their production site
Hard-coded IP addresses in systems work as they do on-premises
Domain Name Servers do not require re-convergence
Cons
BYOIP includes the following disadvantages:
Presents more challenges in network extensions
Presents more challenges when handling Partial Fail-Over conditions
Related publications
The publications that are listed in this section are considered particularly suitable for a more
detailed discussion of the topics that are covered in this paper.
IBM Redbooks
The following IBM Redbooks publications provide more information about the topic in this
document. Note that some publications that are referenced in this list might be available in
softcopy only:
Implementation guide for IBM Spectrum Virtualize for Public Cloud on AWS, REDP-5534
Implementing the IBM Storwize V7000 with IBM Spectrum Virtualize V8.1, SG24-7938-06
IBM System Storage SAN Volume Controller and Storwize V7000 Best Practices and
Performance Guidelines, SG24-7521
Implementing the IBM System Storage SAN Volume Controller with IBM Spectrum
Virtualize V8.1, SG24-7933
You can search for, view, download, or order these documents and other Redbooks,
Redpapers, Web Docs, draft, and additional materials, at the following website:
ibm.com/redbooks
Online resources
The following websites are also relevant as further information sources:
Solution for integrating the FlashCopy feature for point in time copies and quick recovery
of applications and databases:
http://www.ibm.com/support/docview.wss?uid=ssg1S4000935
Information about the total storage capacity that is manageable per system regarding the
selection of extents:
https://www.ibm.com/support/docview.wss?uid=ssg1S1010644
Information about the maximum configurations that apply to the system, I/O group, and
nodes:
https://www.ibm.com/support/docview.wss?uid=ssg1S1010644
IBM Systems Journal, Vol. 42, No. 2, 2003:
http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5386853
IBM Storage Advisor Tool:
https://www.ibm.com/us-en/marketplace/data-protection-and-recovery
Back cover
REDP-5466-01
ISBN 0738457809
Printed in U.S.A.
ibm.com/redbooks