Dell SRDF Intro 10
Introduction
October 2024
Rev. 06
Notes, cautions, and warnings
NOTE: A NOTE indicates important information that helps you make better use of your product.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid
the problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
© 2019 - 2024 Dell Inc. or its subsidiaries. All rights reserved. Dell Technologies, Dell, and other trademarks are trademarks of Dell Inc. or its
subsidiaries. Other trademarks may be trademarks of their respective owners.
Contents
Figures..........................................................................................................................................6
Tables........................................................................................................................................... 7
Preface.........................................................................................................................................................................................8
Chapter 1: Introduction................................................................................................................. 9
Overview.............................................................................................................................................................................. 10
SRDF supported features................................................................................................................................................ 10
What is SRDF?....................................................................................................................................................................12
Disaster recovery......................................................................................................................................................... 12
High availability............................................................................................................................................................. 13
Data migration...............................................................................................................................................................14
SRDF concepts...................................................................................................................................................................15
SRDF device pairs........................................................................................................................................................ 15
Invalid tracks in SRDF pairs....................................................................................................................................... 17
SRDF groups................................................................................................................................................................. 18
Dynamic device operations....................................................................... 18
SRDF modes of operation.......................................................................................................................................... 19
SRDF device states.....................................................................................................................................................22
SRDF consistency....................................................................................................................................................... 26
Nodes, links, and ports...............................................................................................................................................26
Chapter 3: High availability......................................................................................................... 48
SRDF/Metro ......................................................................................................................................................................48
SRDF/Metro life cycle...................................................................................................................................................... 51
Initial provisioning and configuration setup........................................................................................................... 51
Add and remove devices........................................................................................................................................... 52
Remove the SRDF/Metro configuration............................................................................................................... 52
SRDF/Metro configuration changes............................................................................................................................ 53
Add devices...................................................................................................................................................................53
Remove devices...........................................................................................................................................................54
SRDF/Metro resilience.................................................................................................................................................... 54
Witness.......................................................................................................................................................................... 55
Device Bias....................................................................................................................................................................60
Preventing data loss....................................................................................................................................................61
Specifying the resiliency method............................................................................................................................. 61
Mobility ID with ALUA.......................................................................................................................................................61
Disaster recovery facilities...............................................................................................................................................61
Highly available disaster recovery (SRDF/Metro Smart DR)............................................................................61
Independent disaster recovery.................................................................................................................................65
Deactivate SRDF/Metro................................................................................................................................................. 66
SRDF/Metro restrictions.................................................................................................................................................67
Write folding................................................................................................................................................................. 84
Write pacing ................................................................................................................................................................ 84
Preface
As part of an effort to improve its product lines, Dell Technologies periodically releases revisions of its software and hardware.
Therefore, some functions that are described in this document might not be supported by all versions of the software or
hardware currently in use. The product release notes provide the most up-to-date information about product features.
Contact your Dell Technologies technical support professional if a product does not function properly or does not function as
described in this document.
NOTE: This document was accurate at publication time. Go to Dell Technologies Online Support (https://www.dell.com/support/home) to ensure that you are using the latest version of this document.
Purpose
This document provides an introduction to the Symmetrix Remote Data Facility (SRDF) and its uses in disaster recovery, high
availability, and data migration applications.
Audience
This document is intended for Dell Technologies customers who want an overview of SRDF and its applications.
Related documentation
Information on the storage arrays that SRDF runs on is in the following publications:
● Dell PowerMax Family Product Guide
● Dell VMAX All Flash Product Guide for VMAX 250F, 450F, 850F, 950F with HYPERMAX OS
● VMAX3 Family Product Guide for VMAX 100K, 200K, 400K with HYPERMAX OS
Your comments
Your suggestions help improve the accuracy, organization, and overall quality of the documentation. Send your comments and
feedback to: [email protected]
Chapter 1: Introduction
This chapter introduces SRDF, lists its uses, and defines SRDF's concepts.
Topics:
• Overview
• SRDF supported features
• What is SRDF?
• SRDF concepts
Overview
SRDF can operate between different operating environments and arrays. Arrays running PowerMaxOS or HYPERMAX OS can
connect to arrays running older operating environments. In mixed configurations where arrays are running different versions,
SRDF features of the lowest version are supported.
PowerMax, VMAX All Flash, and VMAX3 arrays can connect to:
● PowerMax arrays running PowerMaxOS
● VMAX 250F, 450F, 850F, and 950F arrays running HYPERMAX OS
● VMAX 100K, 200K, and 400K arrays running HYPERMAX OS
● VMAX 10K, 20K, and 40K arrays running Enginuity 5876 with an Enginuity e-Pack
Interfamily connectivity allows you to add the latest hardware platform or operating environment to an existing SRDF solution,
enabling a technology refresh.
NOTE: When you connect between arrays running different operating environments, limitations may apply. Information
about which SRDF features are supported, and applicable limitations for 2-site and 3-site solutions is available in the SRDF
and NDM Interfamily Connectivity Information document.
Table 1. SRDF features by hardware platform/operating environment (continued)
Feature | Enginuity 5876 (VMAX 40K, VMAX 20K) | Enginuity 5876 (VMAX 10K) | HYPERMAX OS 5977 (VMAX3, VMAX 250F, 450F, 850F, 950F) | PowerMaxOS 5978 (PowerMax 2000, PowerMax 8000) | PowerMaxOS 10 (6079) (PowerMax 2500, PowerMax 8500)
Fibre Channel Single Round Trip (SiRT) | Enabled | Enabled | Enabled | Enabled | Enabled
GigE SRDF compression, software | Supported (VMAX 20K; VMAX 40K requires Enginuity 5876.82.57 or higher) | Supported | Supported | Supported | Supported
GigE SRDF compression, hardware | Supported (VMAX 20K; VMAX 40K requires Enginuity 5876.82.57 or higher) | N/A | Supported | Supported | Supported
Fibre Channel SRDF compression, software | Supported (VMAX 20K; VMAX 40K requires Enginuity 5876.82.57 or higher) | Supported | Supported | Supported | Supported
Fibre Channel SRDF compression, hardware | Supported (VMAX 20K; VMAX 40K requires Enginuity 5876.82.57 or higher) | N/A | Supported | Supported | Supported
IPv6 feature on 10 GbE | Supported | Supported | Supported | Supported | Supported
IPsec encryption on 1 GbE ports | Supported | Supported | N/A | N/A | N/A
Notes:
a. If both arrays are running HYPERMAX OS 5977 or PowerMaxOS 5978, up to 250 SRDF groups can be defined across all the ports on a specific SRDF director, or up to 250 SRDF groups can be defined on one port on a specific SRDF director.
b. A port on the array running HYPERMAX OS 5977 or PowerMaxOS 5978 connected to an array running Enginuity 5876 supports a maximum of 64 SRDF groups. The director on the HYPERMAX OS or PowerMaxOS side that is associated with that port supports a maximum of 186 (250 – 64) SRDF groups.
c. If both arrays are running PowerMaxOS 10 (6079), up to 2K SRDF groups can be defined across all the ports on a specific SRDF director, or up to 2K SRDF groups can be defined on one port on a specific SRDF director.
d. A port on an array running PowerMaxOS 10 (6079) connected to an array running HYPERMAX OS 5977 or PowerMaxOS 5978 supports a maximum of 250 SRDF groups. The director that is associated with that port supports a maximum of 1798 (2048 – 250) SRDF groups.
e. 32 Gb Fibre Channel port support was added with PowerMaxOS 5978.479.479 for PowerMax 8000 and PowerMax 2000 systems and can negotiate down two levels: a 32 Gb/s port can run at 16 Gb/s or 8 Gb/s, and a 16 Gb/s port can run at 8 Gb/s or 4 Gb/s.
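The arithmetic behind these group limits can be made concrete with a short illustration. The following Python sketch is not part of any Dell tool; the function name is invented, and the numbers simply restate notes b and d above.

```python
# Illustrative arithmetic for the SRDF group limits in the notes above; the
# helper function name is invented for this sketch.

def remaining_groups(director_limit: int, reserved_for_port: int) -> int:
    """Groups left for the rest of an SRDF director when one of its ports is
    capped at a lower per-port limit by the remote array's environment."""
    return director_limit - reserved_for_port

# A HYPERMAX OS 5977 / PowerMaxOS 5978 director (250 groups) with one port
# connected to Enginuity 5876 (at most 64 groups on that port):
print(remaining_groups(250, 64))    # 186, as in note b above

# A PowerMaxOS 10 (6079) director (2048 groups) with one port connected to an
# array running HYPERMAX OS 5977 or PowerMaxOS 5978 (at most 250 groups):
print(remaining_groups(2048, 250))  # 1798, as in note d above
```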
What is SRDF?
The Symmetrix Remote Data Facility (SRDF) maintains real-time (or near real-time) copies of data on a production storage array
at one or more remote storage arrays. SRDF has three primary applications:
● Disaster recovery
● High availability
● Data migration
This is an introduction to SRDF, its uses, configurations, and terminology. The rest of this chapter provides a summary of the
applications for SRDF (SRDF device pairs explains the device naming conventions that are used in the diagrams).
Disaster recovery
In disaster recovery, SRDF maintains a real-time copy of the data of one or more devices on a storage array at one or more
additional arrays. This provides the means to restart host applications should the main array become unavailable for any reason.
Typically, each array is on a separate site from all the others in the SRDF configuration.
For example, this diagram shows a configuration where data is replicated to one additional array:
[Diagram: an open systems production host writes to the R1 device (Read/Write); R1 data copies over the SRDF links to the R2 device (Read Only), which an optional remote host can access.]
The next example shows data being replicated to two additional arrays simultaneously. This configuration improves redundancy
and data security:
[Diagram: the R11 source device at Site A replicates concurrently to R2 target devices at Site B and Site C.]
Disaster recovery describes SRDF's disaster recovery facilities, and system configurations in more detail.
High availability
In other SRDF configurations, devices on the primary array are Read/Write accessible to the application host while devices on
the additional arrays are Read Only (Write Disabled). However, in an SRDF high availability configuration:
● Devices on the additional array are Read/Write accessible to the application host.
● The application host can write to both sides of the device pair.
● The devices on the additional array assume the same external identity (such as geometry and device identifier) as the
devices on the primary array.
This shared identity means that the devices appear to the application host as a single, virtual device across the two arrays.
Using two devices improves the availability of the data that they contain. One device can become unavailable without impacting
the host application, as the second device continues to operate.
Such a configuration, which is known as SRDF/Metro, can be deployed in a single, multipathed host environment, or in a
clustered environment as this diagram shows:
[Diagram: in both a multipathed host environment and a clustered environment, the hosts have Read/Write access to both sides of the SRDF/Metro device pair.]
High availability describes SRDF high availability and its system configurations in more detail.
Open systems (FBA) only
SRDF/Metro is available in open systems (FBA) and IBM i D910 environments only. The mainframe environment has its own
high availability configuration called AutoSwap. The publications in the mainframe and GDDR sections of More information
contain details of AutoSwap and its capabilities.
Data migration
The data migration capabilities of SRDF enable devices on either side of a two-array configuration to be replaced with new
devices without interrupting disaster recovery operations. The configuration is enhanced with a third array that contains the
new devices. Data is replicated to the third array in addition to the normal operation. When the migration is complete, the
devices being replaced can be taken out of the configuration leaving one of the original arrays and the new one.
For example, this diagram shows the replacement of R2 devices with new devices using SRDF migration facilities:
● The initial two-array topology
● The interim three-array topology
● The final two-array topology
[Diagram: in the initial topology, R1 at Array A is paired with R2 at Array B; in the interim topology, Array A operates as an R11 device replicating to Array B and to the new R2 devices at Array C; in the final topology, R1 at Array A is paired with R2 at Array C.]
Data migration describes the SRDF migration capabilities and system configurations in more detail.
SRDF concepts
SRDF device pairs
An SRDF device pair is a logical device that is paired with another logical device that resides in a second array. The arrays are
connected by SRDF links.
Encapsulated Data Domain devices that are used for Storage Direct cannot be part of an SRDF device pair.
R1 and R2 devices
An R1 device is the member of the device pair at the source (production) site. R1 devices are generally Read/Write accessible to
the application host.
An R2 device is the member of the device pair at the target (remote) site. During normal operations, host I/O writes to the R1
device are mirrored over the SRDF links to the R2 device. In general, data on R2 devices is not available to the application host
while the SRDF relationship is active. In SRDF synchronous mode, however, an R2 device can be in Read Only mode that allows
a host to read from the R2.
In a typical environment:
● The application production host has Read/Write access to the R1 device.
● An application host connected to the R2 device has Read Only (Write Disabled) access to the R2 device.
[Diagram: an open systems production host writes to the R1 device (Read/Write); R1 data copies over the SRDF links to the R2 device (Read Only), which an optional remote host can access.]
R11 devices
R11 devices operate as the R1 device for two R2 devices. Links to both R2 devices are active.
R11 devices typically occur in three-site concurrent configurations where data on the R11 site is mirrored to two secondary (R2)
arrays:
[Diagram: the R11 device at the Site A source array replicates concurrently to R2 target devices at Site B and Site C.]
R21 devices
R21 devices have a dual role and are used in cascaded three-site configurations where:
● Data on the R1 site is synchronously mirrored to a secondary (R21) site, and then
● Asynchronously mirrored from the secondary (R21) site to a tertiary (R2) site:
[Diagram: the production host writes to R1, which replicates over SRDF links to R21, which in turn replicates to R2.]
The R21 device acts as an R2 device that receives updates from the R1 device, and as an R1 device that sends updates to the R2
device.
When the R1->R21->R2 SRDF relationship is established, no host has write access to the R21 device.
In arrays that run Enginuity 5876 and earlier, the R21 device can be diskless. That is, it consists solely of cache memory and does
not have any associated storage device. It acts purely to relay changes in the R1 device to the R2 device. This capability requires
the use of thick devices. Systems that run PowerMaxOS or HYPERMAX OS contain thin devices only, so setting up a diskless
R21 device is not possible on arrays running those environments.
R22 devices
R22 devices:
● Have two R1 devices, only one of which is active at a time.
● Are typically used in cascaded SRDF/Star and concurrent SRDF/Star configurations to decrease the complexity and the
time that is required to complete failover and failback operations.
● Enable recovery to occur without removing old SRDF pairs and creating new SRDF pairs.
Example: R1 is unavailable
Here, the R1 device has become unavailable for some reason. To maintain service to the application host, processing is moved to
the R2 device. That is, the R2 device is made write accessible to the application host, and it receives I/O from that host. While
this situation exists, invalid tracks accumulate at the R2 array.
Once the R1 device is available again, the array containing the R2 device sends the invalid tracks to the R1 device. Once the two
devices are fully synchronized, processing returns to the R1 device and the R2 device is made write protected to the application
host.
NOTE: Depending on the mode of SRDF, user action may be required to make the R2 device RW available.
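The bookkeeping in this example can be pictured with a small sketch. The Python below is purely conceptual; the class and method names are invented for illustration and do not correspond to any SRDF interface. It mimics how writes accepted at the R2 side during an R1 outage accumulate as invalid tracks that are later sent back to the R1 device.

```python
# Conceptual sketch of invalid-track bookkeeping during an R1 outage.
# Names and structure are illustrative; real SRDF tracks this in array metadata.

class DevicePair:
    def __init__(self) -> None:
        self.r1_available = True
        self.invalid_tracks_at_r2 = set()   # tracks owed to the R1 device

    def write_at_r2(self, track: int) -> None:
        """Host write accepted at the R2 side while processing runs there."""
        if self.r1_available:
            raise RuntimeError("R2 is write enabled only while the R1 is unavailable")
        self.invalid_tracks_at_r2.add(track)

    def r1_restored(self) -> list[int]:
        """R1 is back: the R2 array sends the owed (invalid) tracks to R1."""
        owed = sorted(self.invalid_tracks_at_r2)
        self.invalid_tracks_at_r2.clear()
        self.r1_available = True
        return owed

pair = DevicePair()
pair.r1_available = False           # R1 becomes unavailable; host I/O moves to R2
for track in (7, 12, 12, 40):       # writes at R2 accumulate invalid tracks
    pair.write_at_r2(track)
print(pair.r1_restored())           # [7, 12, 40] -- tracks resynchronized to R1
```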
SRDF groups
An SRDF group defines the logical relationship between SRDF devices and directors on both sides of an SRDF pair.
Group properties
The properties of an SRDF group are:
● Label (name): up to 10 alphanumeric characters, including underscore (_); no other special characters are allowed
● A set of ports on the local array that is used to communicate over the SRDF links
● A set of ports on the remote array that is used to communicate over the SRDF links
● Local group number
● Remote group number
● One or more pairs of devices
The devices in the group share the ports and associated CPU resources of the port's directors.
Advanced properties of an SRDF group include:
● Link Limbo mode—The amount of time that the array's operating environment waits after the SRDF link goes down before
updating the link's status.
● Link Domino mode—Specifies whether to force SRDF devices into the Not Ready state to the application host if, for
example, host I/Os cannot be delivered across the SRDF link.
● Autolink recovery—Specifies whether SRDF automatically restores the SRDF links when they become available again after
an earlier failure.
● Compression—Specifies whether to use compression when sending data over the SRDF links. Both hardware and software
compression are available and can be used independently or together.
NOTE: Using hardware and software compression together is not recommended. The recommended order of preference is:
1. Compression at the network switches, if available.
2. Hardware compression on the PowerMax where hardware I/O modules are present and network compression cannot be used.
3. Software compression on the PowerMax where hardware I/O modules are not present and network compression cannot be used.
Types of group
There are two types of SRDF group:
● Static
● Dynamic
Static groups are defined in the local array's configuration file. Dynamic groups are defined using SRDF management tools and
their properties are stored in the array's cache memory.
On arrays running PowerMaxOS or HYPERMAX OS all SRDF groups are dynamic.
Group membership
An SRDF device is a member of as many SRDF groups as there are mirrors of that device. So, in a simple, 2-site configuration
(see R1 and R2 devices ) that consists of R1 and R2 devices, each device is a member of one group. In a concurrent SRDF
configuration (see R11 device in concurrent SRDF), the R11 device is a member of two groups, one for each R2 mirror. The R2
devices are each in a single group.
● Create an R1/R2 pair relationship from non-SRDF devices.
● Terminate and establish an SRDF relationship with a new R2 device.
● Swap personalities between R1 and R2 devices.
● Move R1/R2 pairs between SRDF groups.
Synchronous mode
SRDF/Synchronous (SRDF/S) maintains a real-time mirror image of data between the R1 and R2 devices. The recommended
distance between the devices is 200 km (125 miles) or less because application latency may rise to unacceptable levels at longer
distances.
Host writes are written simultaneously to both arrays in real time before the application I/O completes. Acknowledgments are
not sent to the host until the data is stored in cache on both arrays.
Write operations in synchronous mode and SRDF read operations have more information about I/O operations in synchronous
mode.
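A short sketch may help fix the ordering rule in mind. The Python below is illustrative only (the function name is invented); it shows the essential SRDF/S guarantee that the host acknowledgment is returned only after the write is held in cache on both arrays.

```python
# Illustrative ordering of an SRDF/S write (not an implementation of SRDF).

def srdf_s_write(local_cache: list, remote_cache: list, data: bytes) -> str:
    """Acknowledge the host only after both arrays hold the write in cache."""
    local_cache.append(data)        # 1. write received into the R1 array's cache
    remote_cache.append(data)       # 2. write sent over the SRDF link and stored
                                    #    in the R2 array's cache (remote acknowledges)
    return "ack to host"            # 3. only now does the application I/O complete

local, remote = [], []
print(srdf_s_write(local, remote, b"payload"), len(local), len(remote))
```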
Asynchronous mode
SRDF/Asynchronous (SRDF/A) maintains a dependent-write consistent copy between the R1 and R2 devices across any
distance with no impact to the application.
Host writes are collected for a configurable interval into “delta sets”. Delta sets are transferred to the remote array in timed
cycles.
SRDF/A operations vary depending on whether the SRDF session mode is single or multi-session with Multi Session Consistency
(MSC) enabled:
● For single SRDF/A sessions, cycle switching is controlled by the array's operating environment. Each session is controlled
independently, whether it is in the same or multiple arrays.
● For multiple SRDF/A sessions in MSC mode, multiple SRDF groups are in the same SRDF/A MSC session. Cycle switching is
controlled by SRDF host software to maintain consistency.
SRDF/A MSC cycle switching has more information on I/O operations in asynchronous mode.
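The delta-set mechanism can be sketched in a few lines. This Python model is conceptual only; the names are invented, and real SRDF/A cycle switching involves capture, transmit, receive, and apply delta sets managed by the array or by MSC host software. It shows how writes collected in one cycle are shipped as a unit in the next, and how repeated writes to the same track fold into one entry.

```python
# Conceptual sketch of SRDF/A delta-set cycling (not an implementation).
from collections import deque

class SRDFASession:
    def __init__(self) -> None:
        self.capture: dict[int, bytes] = {}     # writes collected in the current cycle
        self.in_flight: deque[dict] = deque()   # delta sets shipped but not yet applied
        self.applied: dict[int, bytes] = {}     # dependent-write consistent R2 image

    def host_write(self, track: int, data: bytes) -> None:
        # Writes to the same track within a cycle fold into one entry.
        self.capture[track] = data

    def cycle_switch(self) -> None:
        """Close the capture delta set and ship it; apply the oldest delivered set."""
        self.in_flight.append(self.capture)
        self.capture = {}
        if len(self.in_flight) > 1:             # keep at least one set in flight
            self.applied.update(self.in_flight.popleft())

session = SRDFASession()
session.host_write(5, b"v1"); session.host_write(5, b"v2")   # write folding
session.cycle_switch()                                       # delta set in flight
session.cycle_switch()                                       # previous set applied at R2
print(session.applied)                                       # {5: b'v2'}
```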
Active mode
Active mode is used in SRDF/Metro high-availability configurations. When in Active mode the R1 and R2 devices of a
configuration appear to the host as a single, federated device. The host can write to both of the devices and SRDF/Metro
ensures that both sides hold identical data that is complete and up to date.
High availability has more information about SRDF/Metro and high availability.
NOTE: Adaptive copy write-pending mode is not available on arrays running HYPERMAX OS 5977 and above.
Domino modes
Under typical conditions, when one side of a device pair becomes unavailable, new data that is written to the device is marked
for later transfer. When the device or link is restored, the two sides synchronize.
Domino modes force SRDF devices into the Not Ready state to the host if one side of the device pair becomes unavailable.
Domino mode can be enabled/disabled for any:
● Device (domino mode)—If the R1 device cannot successfully mirror data to the R2 device, the next host write to the R1
device causes the device to become Not Ready to the host connected to the primary array.
● SRDF group (link domino mode)—If the last available link in the SRDF group fails, the next host write to any R1 device in the
SRDF group causes all R1 devices in the SRDF group to become Not Ready to their hosts.
Link domino mode is set at the SRDF group level and only impacts devices where the R1 is on the side where it is set.
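The contrast between normal behavior and domino mode can be shown in a few lines. The Python below is illustrative only and its names are invented; with domino off, a failed mirror marks data for later transfer, while with domino on, the next write makes the device Not Ready to the host.

```python
# Illustrative contrast between normal SRDF behavior and device domino mode.

def handle_write(mirror_ok: bool, domino: bool, owed_tracks: set, track: int) -> str:
    """Return the result presented to the host for a write to the R1 device."""
    if mirror_ok:
        return "write mirrored to R2"
    if domino:
        return "device Not Ready to host"   # domino: fail fast instead of diverging
    owed_tracks.add(track)                   # normal: mark the data for later transfer
    return "write accepted locally, owed to R2"

owed: set[int] = set()
print(handle_write(mirror_ok=False, domino=False, owed_tracks=owed, track=3))
print(handle_write(mirror_ok=False, domino=True, owed_tracks=owed, track=4))
```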
In Enginuity 5876, the track size of an FBA device is 64 KB, while in PowerMaxOS 5978 and HYPERMAX OS 5977 the track size
is 128 KB. So an array running PowerMaxOS or HYPERMAX OS cannot create a device that is the same size as a device that
has an odd number of cylinders on an array running Enginuity in a mixed SRDF configuration. However, SRDF requires that the
devices in a device pair are the same size.
PowerMaxOS 5978 and HYPERMAX OS 5977 manage the difference in size automatically using the device attribute Geometry
Compatibility Mode (GCM). A device with GCM set is presented as being half a cylinder smaller than its configured size. This
enables full functionality in a mixed configuration for SRDF, TimeFinder, SnapVX, and TimeFinder emulations (TimeFinder Clone,
TimeFinder VP Snap, and TimeFinder/Mirror) and ORS.
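The size arithmetic that GCM works around can be shown with a short calculation. The sketch below assumes 15 tracks per FBA cylinder (an assumption made only for this illustration) and uses the track sizes given above; it shows why a device with an odd number of Enginuity 5876 cylinders has no exact match on an array with 128 KB tracks, and how presenting the target as half a cylinder smaller restores an exact size match.

```python
# Illustrative GCM size arithmetic. Track sizes come from the text above;
# 15 tracks per cylinder is an assumption for this sketch.
import math

TRACK_5876_KB = 64            # Enginuity 5876 FBA track size
TRACK_5977_KB = 128           # HYPERMAX OS 5977 / PowerMaxOS track size
TRACKS_PER_CYL = 15           # assumed FBA geometry

cyl_5876_kb = TRACKS_PER_CYL * TRACK_5876_KB      # 960 KB per source cylinder
cyl_5977_kb = TRACKS_PER_CYL * TRACK_5977_KB      # 1920 KB per target cylinder

source_cylinders = 1_001                          # odd cylinder count on the 5876 side
source_kb = source_cylinders * cyl_5876_kb

# The newer array must round up to a whole cylinder, overshooting by half a cylinder.
target_cylinders = math.ceil(source_kb / cyl_5977_kb)
overshoot_kb = target_cylinders * cyl_5977_kb - source_kb
print(overshoot_kb == cyl_5977_kb // 2)           # True: half a cylinder too big

# GCM presents the target as half a cylinder smaller than configured,
# so the usable sizes of the device pair match exactly.
gcm_presented_kb = target_cylinders * cyl_5977_kb - cyl_5977_kb // 2
print(gcm_presented_kb == source_kb)              # True
```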
The GCM attribute can be set in two ways:
● Automatically on a target device when it is on an array running PowerMaxOS 10 (6079), PowerMaxOS 5978 or HYPERMAX
OS 5977 and the source device is on an array running Enginuity 5876.
● Manually using the Solutions Enabler CLI, Mainframe Enablers SRDF Host Component, or Unisphere.
Notes:
● An SRDF configuration cannot contain a pair of arrays where one side runs PowerMaxOS 10 (6079) and the other runs
Enginuity 5876. So GCM is not available and is not relevant in such configurations.
● Do not set GCM on devices that are mounted and under the control of a Local Volume Manager (LVM).
● Clear the GCM flag before mapping the device to a host. Otherwise, to clear the attribute, the device must be unmapped
from the host which results in a data outage.
● The GCM setting for a device cannot be changed when the target of the data device is already part of another replication
session.
SRDF device states
The state of an SRDF device is determined by a combination of two views, the host interface view and the SRDF view, as shown in this diagram.
[Diagram: the host interface view shows how the R1 and R2 devices appear to their hosts in an open systems host environment (Read/Write, Read Only (Write Disabled), or Not Ready); the SRDF view covers the R1 and R2 devices and the SRDF links that connect them.]
R1 device states
An R1 device presents one of the following states to a host connected to it:
● Read/Write (Write Enabled)—The R1 device is available for Read/Write operations. This is the default R1 device state.
● Read Only (Write Disabled)—The R1 device responds with Write Protected to all write operations to that device.
● Not Ready—The R1 device responds Not Ready to the host for read and write operations to that device.
R2 device states
An R2 device presents one of the following states to a host connected to it:
● Read Only (Write Disabled)—The R2 device responds Write Protected to the host for all write operations to that device.
● Read/Write (Write Enabled)—The R2 device is available for read/write operations. This state is possible in recovery or
parallel processing operations.
● Not Ready—The R2 device responds Not Ready (Intervention Required) to the host for read and write operations to that
device.
SRDF view
The SRDF view is composed of the SRDF state and the internal SRDF device state. These states indicate whether the device is
available to send data across the SRDF links, and able to receive software commands.
NOTE: See the Solutions Enabler SRDF Family State Tables Guide for more information about SRDF operations and
applicable pair states.
R1 device states
An R1 device can have the following states for SRDF operations:
● Ready—The R1 device is ready for SRDF operations. The R1 device can send data across the SRDF links. This is true even if
local mirrors of the R1 device are Not Ready for I/O operations.
● Not Ready (SRDF mirror Not Ready)—The R1 device is Not Ready for SRDF operations.
NOTE: When the R2 device is placed into a Read/Write state to the host, the corresponding R1 device is automatically
placed into the SRDF mirror Not Ready state.
R2 device states
An R2 device can have the following states for SRDF operations:
● Ready—The R2 device receives the updates that are propagated across the SRDF links and can accept SRDF host-based
software commands.
● Not Ready—The R2 device can receive updates that are propagated from the primary array, but cannot accept SRDF
host-based software commands.
● Link blocked (LnkBlk)—Applicable only to R2 SRDF mirrors that belong to R22 devices.
One of the R2 SRDF mirrors cannot receive writes from its associated R1 device. In normal operations, one of the R2 SRDF
mirrors of the R22 device is in this state.
Table 2. SRDF pair states (continued)
Pair State | Description
Partitioned | The SYMAPI is unable to communicate through the corresponding SRDF path to the remote array. The Partitioned state may apply to devices within an SRDF group. For example, if SYMAPI is unable to communicate with a remote array from an SRDF group, devices in that group are in the Partitioned state. A half pair and a duplicate pair are also reported as Partitioned.
Mixed | A composite SYMAPI device group SRDF pair state. There are different SRDF pair states within a device group.
Invalid | This state is the default when no other SRDF state applies. The combination of the R1 device, the R2 device, and the SRDF link states does not match any other pair state. This state may occur if there is a problem at the disk director level.
Consistent | The R2 SRDF/A capable devices are in a consistent state. The consistent state signifies the normal state of operation for device pairs operating in asynchronous mode.
Transmit Idle | The SRDF/A session cannot send data in the transmit cycle over the link because the link is unavailable.
R1/R2 device accessibility
Accessibility of an SRDF device to the application host depends on both the host and the array view of the SRDF device state.
R1 device accessibility and R2 device accessibility list application host accessibility for R1 and R2 devices.
Table 5. Possible SRDF device and link state combinations (continued)
SRDF pair state | Source (R1) SRDF state | SRDF link state | Target (R2) SRDF state | R1 or R2 invalid tracks
Transmit Idle | Ready (RW) | Ready (RW) | Not Ready or WD | —
SRDF consistency
Many applications (in particular, database management systems) use dependent write logic to ensure data integrity when a
failure occurs. A dependent write is a write operation that the application does not issue unless some prior I/O has been
completed. If the writes are out of order, and an event such as a failure occurs at that exact time, unrecoverable data loss may
occur.
An SRDF consistency group (SRDF/CG) contains SRDF devices with consistency enabled.
An SRDF consistency group preserves the dependent-write consistency of devices within the group. Consistency is maintained
by monitoring data propagation from source devices to their corresponding target devices. If consistency is enabled, and SRDF
detects any write I/O to an R1 device that cannot communicate with its R2 device, SRDF:
1. Suspends remote mirroring for all devices in the consistency group.
2. Completes the intercepted I/O to the R1 device.
3. Returns control to the application.
In this way, SRDF/CG prevents a dependent-write I/O from reaching the secondary site if the previous I/O only gets as far as
the primary site.
SRDF consistency allows quick recovery from certain types of failure or physical disasters by retaining a consistent, DBMS-
restartable copy of the database.
SRDF consistency group protection is available for both SRDF/S and SRDF/A.
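The numbered sequence above lends itself to a small sketch. The Python below is a conceptual illustration only (the class is invented and is not an SRDF interface); the point is that once any member of the group cannot reach its R2 device, no further writes from any member leave the primary site, so the remote image stays dependent-write consistent.

```python
# Conceptual sketch of SRDF/CG behavior (not an SRDF interface).

class ConsistencyGroup:
    def __init__(self, device_names: list[str]) -> None:
        self.devices = device_names
        self.remote_mirroring = {name: True for name in device_names}

    def write(self, device: str, link_ok: bool) -> str:
        """Handle a host write to an R1 device in the group."""
        if not link_ok and self.remote_mirroring[device]:
            # 1. Suspend remote mirroring for ALL devices in the consistency group.
            self.remote_mirroring = {name: False for name in self.devices}
        # 2. Complete the intercepted I/O to the R1 device (locally).
        # 3. Return control to the application.
        state = "mirrored" if self.remote_mirroring[device] else "local only"
        return f"write to {device} completed ({state})"

cg = ConsistencyGroup(["LOG", "DATA"])
print(cg.write("LOG", link_ok=True))    # normal: mirrored to the R2 side
print(cg.write("LOG", link_ok=False))   # LOG cannot reach its R2: whole group suspends
print(cg.write("DATA", link_ok=True))   # dependent write stays at the primary site only
```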
NOTE: Two or more SRDF links per SRDF group are required for redundancy and fault tolerance.
Nodes, links, and ports
The relationship between the resources on a node (CPU cores and ports) varies depending on the operating environment.
PowerMaxOS
On arrays running PowerMaxOS:
● The relationship between the SRDF emulation and resources on a node is configurable:
○ One node/multiple CPU cores/multiple ports
○ Connectivity (ports in the SRDF group) is independent of compute power (number of CPU cores). You can change the
amount of connectivity without changing compute power.
● Each node has up to 16 frontend ports, any or all of which can be used by SRDF. Both the SRDF Gigabit Ethernet and SRDF
Fibre Channel emulations can use any port.
● The data path for devices in an SRDF group is not fixed to a single port. Instead, the path for data is shared across all ports
in the group.
Mixed configurations: PowerMaxOS 10
Arrays running PowerMaxOS and HYPERMAX OS support a single frontend emulation of each type (such as FA and EF) for
each node, but each of these emulations supports a variable number of physical ports. Both the SRDF Gigabit Ethernet (RE) and
SRDF Fibre Channel (RF) emulations can use any port on the node. The relationship between the SRDF emulation and resources
on a node is configurable: One node for 1 or multiple CPU cores for 1 or multiple ports.
Connectivity is not bound to a fixed number of CPU cores. You can change the amount of connectivity without changing CPU
power.
The SRDF emulation supports up to 16 frontend ports per node (four frontend modules per node), any or all of which can be
used by SRDF. Both the SRDF Gigabit Ethernet and SRDF Fibre Channel emulations can use any port.
NOTE: If hardware compression is enabled, the maximum number of ports per node is 12.
For example, when one array in an SRDF configuration is running PowerMaxOS, and one array is running HYPERMAX OS,
specify only the node ID on the array running HYPERMAX OS, and specify both the node ID and port number on the array
running PowerMaxOS.
Mixed configurations: PowerMaxOS 5978 and earlier, or HYPERMAX OS 5977, and Enginuity 5876
For configurations where one array is running Enginuity 5876, and the other array is running PowerMaxOS 5978 or HYPERMAX
OS 5977, these rules apply:
● On the 5876 side, an SRDF group can have the full complement of directors, but no more than 16 ports on the PowerMaxOS
or HYPERMAX OS side.
● You can connect to 16 directors using one port each, 2 directors using eight ports each or any other combination that does
not exceed 16 per SRDF group.
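A small check captures the port rule for this kind of mixed configuration. The sketch below is illustrative (the function and the port names are invented); it simply enforces the documented ceiling of 16 ports per SRDF group on the PowerMaxOS or HYPERMAX OS side.

```python
# Illustrative check of the mixed-configuration port rule (not a Dell tool).

def validate_mixed_group(ports_on_5977_or_5978_side: list[str]) -> None:
    """An SRDF group to Enginuity 5876 may use at most 16 ports on the
    PowerMaxOS / HYPERMAX OS side, in any director/port combination."""
    if len(ports_on_5977_or_5978_side) > 16:
        raise ValueError("no more than 16 ports per SRDF group on this side")

validate_mixed_group([f"RF-{d}E:{p}" for d in (1, 2) for p in range(8)])  # 16 ports: OK
try:
    validate_mixed_group([f"RF-1E:{p}" for p in range(17)])               # 17 ports: rejected
except ValueError as err:
    print(err)
```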
Chapter 2: Disaster recovery
This chapter provides more detail on the disaster recovery configurations of SRDF.
Topics:
• Two-site configurations
• Three-site configurations
• Four-site configurations
• SRDF recovery scenarios
• Replication of VMware vVols
Two-site configurations
Table 6. SRDF two-site configurations
SRDF/Synchronous (SRDF/S): Maintains a real-time copy of production data at a physically separated array.
● No data exposure
● Ensured consistency protection with SRDF/Consistency Group
● Recommended maximum distance of 200 km (125 miles) between arrays as application latency may rise to unacceptable levels at longer distances.
NOTE: In some circumstances, distances greater than 200 km may be feasible. Contact your Dell Technologies representative for more information.
[Topology: the host writes to R1 at the primary array, which replicates synchronously over a limited distance to R2 at the secondary array.]
● SRDF/DM is only for data replication or
migration, not for disaster restart solutions.
SRDF/Cluster Enabler (CE)
● Integrates SRDF/S or SRDF/A with Microsoft Failover Clusters (MSCS) to automate or semi-automate site failover.
● Complete solution for restarting operations in Microsoft Failover Cluster environments.
● Expands the range of cluster storage and management capabilities while ensuring full protection of the SRDF remote replication.
SRDF/CE is not available on arrays that run PowerMaxOS 10 (6079).
[Topology: cluster hosts at Site A and Site B connect to their local arrays through Fibre Channel hubs/switches and to each other over an extended IP subnet (VLAN switches); the arrays are connected by SRDF/S or SRDF/A links.]
SRDF and VMware Site Recovery Manager: Completely automates storage-based disaster restart operations for VMware environments in SRDF topologies.
● The SRDF Adapter enables VMware Site
[Topology: a protection side (Site A, primary) and a recovery side (Site B, secondary), each with a vCenter and SRM server running Solutions Enabler software, communicate over an IP network.]
SRDF/Automated Replication (SRDF/AR)
● Combines SRDF and TimeFinder to optimize bandwidth requirements and provide a long-distance disaster restart solution.
● Operates in two-site solutions that use SRDF/DM with TimeFinder.
SRDF/AR is available only for arrays that run Enginuity 5876 since it uses STD (thick) devices. Arrays that run PowerMaxOS 10 (6079), PowerMaxOS 5978, or HYPERMAX OS 5977 have TDEV (thin) devices only.
[Topology: hosts at Site A and Site B; a TimeFinder copy of R1 at Site A is replicated over SRDF to Site B, where a TimeFinder background copy creates the R2 image.]
Three-site configurations
Table 7. SRDF multisite solutions
Concurrent SRDF: Three-site disaster recovery and advanced multisite business continuity protection.
● Data on the primary site is concurrently replicated to two secondary sites.
● Replication to a remote site can use SRDF/S, SRDF/A, or adaptive copy.
[Topology: R11 at Site A replicates over SRDF/S to R2 at Site B and in adaptive copy mode to R2 at Site C.]
Cascaded SRDF: Three-site disaster recovery and advanced multisite business continuity protection.
● Data on the primary site is synchronously mirrored to a secondary (R21) site. The data is then asynchronously mirrored from the secondary (R21) site to a tertiary (R2) site.
● The first “hop” is SRDF/S. The second hop is SRDF/A.
[Topology: R1 at Site A replicates over SRDF/S to R21 at Site B, which replicates over SRDF/A to R2 at Site C.]
SRDF/Star: Three-site data protection and disaster recovery with zero data loss recovery, business continuity protection, and disaster restart.
● Available in two configurations:
○ Cascaded SRDF/Star
○ Concurrent SRDF/Star
● Differential synchronization allows rapid reestablishment of mirroring among surviving sites in a multisite disaster recovery implementation.
● Implemented using SRDF consistency groups (CG) with SRDF/S and SRDF/A.
[Topology: in both variants, the production R11 devices are at Site A and the R2/R22 devices are at Site C. Cascaded SRDF/Star runs SRDF/S from Site A to R21 devices at Site B and SRDF/A from Site B to Site C, with inactive SRDF/A recovery links between Site A and Site C. Concurrent SRDF/Star runs SRDF/S from Site A to Site B and SRDF/A from Site A to Site C, with inactive SRDF/A recovery links between Site B and Site C.]
SRDF/Automated Replication (SRDF/AR)
● Combines SRDF and TimeFinder to optimize bandwidth requirements and provide a long-distance disaster restart solution.
● Operates in three-site solutions that use a combination of SRDF/S, SRDF/DM, and TimeFinder.
SRDF/AR is available only for arrays that run Enginuity 5876 since it uses STD (thick) devices. Arrays that run PowerMaxOS 10 (6079), PowerMaxOS 5978, and HYPERMAX OS 5977 have TDEV (thin) devices only.
[Topology: R1 at Site A replicates over SRDF/S to Site B, where TimeFinder and SRDF adaptive copy move the data on to R2 at Site C.]
Concurrent SRDF
Concurrent SRDF is a three-site disaster recovery configuration using an R11 device that replicates to two R2 devices. The two
R2 devices operate independently but concurrently using any combination of SRDF modes:
● Concurrent SRDF/S to both R2 devices if the R11 site is within synchronous distance of the two R2 sites
● Concurrent SRDF/A to sites at extended distances from the workload site
Either of the R2 devices can restore data to the R11 device. Similarly, an R2 device can restore both the R11 and the other R2
device.
Concurrent SRDF can also replace an existing R11 or R2 device with a new device. Migrate data from the existing device to a
new device using adaptive copy disk mode, and then replace the existing device with the newly populated device.
Concurrent SRDF topologies use Fibre Channel and Gigabit Ethernet.
This example shows:
● The R11 -> R2 in Site B in synchronous mode
● The R11 -> R2 in Site C in adaptive copy mode
[Diagram: the production host at Site A writes to R11, which replicates in synchronous mode to R2 at Site B and in adaptive copy mode to R2 at Site C.]
Figure 10. Concurrent SRDF topology
Cascaded SRDF
Cascaded SRDF provides a zero data loss solution at long distances if the primary site is lost.
In a cascaded SRDF configuration, data from a primary (R1) site is synchronously mirrored to a secondary (R21) site. Then the
data is asynchronously mirrored from the secondary (R21) site to a tertiary (R2) site.
Cascaded SRDF provides:
● Fast recovery times at the tertiary site
● Tight integration with the TimeFinder product family
● Geographically dispersed secondary and tertiary sites
If the primary site fails, cascaded SRDF can continue mirroring, with minimal user intervention, from the secondary site to the
tertiary site. This enables a faster recovery at the tertiary site.
Both the secondary and the tertiary site can be failover sites. Open systems solutions typically fail over to the tertiary site.
[Diagram: the host at Site A writes to R1, which is mirrored over SRDF/S, SRDF/A, or adaptive copy to R21 at Site B, and from there over SRDF/A or adaptive copy to R2 at Site C.]
SRDF/Star
SRDF/Star is a disaster recovery configuration that consists of three sites: primary (production), secondary, and tertiary. The
secondary site synchronously mirrors the data from the primary site, and the tertiary site asynchronously mirrors the production
data.
NOTE: In mainframe environments, GDDR is required to implement SRDF/Star. For more information, see SRDF/Star for
mainframe systems and the appropriate GDDR product guide that is listed in More information.
If an outage occurs at the primary site, SRDF/Star enables rapid transfer of operations to and re-establishment of remote
mirroring between the remaining sites. When conditions permit, the primary site can rejoin the solution, resuming the SRDF/Star
operations.
SRDF/Star operates in concurrent and cascaded configurations that address different recovery and availability objectives:
● Concurrent SRDF/Star—Data is mirrored from the primary site concurrently to two R2 devices. Both the secondary and
tertiary sites are potential recovery sites should the primary site fail. Differential resynchronization is used between the
secondary and the tertiary sites.
● Cascaded SRDF/Star—Data is mirrored first from the primary site to a secondary site, and then from the secondary to
a tertiary site. Both the secondary and tertiary sites are potential recovery sites. Differential resynchronization is used
between the primary and the tertiary site.
Differential synchronization between two remote sites:
● Allows SRDF/Star to rapidly reestablish cross-site mirroring should the primary site fail.
● Greatly reduces the time that is required to remotely mirror the selected production site.
If a rolling disaster affects the primary site, SRDF/Star helps in determining which remote site has the most current data. You
can select which site to operate from and which site’s data to use when recovering from the primary site failure.
If the primary site fails, SRDF/Star enables resumption of asynchronous protection between the secondary and tertiary sites,
with minimal data movement.
Concurrent SRDF/Star
In concurrent SRDF/Star, production data on R11 devices concurrently replicates to R2 devices in two remote arrays. In this
example:
● Site B is a secondary site using SRDF/S links from Site A.
● Site C is a tertiary site using SRDF/A links from Site A.
● The (normally inactive) recovery links are SRDF/A between Site C and Site B.
[Diagram: R11 at Site A replicates over active SRDF/S links to R2 at Site B and over active SRDF/A links to R2 at Site C; inactive SRDF/A recovery links connect Site B and Site C.]
Figure 12. Concurrent SRDF/Star
Concurrent SRDF/Star with R22 devices
SRDF supports concurrent SRDF/Star topologies using R22 devices. R22 devices have two SRDF mirrors, only one of which
is active on the SRDF links at a given time. R22 devices improve the resiliency of the SRDF/Star application, and reduce the
number of steps for failover procedures.
This example shows R22 devices at Site C.
[Diagram: R11 at Site A replicates over active SRDF/S links to R2 at Site B and over active SRDF/A links to R22 at Site C; inactive SRDF/A recovery links connect Site B and Site C.]
Figure 13. Concurrent SRDF/Star with R22 devices
Cascaded SRDF/Star
In cascaded SRDF/Star, production data on R1 devices at Site A is synchronously replicated to R21 devices at Site B. Then,
the data is asynchronously replicated to R2 devices at Site C. The synchronous secondary site (Site B) is always more current
than the asynchronous tertiary site (Site C). If the synchronous secondary site fails, the cascaded SRDF/Star solution can
incrementally establish an SRDF/A session between the primary site and the asynchronous tertiary site.
Cascaded SRDF/Star can determine when the current active R1 cycle (capture) contents reach the active R2 cycle (apply)
over the long-distance SRDF/A links. It minimizes the amount of data that must be moved between Site B and Site C to fully
synchronize them.
This example shows a basic cascaded SRDF/Star solution.
[Diagram: R1 at Site A replicates over active SRDF/S links to R21 at Site B, which replicates over active SRDF/A links to R2 at Site C; inactive SRDF/A recovery links connect Site A and Site C.]
Figure 14. Cascaded SRDF/Star
Cascaded SRDF/Star with R22 devices
Cascaded SRDF/Star can use R22 devices to pre-configure the SRDF pairs that are required to incrementally establish an
SRDF/A session between Site A and Site C in case Site B fails.
This example shows cascaded R22 devices in a cascaded SRDF solution.
[Diagram: R11 at Site A replicates over active SRDF/S links to R21 at Site B, which replicates over active SRDF/A links to R22 at Site C; inactive SRDF/A recovery links connect Site A and Site C.]
Figure 15. R22 devices in cascaded SRDF/Star
[Diagrams: SRDF/Star configurations for mainframe with GDDR. DC1 and DC2 each run GDDR and ConGroup; R11 at DC1 replicates over SRDF/S to R21 at DC2, and R22 devices at DC3 (which also runs GDDR) are reached over SRDF/A. The diagrams distinguish GDDR heartbeat communication, active and standby FICON channels, and active and standby SRDF links.]
Two-site SRDF/Star
In the two-site SRDF/Star configuration, there are three storage arrays as in any other SRDF/Star configuration, but there are
GDDR control systems at DC1 and DC3 only. This means that if there is a failure at DC1 (the primary site), operations can be
restarted at DC3 only.
SRDF/Star restrictions
● GNS Remote Mirroring is not supported with SRDF/Star configurations.
● Devices that are part of a RecoverPoint configuration cannot, at the same time, be part of an SRDF/Star configuration.
● The RDF groups that are part of an SRDF/Star CG cannot contain any devices that are not part of the SRDF/Star CG.
● Devices that are part of an SRDF/Star CG should not be controlled outside of symstar commands.
● Devices that are part of an SRDF/Metro configuration cannot at the same time be part of an SRDF/Star configuration.
● If any array in a SRDF/Star configuration is running PowerMaxOS 5978, Solutions Enabler 9.0 or later is required in order to
manage that configuration.
● If any array in a SRDF/Star configuration is running PowerMaxOS 10 (6079), Solutions Enabler 10.0 or later is required in
order to manage that configuration.
● Each SRDF/Star control host must be connected to only one site in the SRDF/Star triangle. An SRDF/Star control host is
where the symstar commands are issued.
● A minimum of one SRDF daemon must be running on at least one host attached locally to each site. This host must be
connected to only one site in the SRDF/Star triangle. The host can be the same as the SRDF/Star control host, but that is only
required when using symstar modifycg.
Dell Technologies strongly recommends running redundant SRDF daemons on multiple hosts to ensure that at least one
SRDF daemon is available to perform time-critical, consistency monitoring operations. Redundant SRDF daemons avoid
service interruptions caused by performance bottlenecks local to a host.
● SRDF/A recovery links are required.
● SRDF groups cannot be shared between separate SRDF/Star configurations.
● R22 devices are required in SRDF/Star environments that include VMAX 10K arrays.
● All SRDF/Star device pairs must be of the same geometry and size.
● All SRDF groups including inactive ones must be defined and operational prior to entering SRDF/Star mode.
● It is strongly recommended that all SRDF devices be locally protected and that each SRDF device is configured with
TimeFinder to provide local replicas at each site.
● Composite groups consisting of device groups are not supported.
● Devices enabled as part of consistency groups cannot at the same time be part of an SRDF/Star configuration.
● Devices cannot be Snap devices.
● Snap device management must be configured separately.
NOTE:
Dell Technologies strongly recommends that you have Snap device management available at both the synchronous and
asynchronous target sites.
Four-site configurations
Four-site configurations provide extra data protection. There are configurations for both FBA and mainframe environments.
Four-site configurations
The four-site SRDF solution for open systems hosts and mainframe environments:
● Replicates data by using both concurrent and cascaded SRDF topologies.
● Is a multi-region disaster recovery solution with higher availability, improved protection, and less downtime than concurrent
or cascaded SRDF solutions.
● Offers multi-region high availability by combining the benefits of concurrent and cascaded SRDF solutions.
If two sites fail because of a regional disaster, a copy of the data is available, and there is protection between the remaining two
sites. An existing two-site or three-site SRDF topology is the basis for a four-site topology. Four-site SRDF can also be used for
data migration.
Example of the four-site SRDF solution:
[Diagram: R11 at Site A replicates over SRDF/A to R2 at Site B and over SRDF/S to R21 at Site C, which replicates in adaptive copy mode to R2 at Site D.]
[Diagram: SRDF/SQAR four-site mainframe configuration. In the primary region (region 1), R11 DASD at DC1 (Site A, the primary site) replicates over SRDF/S to R21 DASD at DC2 (Site B, the secondary site). In the secondary region (region 2), R21 DASD at DC3 replicates over SRDF/S to R22 DASD at DC4. GDDR runs at all four sites, and AutoSwap is configured between DC1 and DC2 and between DC3 and DC4.]
The diagram shows four Dell GDDR control systems with their independent heartbeat communication paths, separate from the
production disk and computer facilities. Each of the managed z/OS systems has Dell Autoswap and Dell Consistency Groups
(ConGroup) installed.
Each GDDR SRDF/SQAR environment manages two consistency groups (one active, one defined) and two Multi-Session
Consistency (MSC) groups (both active). A consistency group is a named group of source (R1) volumes that are managed
by the ConGroup application as a unit. An MSC group is a named group consisting of multiple SRDF groups operating in SRDF/A
mode, managed by the Dell MSC control software feature as a single unit. The relationship between Site A (DC1) and Site B
(DC2) is maintained through SRDF/S replication of primary disk images at DC1 to DC2, while SRDF/A replication maintains
out-of-region mirrored data at Site C (DC3) and Site D (DC4).
SRDF recovery scenarios
This section describes recovery scenarios in 2-site SRDF configurations.
Planned failover
[Diagram: before the personality swap, applications are stopped, the SRDF links are suspended, and R1 at Site A is paired with R2 at Site B.]
Figure 20. Planned failover: before personality swap
[Diagram: after the personality swap, applications run against the new R1 device at Site B, and the SRDF links carry data back to the new R2 device at Site A.]
Figure 21. Planned failover: after personality swap
When the maintenance, upgrades or testing procedures are completed, a similar procedure returns production to Site A.
Unplanned failover
An unplanned failover moves production applications from the primary site to the secondary site after an unanticipated outage
at the primary site.
Failover to the secondary site in a simple configuration can be performed in minutes. You can resume production processing
when the applications are restarted on the failover host that is connected to Site B.
Unlike the planned failover operation, an unplanned failover resumes production at the secondary site, but without remote
mirroring until Site A becomes operational and ready for a failback operation.
This diagram shows failover to the secondary site after the primary site fails.
[Diagram: the SRDF links are suspended and the R1 devices at the primary site are unavailable; after failover, the remote failover host has Read/Write access to the R2 devices, which were previously Not Ready or Read Only.]
Permanent link loss (SRDF/A)
If all SRDF links are lost for more time than link limbo or Transmit Idle can manage:
● All the devices in the SRDF group are set to a Not Ready state.
● All data in capture and transmit delta sets is changed from write pending for the R1 SRDF mirror to invalid for the R1 SRDF
mirror and is therefore owed to the R2 device.
● Any new write I/Os to the R1 device are also marked invalid for the R1 SRDF mirror. These tracks are owed to the secondary
array once the links are restored.
When the links are restored, normal SRDF recovery procedures are followed:
● Metadata representing the data that is owed is compared and merged based on normal host recovery procedures.
● Data is resynchronized by sending the owed tracks as part of the SRDF/A cycles.
Data on nonconsistency exempt devices on the secondary array is always dependent-write consistent in SRDF/A active/
consistent state, even when all SRDF links fail. Starting a resynchronization process compromises the dependent-write
consistency until the resynchronization is fully complete and two cycle switches have occurred.
For this reason, it is important to use TimeFinder to create a gold copy of the dependent-write consistent image on the
secondary array.
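The recovery described above amounts to merging "owed track" metadata from both sides and then resending those tracks. The Python below is a conceptual sketch with invented names; it also hints at why the gold copy matters, because the R2 image is not dependent-write consistent again until the owed tracks have been sent and cycling has resumed.

```python
# Conceptual sketch of SRDF/A recovery after a permanent link loss (names invented).

def on_link_loss(capture: set[int], transmit: set[int], new_writes: set[int]) -> set[int]:
    """Data not yet applied at R2 becomes 'invalid' (owed) tracks on the R1 side."""
    return capture | transmit | new_writes

def on_link_restore(owed_r1: set[int], owed_r2: set[int]) -> set[int]:
    """Merge the metadata from both sides to get the full set to resynchronize."""
    return owed_r1 | owed_r2

owed = on_link_loss(capture={10, 11}, transmit={42}, new_writes={99})
to_resync = on_link_restore(owed, owed_r2=set())
print(sorted(to_resync))   # [10, 11, 42, 99] -- sent as part of the SRDF/A cycles;
                           # take a TimeFinder gold copy of R2 before starting.
```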
Replication of VMware vVols
A vVol is a VMware object that, at a basic level, equates to a disk associated with a VMware virtual machine (VM). The VMware
administrator creates and manages vVols through the VMware vSphere product.
This section contains an overview of the architecture of vVols and their implementation. It then shows how vVols can be
replicated for disaster recovery purposes using SRDF/A.
vVol architecture
This diagram shows the architecture of the vVol implementation in a PowerMax or VMAX All Flash environment.
[Diagram: virtual machines with Gold, Silver, and Diamond service levels run in vSphere on an ESXi host. The control path runs from vSphere to the VASA Provider; the data path runs through a protocol endpoint (PE) to Storage Resources within a Storage Container on the storage array.]
The diagram shows the VASA Provider separate from the storage array. PowerMaxOS 10 (6079) and PowerMaxOS
5978.669.669, and later, include an embedded VASA Provider that runs as a guest management application on the PowerMax
array.
Components
The major components in the implementation of vVols are:
Virtual Machine (VM): A virtual machine on an ESXi host managed through the VMware product.
vVol: A virtual disk associated with a specific VM.
Storage Container: A pool of storage on an array set aside for use by vVols.
Storage Resource: A subdivision of a storage container that has a specified capacity and has attributes like Service Level, Workload, and data reduction. There are one or more Storage Resources in each Storage Container. The vSphere administrator creates vVols in the Storage Resources according to the capabilities that the VM requires.
Protocol endpoint: A thin device that acts as a data path between the vVols on a storage array and their virtual machines on the ESXi host.
VASA Provider: A software component that implements vSphere APIs for Storage Awareness (VASA). The vSphere administrator manages the resources on the array that are allocated to the VMs through the VASA Provider.
Management responsibilities
Table 8. Management responsibilities in a vVol environment
Component Manager
Virtual Machine vSphere administrator
vVol vSphere administrator
Storage Container Storage administrator
Storage Resource Storage administrator
Protocol Endpoint Storage administrator
[Figure: vVol replication. A protection PowerMax array replicates to a recovery PowerMax array; VMware Site Recovery Manager (SRM) orchestrates failover and failback between the two environments.]
As the diagram shows, there is one extra component in this environment: VMware Site Recovery Manager (or SRM). SRM is a
disaster recovery product that orchestrates the failover and failback operations between the production environment and the
disaster recovery environment. The vSphere administrator uses SRM to manage the failover and failback of vVols. As with other SRDF configurations, failover and the subsequent failback can be both planned and unplanned.
Establish the replication configuration
Both the storage administrator and the vSphere administrator are involved in establishing a replication configuration.
The storage administrator:
1. Creates the Storage Containers and Storage Resources on the production environment.
2. Provisions Protocol Endpoints for each ESXi host that uses the array.
3. Creates Storage Containers on the disaster recovery environment.
4. Creates Replication Groups that contain Storage Containers on both the production and disaster recovery environments.
This step includes selecting SRDF/A as the communications protocol.
Once the storage administrator has set up the replication configuration at the two sites, the vSphere administrator:
1. Registers the VASA Provider.
2. Creates vVols in the Storage Resources on the production environment.
3. Sets up Storage Resource Policies. These policies define which vVols are replicated to the disaster recovery environment.
The policies also define whether point-in-time copies of vVols are made and how frequently the copies are taken.
4. When necessary, uses SRM to carry out failover and failback operations between the production and disaster recovery sites.
The vSphere administrator defines the replication services that are necessary. The storage array provides those services.
Requirements
Remote replication of a vVol environment requires specific versions of several software components on both the storage array
and in the ESXi environment.
Storage array:
● PowerMaxOS 10 (6079) or PowerMaxOS 5978.669.669, and later
● Solutions Enabler Version 9.2, and later
● Unisphere for PowerMax Version 9.2 and later
ESXi environment:
● vSphere Version 7.0 and later
● SRM Version 8.3
Management facilities
Solutions Enabler and Unisphere for PowerMax have facilities to:
● Create, manage, and remove Storage Containers, Storage Resources, and Protocol Endpoints.
● Register with a VASA Provider.
● Create, manage, and remove VASA Replication Groups.
3
High availability
This chapter provides more detail on the high availability configurations that SRDF/Metro provides for open systems (FBA) and
IBM i D910 application hosts.
Topics:
• SRDF/Metro
• SRDF/Metro life cycle
• SRDF/Metro configuration changes
• SRDF/Metro resilience
• Mobility ID with ALUA
• Disaster recovery facilities
• Deactivate SRDF/Metro
• SRDF/Metro restrictions
SRDF/Metro
In traditional SRDF, R1 devices are Read/Write accessible. R2 devices are Read Only or Write Disabled. However, in the
high-availability SRDF/Metro configuration:
● R2 devices are Read/Write accessible to application hosts.
● Application hosts can write to both the R1 and R2 side of the device pair.
● R2 devices assume the same external device identity (geometry, device WWN) as the R1 devices. This shared identity means
that the R1 and R2 devices appear to application hosts as a single, virtual device across the two arrays.
SRDF/Metro can be deployed either in a single, multipathed host environment or in a clustered host environment.
[Figure: SRDF/Metro deployment options. In the multipath configuration, a single host has Read/Write access to both sides of the device pair; in the cluster configuration, each cluster node has Read/Write access to the array that it is attached to.]
Hosts can read and write to both the R1 and R2 devices in a SRDF/Metro configuration:
● In a single host configuration, a single host issues I/O operations. Multipathing software directs parallel reads and writes to
each array.
● In a clustered host configuration, multiple hosts issue I/O operations to both sides of the SRDF device pair. Each cluster
node has dedicated access to an individual storage array.
● In both single host and clustered configurations, writes to the R1 or R2 devices are synchronously copied to the paired
device. SRDF/Metro software resolves write conflicts to maintain consistent images on the SRDF device pairs. The R1
device and its paired R2 device appear to the host as a single virtualized device.
Other characteristics of SRDF/Metro are:
● SRDF/Metro is managed using either Solutions Enabler or Unisphere.
● SRDF/Metro requires a license on both arrays.
● Storage arrays can simultaneously contain SRDF groups for SRDF/Metro operations and traditional SRDF groups.
● The arrays can be up to 200 km (125 miles) apart.
● The arrays are typically in separate fault domains for extra resilience.
Device management
All device pairs in a SRDF/Metro group are managed together for all supported operations, except when changing the contents
of the group:
● Create pair and move pair operations can add devices to the SRDF/Metro group.
● Delete pair and move pair operations can remove devices from the SRDF/Metro group.
Failure resilience
While providing high data availability, SRDF/Metro ensures that the data is consistent on both sides of the configuration. If data
consistency can no longer be guaranteed due to, for example, the failure of a device or the SRDF link, SRDF/Metro:
● Allows one side to remain accessible to the application host (the winner). This side continues to service I/O requests from the application host.
● Makes the other side inaccessible to the application host (the loser).
Making only one side accessible to the application host eliminates the possibility of a split brain situation occurring. A split brain
scenario could result in inconsistent data accumulating on both sides of the SRDF/Metro group.
SRDF/Metro provides two mechanisms for determining the winner:
● Witness option: A Witness is a third party that mediates between the two sides to help decide which remains available to
the application host if there is a failure. The Witness method provides intelligent selection of the winner at the time of failure
to facilitate continued host availability. The Witness option is the default and there are two types of Witness:
○ Array Witness: The operating environment on a third array acts as the mediator to decide the side of the device pair
that remains R/W accessible to the application host. It gives priority to the winner, but should that side be unavailable
the other side remains available.
○ Virtual Witness (vWitness) option: vWitness provides the same functionality as the Array Witness option, but it is
packaged to run in a virtual appliance or as a daemon on a Linux system, rather than on an array. The vWitness runs as a
Linux daemon when either or both arrays in the SRDF/Metro configuration run PowerMaxOS 10 (6079).
The vWitness and Array Witness options are treated the same in the operating environment, and can be deployed
independently or simultaneously. When deployed simultaneously, SRDF/Metro favors the Array Witness option over the
vWitness option, as the Array Witness option has better availability.
● Device Bias option: Device pairs for SRDF/Metro have a bias attribute. SRDF/Metro designates the R1 side of a device
pair at the time that the devices become Ready on the SRDF link as the winner. If the device pair becomes Not Ready
(NR) on the link, the R1 (bias side) remains accessible to the application host. The R2 (nonbias side) is inaccessible to the
application host.
The Witness option has key advantages over the Device Bias option:
● The Witness option provides intelligent selection of the winner and loser at the time of a failure. The Device Bias option decides on the winner and loser before any failure.
● As a result, the Witness option can designate as the winner the side that appears most capable of providing continued data
availability to the host. The Device Bias option can only designate the bias (R1) side as the winner. Device Bias cannot make
the R2 side available to the application host. If the R1 side is unavailable, the application host loses all data availability that
the devices provide.
Wherever possible, do not use Device Bias in a production environment.
SRDF/Metro resilience has more information about these failure-recovery mechanisms.
Disaster recovery
SRDF/Metro provides high availability while traditional SRDF provides disaster recovery. The two technologies can exist together to provide disaster recovery facilities for an SRDF/Metro configuration. Disaster recovery facilities has more information about using disaster recovery in an SRDF/Metro configuration.
Operating environment
Arrays in a SRDF/Metro configuration run HYPERMAX OS 5977.691.684, or later, any version of PowerMaxOS 5978, or any
version of PowerMaxOS 10 (6079). SRDF and NDM Interfamily Connectivity Information defines the combinations of operating
environments that an SRDF/Metro configuration can contain.
Some features of SRDF/Metro require later versions of HYPERMAX OS or PowerMaxOS. Where this applies, the following
sections identify the versions that are required.
SRDF/Metro life cycle
The following demonstrates the life cycle of an SRDF/Metro configuration in relation to the life cycle of an application:
1. Initial provisioning and configuration setup. Allocate devices for the application on two arrays, turn those devices into an
SRDF/Metro configuration, and activate the configuration.
2. Adding and removing devices. As the storage needs of the application change, add devices to and remove them from the
SRDF/Metro configuration.
3. Removing the SRDF/Metro configuration. When the application reaches the end of its life, the SRDF/Metro configuration
becomes obsolete and is dismantled.
NOTE: When using one of the Witness options, SRDF/Metro may change the winning and losing sides at any time, based
on the relative robustness of the two sides. SRDF/Metro always reports the winning side as the R1 device and the losing
side as the R2 device. So each switch causes an apparent swap of the R1 and R2 personalities in a device pair.
The application host now discovers the R2 devices through their federated personalities. Once they are discovered, the configuration provides high data availability:
● The application host can send write operations to either side. When SRDF/Metro receives a write operation it sends the I/O
to the other array in the configuration. SRDF/Metro confirms the I/O to the application host only when that other array
acknowledges receipt of the Write operation.
● Reading from either side always returns the same data since both sides have identical content.
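As a rough illustration of this synchronous write path, the following Python sketch (with hypothetical class and method names, not SRDF/Metro internals) acknowledges a host write only after the partner array has committed it:

```python
# Simplified model of an SRDF/Metro write: the receiving array forwards the I/O
# to its partner and acknowledges the host only after the partner acknowledges.
class MetroArray:
    def __init__(self, name):
        self.name = name
        self.partner = None
        self.data = {}

    def host_write(self, lba, payload):
        self.data[lba] = payload                # local commit
        self.partner.replicated_write(lba, payload)
        return "ack-to-host"                    # returned only after the partner acknowledged

    def replicated_write(self, lba, payload):
        self.data[lba] = payload                # remote commit, then implicit acknowledgement


side_a, side_b = MetroArray("R1"), MetroArray("R2")
side_a.partner, side_b.partner = side_b, side_a

# Hosts may write to either side; reads from either side return the same data.
side_a.host_write(0, b"alpha")
side_b.host_write(1, b"beta")
assert side_a.data == side_b.data
```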
SRDF/Metro configuration changes
During its lifetime, the content of an SRDF/Metro group can change by adding and removing devices.
Add devices
There are two ways of adding devices to an SRDF/Metro group:
● Create a pair of devices using devices that are not in an SRDF relationship.
● Move an existing SRDF pair into the group.
Create pair
When creating a pair from two existing devices, you can specify which device retains its data. As part of the pair creation
process, SRDF/Metro copies the data from that device to the other, replacing any data that it contained. The device that
retains its data becomes the R1 device in the pair.
When adding devices that contain no data, SRDF/Metro can format the devices as it adds them to the group as a device pair. Formatting removes any data on the devices, and often completes faster than copying data between the device pair. To be able to format devices, they:
● Must be unmapped or user-not-ready so that no I/O is occurring
● Cannot be part of another replication session
Move pair
When moving an existing pair into the group, the R1 and R2 sides do not have to align with the devices already in the group. If necessary, SRDF/Metro switches the R1 and R2 sides when making the new pair Ready on the SRDF link.
The criteria that the devices must fulfill are:
● The devices must be in an SRDF session between the same two arrays that SRDF/Metro uses.
● Each device pair operates in synchronous or adaptive copy mode.
● The devices can be Ready or Not Ready on the SRDF link.
● All device pairs to move into the SRDF/Metro group have their R1 and R2 sides aligned.
The alignment does not have to match the alignment in the SRDF/Metro group. If the alignment is different, the R2 sides
cannot owe data to the R1 side.
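A hedged Python sketch of the kind of eligibility check these criteria imply follows; the field names and return values are illustrative, not Solutions Enabler output:

```python
# Hypothetical eligibility check for moving existing SRDF pairs into an SRDF/Metro group.
def can_move_pairs(pairs, metro_arrays, group_r1_array):
    if any({p["r1_array"], p["r2_array"]} != set(metro_arrays) for p in pairs):
        return False, "pairs must be between the same two arrays that SRDF/Metro uses"
    if any(p["mode"] not in ("synchronous", "adaptive_copy") for p in pairs):
        return False, "each pair must operate in synchronous or adaptive copy mode"
    if len({p["r1_array"] for p in pairs}) != 1:
        return False, "all moved pairs must have their R1 and R2 sides aligned"
    if pairs[0]["r1_array"] != group_r1_array and any(p["r2_owes_data"] for p in pairs):
        return False, "when alignment differs from the group, R2 cannot owe data to R1"
    return True, "eligible"


ok, reason = can_move_pairs(
    [{"r1_array": "A", "r2_array": "B", "mode": "synchronous", "r2_owes_data": False}],
    metro_arrays=("A", "B"),
    group_r1_array="B",
)
print(ok, reason)  # True eligible
```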
Process
The process of adding a device pair to a SRDF/Metro group is:
1. The R1 side of each device pair is made accessible to the application host. The R2 side is inaccessible to the host. SRDF/
Metro uses a slightly different approach when it formats a device pair. In this case, both sides are inaccessible to the
application host until the formatting is complete. Only then does the R1 side become accessible to the application host.
2. Once the devices are Ready on the SRDF link, the device pair enters the SyncInProg state while the two sides synchronize
their data. That is, SRDF/Metro aligns the data on the R2 device to match that on the R1 device. So when moving an
existing pair into a SRDF/Metro group, the R2 side cannot owe data to the R1 side.
3. If necessary, SRDF/Metro switches the R1 and R2 sides so that they align with the device pairs already in the group, when
synchronization is complete.
4. SRDF/Metro copies the personality attributes, such as geometry and identity, from the R1 side to the R2 side.
5. Once both devices have the same identity, SRDF/Metro:
● Makes the R2 device accessible to the application host
● Sets the SRDF pair state of the added devices to ActiveActive or ActiveBias to match the state of all other device pairs
in the SRDF/Metro group
The application host now discovers the newly added R2 devices, which have the identity of their R1 partners. High data availability is now in force.
Remove devices
There are two ways of removing devices from an SRDF/Metro group:
● Delete—remove the SRDF relationship between a pair of devices and move the two devices out of the SRDF/Metro group.
● Move—move a device pair from the SRDF/Metro group to another SRDF group.
In either case, the device that is the R1 at the time the pair is removed remains accessible to the application host. However, you can control which device in the pair remains accessible, so the R2 side can remain accessible instead if you prefer. Making the R2 side accessible switches the R1 and R2 sides; the side that was R2 becomes R1. Specifying the side that remains available to the application host requires that the devices are Ready on the SRDF link.
Delete pair
When deleting a pair:
● The SRDF pairing between the two devices is removed. They are no longer a device pair but rather two non-SRDF devices.
The device that was the R1 device remains accessible to the application host as a regular device.
● It is possible to delete the last device in an SRDF/Metro group when that device is Not Ready on the SRDF link.
● Making the R2 device accessible to the application host requires that the device pair was in the ActiveActive or ActiveBias state when it was deleted.
● After being removed from the SRDF/Metro group, the devices retain the identities that they last used in that group. To reuse devices that are inaccessible to the application host, reset the device identities manually.
Move pair
The move pair operation moves a device pair from the SRDF/Metro group to another SRDF group:
● The device pairing is retained allowing replication to continue within the destination SRDF group.
● The destination group must be Synchronous. It cannot be active (in another SRDF/Metro group) or Asynchronous.
● The move pair operation must leave at least one device in the SRDF group.
● Making the R2 device accessible to the application host requires that the device pair was in the ActiveActive or ActiveBias state when it was moved.
SRDF/Metro resilience
In normal operation, PowerMaxOS and HYPERMAX OS use the SRDF links to maintain consistency of the data on both sides of
a SRDF/Metro pair. That consistency of data cannot be ensured when hosts have write access to both sides and either of the
following occurs:
● The device pair becomes Not Ready (NR) on the SRDF link.
● One of the devices in the pair fails.
In either situation, SRDF/Metro selects one device in the pair to remain accessible to the host (the winner) and makes the other device inaccessible (the loser). This decision applies to all device pairs in the SRDF/Metro configuration.
SRDF/Metro provides two ways of selecting the winner and loser:
● Witness
● Device bias
The following sections explain these methods and provide background information about how SRDF/Metro ensures that no data
is lost due to the failure.
NOTE: The selected method comes into effect only when a failure occurs and the device pairs are ActiveActive or ActiveBias. Before transitioning to one of those states, only one side of a pair is accessible to the host. So there is no need for SRDF/Metro to take any special action.
Witness
The Witness method uses a third party (a witness) to enhance the response of an SRDF/Metro configuration to a failure. The
witness intelligently selects the winner and loser sides at the time the failure occurs to provide continued host availability. The
primary, and recommended, form of resilience is Witness.
The pair state of a device pair that uses a Witness is ActiveActive when the pair can provide high data availability. Once the
devices in a configuration are ready to move to the ActiveActive state, the arrays on the R1 and R2 sides:
● Negotiate which Witness they are to use.
● Periodically renegotiate the winning and losing sides, based on the relative robustness of the two sides in the configuration.
This capability is available only on arrays that run PowerMaxOS.
Witness negotiation and selection describes how the arrays initially select a Witness and how they regularly renegotiate the
winning and losing sides.
If a failure occurs, the arrays on both sides of the SRDF/Metro group attempt to contact the Witness. The purpose of the
Witness is to direct one array to act as the winner and the other side as the loser. The winner continues to be accessible to the
host.
In determining the array that remains accessible to the host, the Witness gives preference to the winner chosen during the last
negotiation between the two arrays. However, should that array not communicate with the Witness, the Witness turns what
was the loser into the winner, and that array remains host-accessible. This mechanism helps to continue to provide the host
with data accessibility should the winner fail.
The following sections describe:
● The two forms of witness:
○ Array Witness
○ Virtual Witness (vWitness)
● The process used to select a witness
● Witness failure scenarios
Array Witness
When using the Array Witness method, SRDF/Metro uses a third "witness" array to determine the winning side. The witness
array runs one of these operating environments:
● PowerMaxOS 6079.xxx.xxx or later
● PowerMaxOS 5978.144.144 or later
● HYPERMAX OS 5977.945.890 or later
● HYPERMAX OS 5977.810.784 with e-Pack containing fixes to support SRDF N-X connectivity.
● Enginuity 5876 with e-Pack containing fixes to support SRDF N-X connectivity.
The SRDF and NDM Interfamily Connectivity Information defines the operating environments that are valid for each SRDF/
Metro configuration.
The Array Witness must have SRDF connectivity to both the R1-side array and R2-side array. SRDF remote adapters (RAs) are
required on the witness array with applicable network connectivity to both the R1 and R2 arrays.
For redundancy, there can be multiple witness arrays but only one witness array is used by an individual SRDF/Metro group at
any single time. The two sides of the SRDF/Metro group agree on the witness array to use when the devices in the group are
ready to transition to the ActiveActive state. If the auto configuration process fails and no other applicable witness arrays are
available, SRDF/Metro uses the Device Bias method.
The Array Witness method requires two SRDF groups: one between the R1 array and the witness array, and a second between
the R2 array and the witness array. A witness group:
● Is an SRDF group that enables an array to act as a Witness for any or all SRDF/Metro sessions that are connected to the
array at the other side of the witness group. The group has no other purpose.
○ There can be only one witness group between any pair of arrays.
○ A witness group does not contain any devices.
○ It is not possible to remove a witness group that is protecting a SRDF/Metro configuration unless there is another
Witness that the configuration can use.
● Must be online when required to act as a Witness. Both witness groups must be online when the device pairs in the SRDF/
Metro configuration become Ready on the SRDF link.
NOTE: The term witness array is relative to a single SRDF/Metro configuration. While the array acts as a Witness for that
configuration, it may also contain other SRDF groups, including SRDF/Metro groups.
[Figure: the R1 array and R2 array are connected by SRDF links, and each is connected to the witness array by its own witness group.]
Figure 26. SRDF/Metro Array Witness and groups
SRDF/Metro management software checks that the Witness groups exist and are online when carrying out establish or restore
operations. SRDF/Metro determines which witness array an SRDF/Metro group is using, so it is not necessary to specify the
Witness. Indeed, there is no means of specifying the Witness.
When the Array Witness method is in operation, the state of the device pairs is ActiveActive.
If the witness array becomes inaccessible from both the R1 and R2 arrays, the state of the device pairs becomes ActiveBias.
Virtual Witness (vWitness)
The management guests on the R1 and R2 arrays maintain multiple IP connections to redundant vWitness instances. These connections use TLS/SSL for secure connectivity.
After creating IP connectivity to the arrays, the storage administrator can use SRDF management software to:
● Add a vWitness to the configuration. This action does not affect any existing vWitnesses. Once the vWitness is added, it is
available to participate in the vWitness infrastructure.
● Query the state of a vWitness configuration.
● Suspend a vWitness. If the vWitness is servicing an SRDF/Metro session, this operation requires a force flag. The SRDF/
Metro session is in an unprotected state until it renegotiates with another witness, if available.
● Remove a vWitness from the configuration. Once the vWitness is removed, SRDF/Metro breaks the connection with
vWitness. The administrator can only remove vWitnesses that are not servicing active SRDF/Metro sessions. Also, the
administrator cannot remove a vWitness when there is no alternative Witness (array or vWitness) for the SRDF/Metro
configuration to use.
Witness negotiation
Each side of the SRDF/Metro configuration maintains a list of witnesses that the administrator sets up. To begin the negotiation
process, the nonbias side sends its list of witnesses to the bias side. On receiving the list, the bias side compares it with its own
list of witnesses. The first matching witness definition is selected as the witness, and the bias side sends the identity of the selected witness back to the nonbias side. The two sides then establish communication with the selected witness.
Before allowing the devices in a pair to become Ready on the SRDF link, SRDF/Metro checks that the arrays have at least one
witness definition in common. If there is no common witness, SRDF/Metro does not allow the devices to become Ready on the
link. In this situation, the administrator reconfigures either or both arrays so that they have at least one witness definition in
common.
NOTE: If the arrays cannot agree on a witness, the session operates in Device Bias mode. However, should a common
witness become available, the two sides negotiate the use of that witness. The sides then continue in witness mode.
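The following Python sketch illustrates the selection step; whether "first matching" refers to the bias side's ordering or the received list's ordering is an interpretation, and the witness names are hypothetical:

```python
# Simplified witness selection: the nonbias side sends its witness list to the bias
# side, which picks the first definition that also appears in its own list.
def select_witness(bias_side_witnesses, nonbias_side_witnesses):
    received = set(nonbias_side_witnesses)
    for witness in bias_side_witnesses:      # walk the bias side's own ordered list
        if witness in received:
            return witness                   # the selected identity goes back to the nonbias side
    return None                              # no common witness: fall back to Device Bias mode


chosen = select_witness(["vwitness-2", "array-w1"], ["array-w1", "vwitness-2"])
print(chosen or "operate in Device Bias mode")   # vwitness-2
```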
Witness connection failure
While using a witness, an error could cause either or both sides of a SRDF/Metro session to lose contact with that witness.
Witness failure scenarios has more on how SRDF/Metro reacts to various failure scenarios.
[Figure: in these scenarios, S1 is the R1 side of the device pair, S2 is the R2 side, W is the Witness array or vWitness, and X marks a failure or outage. Depending on which SRDF link or witness connection fails, either both S1 and S2 remain accessible to the host (with S2 winning future failures or the pair moving to bias mode), or the surviving side remains accessible to the host while the affected array calls home.]
Figure 28. SRDF/Metro Witness single failure scenarios
[Figure: SRDF/Metro Witness scenarios involving multiple concurrent failures of the SRDF links, the witness connections, or the arrays.]
Device Bias
When the devices in the SRDF/Metro group become Ready on the SRDF link, PowerMaxOS and HYPERMAX OS:
● Mark the array that contains the R1 devices as the bias side and the other array as the nonbias side.
● Designate each device on the bias side as the winner.
● Designate each device on the nonbias side as the loser.
If a device pair becomes Not Ready on the SRDF link, or either device in a pair fails, PowerMaxOS and HYPERMAX OS:
● Let all devices on the winner side remain accessible to the application host.
● Make all devices on the loser side inaccessible to the application host.
A pair that uses device bias has the ActiveBias state when the pair can provide high data availability.
When a pair is in the ActiveBias state, it is possible to change the bias to the other array in the configuration. Changing the bias
also changes the R1 and R2 designations. That is, devices that were previously R1 devices are now R2 devices and R2 devices
are now R1 devices. The winner and loser designations also change.
Note that if there is a failure on the R1 side, the application host loses all connectivity to the devices in the SRDF/Metro group.
The Device Bias method cannot make the R2 devices available to the application host.
Preventing data loss
Should a failure occur, SRDF/Metro must ensure that there is no loss of data because of the failure. Also, SRDF/Metro decides which side remains host-accessible (the winner). Having decided on the winner, the steps that SRDF/Metro takes to
prevent data loss are:
1. Briefly stall all I/O operations to all devices on both sides of the SRDF/Metro group.
2. Hold I/O operations to both sides of all pairs in the SRDF/Metro group without processing them.
3. Set all devices on both sides of the SRDF/Metro group Not Ready on the SRDF link.
4. Make all devices on the loser side of the group inaccessible to the application host.
Devices on the winner side remain accessible to the application host.
5. Clear the Stalled state.
NOTE: The Stalled state is transient and typically transparent to applications. So, it does not appear as a discrete state,
but is incorporated in the Suspended and Partitioned states.
6. Reject all the I/O operations that were held in step 2 with a retry sense code.
The use of the retry sense code tells the host to send these I/O operations again.
Subsequent I/O operations are accepted by devices on the winner's side, and rejected by devices on the loser's side. This
applies to all I/O operations, whether they are new or ones that are retried.
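A simplified Python walk-through of these six steps follows; the device and I/O handling are illustrative, and "retry" stands in for the SCSI retry sense code:

```python
# Sequence model of SRDF/Metro failure handling, following the numbered steps above.
def handle_metro_failure(group, winner_side):
    group["state"] = "Stalled"                      # 1. briefly stall all I/O on both sides
    held = group["in_flight_io"]                    # 2. hold I/O without processing it
    group["in_flight_io"] = []
    group["srdf_link"] = "Not Ready"                # 3. all devices Not Ready on the SRDF link
    for side in group["sides"]:                     # 4. loser inaccessible, winner stays accessible
        group["sides"][side] = "accessible" if side == winner_side else "inaccessible"
    group["state"] = "Suspended"                    # 5. clear the transient Stalled state
    return [("retry", io) for io in held]           # 6. reject held I/Os with a retry sense code


group = {
    "state": "ActiveActive",
    "srdf_link": "Ready",
    "sides": {"R1": "accessible", "R2": "accessible"},
    "in_flight_io": ["write lba 7", "write lba 9"],
}
print(handle_metro_failure(group, winner_side="R1"))
# [('retry', 'write lba 7'), ('retry', 'write lba 9')] -- the host resends these writes
```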
When the failure is resolved, SRDF/Metro:
1. Sets the devices in the SRDF/Metro group to Ready on the SRDF link.
2. Re-synchronizes each device pair in the SRDF/Metro group.
3. Sets each pair state to ActiveActive or ActiveBias once the pair can provide high data availability.
[Figure: SRDF/Metro Smart DR. Array A (R11) and Array B (R21) form the SRDF/Metro pair. Each has an SRDF/A or Adaptive Copy Disk connection to Array C (R22); the link from Array A is active and the link from Array B is inactive.]
Notice that the device names differ from a standard SRDF/Metro configuration. This difference reflects the change in the
device functions when SRDF/Metro Smart DR is in operation. For instance, the R1 side of the SRDF/Metro configuration on Array A now has the name R11, because it is the R1 device to both the:
● R21 device on Array B in the SRDF/Metro configuration
● R22 device on Array C in the SRDF/Metro Smart DR configuration
Arrays A and B both have SRDF/Asynchronous or Adaptive Copy Disk connections to the DR array (Array C). However, only
one of those connections is active at a time (in this example the connection between Array A and Array C). The two SRDF/A
connections are known as the active and standby connections.
If a problem prevents Array A replicating data to Array C, the standby link between Array B and Array C becomes active and
replication continues. Array A and Array B track the data that is replicated to Array C to enable replication and avoid data loss.
Resilience
The SRDF/Metro pair must use a Witness resilience mechanism. If the pair uses an Array Witness, the version of the operating environment on the witness array must be compatible with that running on each SRDF/Metro array. The SRDF and NDM Interfamily Connectivity Information defines the versions of the operating environment that are compatible.
Operating environment
The three arrays in an SRDF/Metro Smart DR configuration run PowerMaxOS 10 (6079) or PowerMaxOS 5978.669.669 or later. The SRDF and NDM Interfamily Connectivity Information defines the valid combinations of operating environment.
Management facilities
To set up and manage an SRDF/Metro Smart DR configuration requires either or both of:
● Unisphere for PowerMax version 9.2 or later
● Solutions Enabler version 9.2 or later
The facilities for managing SRDF/Metro Smart DR configurations are in three categories:
● The SRDF/Metro Smart DR environment
● The SRDF/Metro session
● The DR session
Table 10. Management facilities
Facility Description
SRDF/Metro Smart DR environment
Establish Create a SRDF/Metro Smart DR environment from a concurrent SRDF environment. In the
concurrent environment, one leg is the SRDF/Metro session and the other is an SRDF/A or
Adaptive Copy Disk DR session.
Remove Remove a SRDF/Metro Smart DR environment by removing one of the DR connections.
List Display the high-level status of all SRDF/Metro Smart DR environments on a specified array.
Show Display the configuration of a specified SRDF/Metro Smart DR environment.
Query Display the status of all SRDF/Metro Smart DR environments on an array. For each
environment, the display includes the status of its SRDF/Metro session, and the status of
its DR session.
SRDF/Metro session
Establish Make the devices in a SRDF/Metro session RW on the SRDF link, and start an incremental
resynchronization of data from R11 to R21. While this work is going on, only R11 is accessible to
the hosts. When both devices contain identical data, R21 is also write accessible to the host.
Restore Restore data from R21 to R11 in a SRDF/Metro session. While this work is going on R11 is
not accessible to the host, and R21 is RW accessible to the host. Once both devices contain
identical data, R11 is also write accessible to the host.
Suspend Set the SRDF link to Not Ready, and make either R11 or R21 inaccessible to the hosts. The state of the SRDF/Metro session becomes Suspended.
DR session
Establish Make the devices on the DR session RW on the SRDF link. Then start an incremental restore
from the R11 array in the SRDF/Metro session to the DR (R22) array. While this work is
going on, R11 is inaccessible to the host. R21 is accessible to the host if the state of the
SRDF/Metro session is active/active. Otherwise, R21 is inaccessible to the host. When the
synchronization is complete, the state of the DR session becomes Consistent.
Restore Restores data from R22 to R11. While this work is going on, R11 is inaccessible to the host.
Suspend Make R22 Not Ready on the SRDF link, stopping data synchronization between the Metro
session and R22. R11 remains accessible to the host. R21 is accessible to the host if the state
of the SRDF/Metro session is active/active. Otherwise, R21 is inaccessible to the host. The
state of the DR session is Suspended.
Split Make the devices in the DR session Not Ready on the SRDF link. If R11 is mapped to the
host, it remains accessible to the host. R21 remains accessible to the host if the state of the
SRDF/Metro session is active/active. Otherwise R21 is inaccessible to the host. The state of
the DR session is Inactive.
Failover Make the devices in the DR session Not Ready on the SRDF link, stopping synchronization between R11 or R21 and R22. Also, adjust R22 to allow the application to start on the DR side. The state of the DR session is Inactive.
Failback Make the devices in the DR session RW on the SRDF link and start an incremental
resynchronization from R22 to R11. Also, make the devices in the SRDF/Metro RW on the
SRDF link and begin an incremental resynchronization of data from R11 to R21.
Update R1 Make the R11 and R22 devices RW on the SRDF link and start an update of R11, while R22 is
accessible to the host.
Set mode Set the mode of the SRDF link in the DR session to either SRDF/A or Adaptive Copy Disk.
Restrictions
● The devices in each R11, R21, and R22 triple must have the same capacity.
● Devices in a SRDF/Metro Smart DR configuration cannot be:
○ vVols
○ CKD devices
○ BCV devices
○ Encapsulated devices
○ RecoverPoint devices
○ Data Domain devices
○ Part of a SRDF/STAR configuration
○ Part of a SRDF/SQAR configuration
○ Part of a data migration session
○ Enabled for MSC
Independent disaster recovery
Devices in SRDF/Metro groups can simultaneously be part of device groups that replicate data to a third, disaster-recovery site.
Either or both sides of the Metro region can be replicated. An organization can choose whichever configuration suits its business needs. The following diagram shows the possible configurations:
[Figure: independent disaster recovery configurations. In single-sided replication, either the R1 side (R11) or the R2 side (R21) of the SRDF/Metro pair replicates to an R2 device at Site C using SRDF/A or Adaptive Copy Disk. In double-sided replication, both sides of the SRDF/Metro pair replicate to their own disaster recovery arrays.]
The device names differ from a stand-alone SRDF/Metro configuration. This difference reflects the change in the devices'
function when disaster recovery facilities are in place. For instance, when the R2 side is replicated to a disaster recovery site, its
name changes to R21 because it is both the:
● R2 device in the SRDF/Metro configuration
● R1 device in the disaster-recovery configuration
When an SRDF/Metro configuration uses a witness for resilience protection, the two sides periodically renegotiate the winning and losing sides. This means that the R1 side of the pair can change based on the witness's determination of the winner. If the winning and losing sides do switch:
● An R11 device becomes an R21 device. That device was the R1 device for both the SRDF/Metro and disaster recovery
configurations. Now the device is the R2 device of the SRDF/Metro configuration but it remains the R1 device of the
disaster recovery configuration.
● An R21 device becomes an R11 device. That device was the R2 device in the SRDF/Metro configuration and the R1 device
of the disaster recovery configuration. Now the device is the R1 device of both the SRDF/Metro and disaster recovery
configurations.
Replication modes
As the diagram shows, the links to the disaster-recovery site use either SRDF/Asynchronous (SRDF/A) or Adaptive Copy Disk.
In a double-sided configuration, each of the SRDF/Metro arrays can use either replication mode.
There are several criteria that a witness considers when selecting the winner side. For example, a witness might take DR
configuration into account.
Operating environment
In a HYPERMAX OS environment, both SRDF/Metro arrays must run HYPERMAX OS 5977.945.890 or later. The disaster-
recovery arrays can run Enginuity 5876 or HYPERMAX OS 5977.691.684, and later.
In a PowerMaxOS 5978 environment, both SRDF/Metro arrays must run PowerMaxOS 5978.144.144 or later. The disaster
recovery arrays can run PowerMaxOS 5978.144.144, and later, HYPERMAX OS 5977.952.892, and later, or Enginuity
5876.288.195, and later.
SRDF/Metro configurations can consist of a mix of operating environments. In this case, the SRDF and NDM Interfamily Connectivity Information defines the valid SRDF/Metro combinations and the operating environments that are valid for the disaster recovery arrays.
Deactivate SRDF/Metro
The administrator can delete or move a subset of device pairs from an SRDF/Metro group without changing the group's status as an SRDF/Metro configuration. To terminate an SRDF/Metro configuration, the administrator removes all the device pairs from the SRDF/Metro group.
NOTE:
● Only a delete operation can remove the final device pair or set of devices from an SRDF/Metro group.
● To delete devices in a configuration that runs HYPERMAX OS, the devices cannot be Ready on the SRDF link.
● To delete the final devices in a configuration that runs PowerMaxOS those devices must be Not Ready on the SRDF link.
When all the devices in the SRDF/Metro group have been deleted, the group is no longer part of an SRDF/Metro configuration, and the SRDF/Metro configuration is terminated.
SRDF/Metro restrictions
Some restrictions and dependencies apply to SRDF/Metro configurations:
● Online Device Expansion (ODE) is available when both sides run PowerMaxOS 10 (6079) or PowerMaxOS 5978.444.444 and
later.
● The R1 and R2 devices must be identical in size.
● In an SRDF/Metro group, all the R1 devices must be on one side of the SRDF link and all the R2 devices on the other side.
● Devices can have Geometry Compatibility Mode (GCM) set only if the configuration consists of arrays that run
PowerMaxOS 5978 or later.
● Devices can have user Geometry set only if GCM is also set.
● Some operations apply to all devices in an SRDF/Metro group:
○ Setting devices Ready or Not Ready on the SRDF/Metro link. For example, there is no provision to apply an establish or
suspend operation to a subset of devices in an SRDF/Metro group.
○ Changing bias.
● An SRDF/Metro configuration contains FBA or IBM i D910 devices only. It cannot contain CKD (mainframe) devices.
● Mobility IDs are allowed in SRDF/Metro configurations only when both sides run PowerMaxOS. If either side runs
HYPERMAX OS, Mobility ID is not available. Also, both sides must use native IDs or Mobility IDs. An SRDF/Metro group
cannot contain devices that use a mix of native IDs and Mobility IDs.
Interaction restrictions
Some restrictions apply to SRDF device pairs in an SRDF/Metro configuration with TimeFinder and Open Replicator (ORS):
● Open Replicator is not supported.
● Devices cannot be BCVs.
● Devices cannot be used as the target of a TimeFinder data copy when the SRDF devices are both:
○ RW on the SRDF link
○ In the SyncInProg, ActiveActive, or ActiveBias pair state
Devices can, however, be used as the source of a TimeFinder data copy.
4
Data migration
This chapter has more detail on the data migration facilities of SRDF.
Topics:
• Introduction to data migration using SRDF
• Non-disruptive migration
• Migrating data with concurrent SRDF
• Migration-only SRDF
• Device Migration operations requirements
Introduction to data migration using SRDF
Data migration is a one-time movement of data from one array (the source) to another array (the target). Typical examples are
data center refreshes where data is moved from an old array, after which that array is retired or repurposed. Data migration is
not data movement due to replication (where the source data is accessible after the target is created) or data mobility (where
the target is continually updated).
After a data migration operation, applications that access the data reference it at the new location.
This chapter introduces the migration capabilities that are based on SRDF. These capabilities are available for open host (FBA)
systems only. Mainframe systems have data migration capabilities but they are based on other technologies.
Non-disruptive migration
Non-disruptive migration (NDM) is a method for migrating data from one array to another without application downtime. The
migration typically takes place within a data center.
NOTE: For PowerMax 2500 and 8500 arrays, NDM is part of the PowerMax Data Mobility product along with Minimally
Disruptive Migration (MDM).
NDM does not affect the operation of the application host, enabling applications to continue to run while the migration takes
place. Once the migration is complete, the application host switches to using the target array. A typical use of NDM is when a
data center has a technology refresh and replaces an existing array.
Migration from VMAX array
Migrating from a VMAX array running Enginuity 5876 uses SRDF in Pass-through mode. In this mode, the application host can
access data on both source and target devices while the migration is in progress. PowerMaxOS or HYPERMAX OS on the target
ensures that the source processes all I/O operations sent to the target.
Process
The steps in the migration process are:
1. Set up the environment – configure the infrastructure of the source and target array, in preparation for data migration.
2. On the source array, select a storage group to migrate.
3. If using NDM Updates, shut down the application associated with the storage group.
4. Create the migration session – copy the content of the storage group to the target array using SRDF. When creating the
session, optionally specify whether to move the identity of the LUNs in the storage group to the target array.
5. When the data copy is complete:
a. If the migration session did not move the identity of the LUNs, reconfigure the application to access the new LUNs on
the target array.
b. Cutover the storage group to the PowerMax, VMAX All Flash, or VMAX3 array.
c. Commit the migration session – remove resources from the source array and those used in the migration itself. The
application now uses the target array only.
6. If using NDM Updates, restart the application.
7. To migrate further storage groups, repeat steps 2 to 6.
8. After migrating all the required storage groups, remove the migration environment.
Process
Normal flow
The steps in the migration process that are followed are:
1. Set up the migration environment – configure the infrastructure of the source and target array, in preparation for data
migration.
2. On the source array, select a storage group to migrate.
3. If using Minimally Disruptive Migration (MDM) from PowerMaxOS 10 (6079), shut down the application associated with the storage group.
4. Create the migration session – copy the content of the storage group to the target array using SRDF/Metro. When creating the session, optionally specify whether to move the identity of the LUNs in the storage group to the target array. During this time, the source and target arrays are both accessible to the application host.
5. When the data copy is complete:
a. If the migration session did not move the identity of the LUNs, reconfigure the application to access the new LUNs on
the target array.
b. Commit the migration session – remove resources from the source array and those used in the migration itself.
6. If using NDM Updates, restart the application.
7. To migrate further storage groups, repeat steps 2 to 6.
8. After migrating all the required storage groups, remove the migration environment.
Alternate flow
There is an alternative process that precopies the data to the target array before making it available to the application host. The
steps in this process are:
1. Set up the migration environment – configure the infrastructure of the source and target array, in preparation for data
migration.
2. On the source array, select a storage group to migrate.
3. Use the precopy facility of NDM to copy the selected data to the target array. Optionally, specify whether to move the
identity of the LUNs in the storage group to the target array. While the data copy takes place, the source array is available to
the application host, but the target array is unavailable.
4. When the copying of the data is complete, use the Ready Target facility in NDM to make the target array available to the application host also. Then:
a. If the migration session did not move the identity of the LUNs, reconfigure the application to access the new LUNs on
the target array.
b. If using MDM, restart the application.
c. Commit the migration session: Remove resources from the source array and those resources that are used in the
migration itself. The application now uses the target array only.
5. To migrate further storage groups, repeat steps 2 to 4.
6. After migrating all the required storage groups, remove the migration environment.
Other functions
Other NDM facilities that are available for exceptional circumstances are:
● Cancel – to cancel a migration that has not yet been committed.
● Sync – to stop or start the synchronization of writes to the target array back to the source array. When stopped, the
application runs on the target array only. Used for testing.
● Recover – to recover a migration process following an error.
Other features
Other features of NDM are:
● Data can be compressed during migration to the PowerMax array
● Allows for nondisruptive revert to the source array
● There can be up to 50 migration sessions in progress simultaneously
● Does not require an additional license as NDM is part of PowerMaxOS
● The connections between the application host and the arrays use FC; the SRDF connection between the arrays uses FC or
GigE
Devices and components that cannot be part of an NDM process are:
● CKD devices
● eNAS data
● Storage Direct and FAST.X relationships along with their associated data
● Final two-site topology: after migration, the new primary array is mirrored to the original secondary array.
Figure 32. Migrating data and replacing the original primary array (R1)
[Figure: Array A replicates to Array B while an SRDF migration leg copies the data to Array C. After the migration, Array A replicates to Array C and the original secondary array (Array B) is removed.]
Figure 33. Migrating data and removing the original secondary array (R2)
[Figure: the original R1 at Site A and R2 at Site B are replaced. SRDF migration legs copy the data to the new arrays, and after the migration the new primary array replicates to the new secondary array.]
Figure 34. Migrating data and replacing the original primary (R1) and secondary (R2) arrays
Migration-only SRDF
In some cases, you can migrate data with full SRDF functionality, including disaster recovery and other advanced SRDF features.
In cases where full SRDF functionality is not available, you can move the data using migration-only SRDF.
Table 12. Limitations of the migration-only mode (continued)
SRDF operations or features Whether supported during migration
Out-of-family Non-Disruptive Not available
Upgrade (NDU)
5
SRDF I/O operations
This chapter shows how SRDF handles write and read operations. In addition, there is information on the performance and
resilience features of SRDF/A.
Topics:
• SRDF write operations
• SRDF read operations
• SRDF/A resilience and performance features
[Figure: SRDF/S write operation. (1) The host writes to cache on the R1 array. (2) The write is transmitted across the SRDF/S links to cache on the R2 array. (3) The R2 array acknowledges receipt. (4) The R1 array presents the ending status to the host, and the drive emulations on each array destage the data.]
Enginuity 5876
If either array in the solution is running Enginuity 5876, SRDF/A operates in legacy mode. There are two cycles on the R1 side,
and two cycles on the R2 side:
● On the R1 side:
○ One Capture
○ One Transmit
● On the R2 side:
○ One Receive
○ One Apply
Each cycle switch moves the delta set to the next cycle in the process.
A new capture cycle cannot start until both the transmit cycle on the R1 side and the apply cycle on the R2 side are complete.
Cycle switching can occur within the preset Minimum Cycle Time. However, it can also take longer since it depends on both:
● The time taken to transfer the data from the R1 transmit cycle to the R2 receive cycle
● The time taken to destage the R2 apply cycle
PowerMaxOS or HYPERMAX OS
SRDF/A SSC sessions where both arrays are running PowerMaxOS or HYPERMAX OS have one or more Transmit cycles on the
R1 side (multicycle mode).
The following diagram shows multicycle mode:
● Multiple cycles (one capture cycle and multiple transmit cycles) on the R1 side, and
● Two cycles (receive and apply) on the R2 side.
[Figure: SRDF/A multicycle mode. The R1 side has a capture cycle (N) and a transmit queue of depth M holding cycles N-1 through N-M; the R2 side has a receive cycle (N-M) and an apply cycle (N-M-1).]
In multicycle mode, each cycle switch creates a new capture cycle (N) and the existing capture cycle (N-1) is added to the
queue of cycles (N-1 through N-M cycles) to be transmitted to the R2 side by a separate commit action.
Only the data in the last transmit cycle (N-M) is transferred to the R2 side during a single commit.
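A toy Python model of this behavior, with a deque standing in for the transmit queue (the cycle numbering and data structures are illustrative only):

```python
# Toy model of SRDF/A multicycle mode: each cycle switch starts a new capture cycle
# and queues the previous one for transmission; a commit sends only the oldest
# transmit cycle (N-M) to the R2 side.
from collections import deque


class MulticycleSession:
    def __init__(self):
        self.capture = set()           # capture cycle N (collects new host writes)
        self.transmit_queue = deque()  # transmit cycles N-1 (left) .. N-M (right)

    def host_write(self, track):
        self.capture.add(track)

    def cycle_switch(self):
        self.transmit_queue.appendleft(self.capture)  # the old capture cycle joins the queue
        self.capture = set()                          # a new capture cycle N begins

    def commit(self):
        # Only the oldest transmit cycle (N-M) is transferred to R2 in a single commit.
        return self.transmit_queue.pop() if self.transmit_queue else set()


session = MulticycleSession()
for track in (1, 2):
    session.host_write(track)
session.cycle_switch()
session.host_write(3)
session.cycle_switch()
print(session.commit())   # {1, 2} -- the oldest transmit cycle goes to the R2 side first
```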
Enginuity 5876
SRDF/A SSC sessions that include an array running Enginuity 5876 have one Capture cycle and one Transmit cycle on the R1
side (legacy mode).
The following diagram shows legacy mode:
● 2 cycles (capture and transmit) on the R1 side, and
● 2 cycles (receive and apply) on the R2 side
[Figure: SRDF/A legacy mode. The primary site has a capture cycle (N) and a transmit cycle (N-1); the secondary site has a receive cycle (N-1) and an apply cycle (N-2).]
[Figure: SRDF/A MSC mode. The devices in the SRDF consistency group on the R1 side share a capture cycle (N) and a transmit queue of depth M (cycles N-1 through N-M); the R2 side has a receive cycle (N-M) and an apply cycle (N-M-1).]
SRDF cycle switches all SRDF/A sessions in the MSC group at the same time. All sessions in the MSC group have the same:
● Number of cycles outstanding on the R1 side
● Transmit queue depth (M)
In SRDF/A MSC sessions, the array's operating environment performs a coordinated cycle switch during a window of time when
no host writes are being completed.
So that it can establish consistency, MSC temporarily suspends write operations from the application host across all SRDF/A sessions. MSC resumes those write operations once there is consistency. A 12-second timeout is associated with this suspension, to protect against a failure of MSC while the cycle switch is in progress.
Enginuity 5876
SRDF/A MSC sessions that include an array running Enginuity 5876 have only two cycles on the R1 side (legacy mode).
In legacy mode, the following conditions must be met before an MSC cycle switch can take place:
● The primary array’s transmit delta set must be empty.
● The secondary array’s apply delta set must have completed. The N-2 data must be marked write pending for the R2 devices.
To achieve consistency through cycle switching, MSC suspends write operations from the application host in the same way as it does when both arrays run PowerMaxOS 5978 or HYPERMAX OS 5977. It also uses the 12-second timeout to protect against the failure of MSC while synchronization is in progress.
[Figure: cascaded SRDF configuration. The host writes to the R1 device, which replicates with SRDF/S to the R21 device, which in turn replicates with SRDF/A or Adaptive Copy Disk to the R2 device.]
PowerMaxOS or HYPERMAX OS
Arrays running PowerMaxOS or HYPERMAX OS cannot service SRDF/A read I/Os with Delta Set Extension (DSE). So, spillover
is not invoked during a SRDF/A restore operation until that restore operation is complete. SRDF/A cache data offloading
contains more information about DSE.
NOTE: PPRC is not supported on arrays running PowerMaxOS 10 and HYPERMAX OS 5977.
Tunable cache
The storage administrator can set the SRDF/A maximum cache utilization threshold to a percentage of the system write
pending limit for an individual SRDF/A session in single session mode and multiple SRDF/A sessions in single or MSC mode.
When the SRDF/A maximum cache utilization threshold or the system write pending limit is exceeded, the array exhausts its
cache.
By default, the SRDF/A session drops if array cache is exhausted. However, the SRDF/A session can continue to run for a
user-defined period. The storage administrator can assign priorities to sessions, keeping SRDF/A active for as long as cache
resources allow. If the condition is not resolved at the expiration of the user-defined period, the SRDF/A session drops.
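As a simple illustration of this behavior, the following Python sketch models the drop decision; the field names, units, and grace-period handling are assumptions, not PowerMaxOS settings:

```python
# Illustration of the tunable SRDF/A cache threshold: the session tolerates an
# exceeded threshold for a user-defined period before it drops.
def evaluate_session(cache_used_pct, threshold_pct, seconds_over_threshold, grace_seconds=0):
    if cache_used_pct <= threshold_pct:
        return "active"                                    # within the cache utilization threshold
    if seconds_over_threshold <= grace_seconds:
        return "active (within the user-defined period)"   # session kept alive for now
    return "dropped"                                       # condition not resolved in time


print(evaluate_session(cache_used_pct=72, threshold_pct=60,
                       seconds_over_threshold=20, grace_seconds=30))
```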
The features described below help to prevent SRDF/A from exceeding its maximum cache utilization threshold.
PowerMaxOS or HYPERMAX OS
PowerMaxOS and HYPERMAX OS offload data into a Storage Resource Pool. One or more Storage Resource Pools are
pre-configured before installation and used by a variety of functions. DSE can use a Storage Resource Pool pre-configured
specifically for DSE. If no such pool exists, DSE can use the default Storage Resource Pool. All SRDF groups on the array use
the same Storage Resource Pool for DSE. DSE requests allocations from the Storage Resource Pool only when DSE is activated.
The Storage Resource Pool used by DSE is sized based on your SRDF/A cache requirements. DSE is automatically enabled.
Transmit Idle
During short-term network interruptions, the transmit idle state indicates that SRDF/A is still tracking changes but is unable to
transmit data to the remote side.
Write folding
Write folding improves the efficiency of your SRDF links.
When multiple updates to the same location arrive in the same delta set, the SRDF emulations send only the most current data across the SRDF links.
Write folding decreases network bandwidth consumption and the number of I/Os processed by the SRDF emulations.
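A minimal Python illustration of write folding within a single delta set, using a dict keyed by track (the data values are arbitrary):

```python
# Write folding: within one delta set, repeated writes to the same track are folded so
# only the most recent data crosses the SRDF links.
def fold_writes(writes):
    delta_set = {}
    for track, data in writes:      # later writes to the same track replace earlier ones
        delta_set[track] = data
    return delta_set


writes = [(10, "v1"), (11, "a"), (10, "v2"), (10, "v3")]
print(fold_writes(writes))          # {10: 'v3', 11: 'a'} -- three writes to track 10 fold into one
```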
Write pacing
SRDF/A write pacing reduces the likelihood that an active SRDF/A session drops due to cache exhaustion. Write pacing
dynamically paces the host I/O rate so it does not exceed the SRDF/A session's service rate. This prevents cache overflow on
both the R1 and R2 sides.
Use write pacing to maintain SRDF/A replication with reduced resources when replication is more important for the application
than minimizing write response time.
You can apply write pacing to SRDF groups, or to individual SRDF device pairs that have TimeFinder/Snap or TimeFinder/Clone sessions off the R2 device.
NOTE: Write pacing is not available in configurations that include arrays running PowerMaxOS 10 (6079).
Group pacing
SRDF/A group pacing adjusts the pace of host writes to match the SRDF/A session’s link transfer rate. When host I/O rates
spike, or slowdowns make transmit or apply cycle times longer, group pacing extends the host write I/O response time to match
slower SRDF/A service rates.
When DSE is activated for an SRDF/A session, host-issued write I/Os are paced so their rate does not exceed the rate at which
DSE can offload the SRDF/A session’s cycle data to the DSE Storage Resource Pool.
Group pacing behavior varies depending on whether the maximum pacing delay is specified:
● If the maximum write pacing delay is not specified, SRDF adds up to 50 ms to the host write I/O response time to match the
speed of either the SRDF links or the apply operation on the R2 side, whichever is slower.
● If the maximum write pacing delay is specified, SRDF adds up to the user-specified maximum write pacing delay to keep the
SRDF/A session running.
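The following Python sketch caricatures the pacing decision: the added delay grows with how far the host write rate exceeds the SRDF/A service rate, capped at 50 ms or at the user-specified maximum write pacing delay. The scaling formula is purely illustrative, not the actual algorithm:

```python
# Sketch of the group pacing delay decision.
def pacing_delay_ms(host_write_rate, srdf_service_rate, max_user_delay_ms=None):
    cap = max_user_delay_ms if max_user_delay_ms is not None else 50  # default cap of 50 ms
    if host_write_rate <= srdf_service_rate:
        return 0                                   # the links/apply keep up; no pacing needed
    overload = host_write_rate / srdf_service_rate
    return min(cap, int((overload - 1) * 10))      # illustrative scaling only


print(pacing_delay_ms(host_write_rate=900, srdf_service_rate=100))   # 50 (default 50 ms cap)
print(pacing_delay_ms(900, 100, max_user_delay_ms=80))               # 80 (user-specified cap)
```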
Enginuity 5876
SRDF/A device pacing applies a write pacing delay for individual SRDF/A R1 devices whose R2 counterparts participate in
TimeFinder copy sessions.
Device pacing avoids high SRDF/A cache utilization when the R2 devices servicing both the SRDF/A and TimeFinder copy requests experience slowdowns.
Device pacing behavior varies depending on whether the maximum pacing delay is specified:
● If the maximum write pacing delay is not specified, SRDF adds up to 50 milliseconds to the overall host write response time
to keep the SRDF/A session active.
● If the maximum write pacing delay is specified, SRDF adds up to the user-defined maximum write pacing delay to keep the
SRDF/A session active.
Device pacing can be activated on the second hop (R21 -> R2) of cascaded SRDF and cascaded SRDF/Star topologies.
Device pacing may not take effect if all SRDF/A links are lost.
Solutions Enabler
SYMCLI commands are invoked from a management host, either interactively on the command line, or using scripts.
SYMCLI is built on functions that use system calls to generate low-level I/O SCSI commands. Configuration and status information is maintained in a host database file, reducing the number of inquiries from the host to the arrays.
Use SYMCLI to do the following (a scripting example follows this list):
● Configure array software (for example, SnapVX snapshots and clones, SRDF, Open Replicator).
● Monitor device configuration and status.
● Perform control operations on devices and data objects.
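As an example of script-based use, the short Python sketch below shells out to two commonly used SYMCLI commands, symcfg list and symrdf query. The device group name is a placeholder, and the exact options available depend on your Solutions Enabler version and configuration, so treat this as a pattern rather than a recipe.

# Illustrative SYMCLI scripting from a management host (group name is an example).
import subprocess

def run_symcli(cmd):
    """Run a SYMCLI command and return its output text; raises if the command fails."""
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout

# List the arrays visible to this host (requires SYMCLI installed and in the PATH).
print(run_symcli(["symcfg", "list"]))

# Query the SRDF pair states for an example device group named "ProdDG".
print(run_symcli(["symrdf", "-g", "ProdDG", "query"]))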
Solutions Enabler also has a Representational State Transfer (REST) API. Use this API to access performance and configuration
information, and provision storage arrays. It can be integrated with any tool that supports REST, such as web browsers and
programming platforms that can issue HTTP requests.
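The general calling pattern looks like the hedged Python sketch below. The host name, port, credentials, and resource path are placeholders, so check the REST API documentation listed in the More information chapter for the exact endpoints and authentication options that your version supports.

# Generic REST call pattern against a management server (all values are placeholders).
import requests

BASE_URL = "https://management-host.example.com:8443"   # hypothetical server and port
ENDPOINT = "/univmax/restapi/version"                    # example resource path

response = requests.get(
    BASE_URL + ENDPOINT,
    auth=("monitor_user", "example_password"),   # placeholder credentials
    verify=False,                                # lab use only, for self-signed certificates
    timeout=30,
)
response.raise_for_status()
print(response.json())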
Unisphere
Unisphere is a web-based application that provides provisioning, management, and monitoring of arrays.
With Unisphere you can perform the following tasks:
● SRDF: Configure, establish, and split SRDF devices, including:
○ SRDF/A
○ SRDF/S
○ Concurrent SRDF/A
○ Concurrent SRDF/S
● TimeFinder:
○ Create point-in-time copies of full volumes or individual datasets.
○ Create point-in-time snapshots of images.
Extended features
SRDF/TimeFinder Manager for IBM i extended features provide support for the IBM independent ASP (IASP) functionality.
IASPs are sets of switchable or private auxiliary disk pools (up to 223) that can be brought online/offline on an IBM i host
without affecting the rest of the system.
When combined with SRDF/TimeFinder Manager for IBM i, IASPs let you control SRDF or TimeFinder operations on arrays
attached to IBM i hosts, including:
● Display and assign TimeFinder SnapVX devices.
● Run SRDF or TimeFinder commands to establish and split SRDF or TimeFinder devices.
● Present one or more target devices containing an IASP image to another host for business continuity (BC) processes.
You can access extended features control operations:
● From the SRDF/TimeFinder Manager menu-driven interface.
● From the command line using SRDF/TimeFinder Manager commands and associated IBM i commands.
Mainframe Enablers
Mainframe Enablers (MFE) is a suite of products for managing and monitoring Dell storage systems in a mainframe environment.
The entire suite consists of:
● SRDF Host Component for z/OS
● ResourcePak Base for z/OS
● Autoswap for z/OS
● Consistency Groups for z/OS
● TimeFinder SnapVX
● Data Protector for z Systems (zDP)
● TimeFinder/Clone Mainframe Snap Facility
● TimeFinder/Mirror for z/OS
● TimeFinder Utility
In the context of SRDF, only the SRDF Host Component for z/OS, TimeFinder/Mirror for z/OS, and the following components of
ResourcePak Base for z/OS are relevant:
● SRDF/A Monitor
● WPA Monitor
● SRDF/AR
SRDF Host Component for z/OS
SRDF Host Component for z/OS is a z/OS subsystem for controlling SRDF processes and monitoring SRDF status using
commands issued from a host. With the SRDF Host Component you can manage these SRDF variants:
● SRDF/S
● SRDF/A
● SRDF/DM
● SRDF/AR
● SRDF/CG
● SRDF/Star
● SRDF/SQAR
You can issue SRDF Host Component commands to both local and remote storage systems. Commands destined for remote
storage systems are transmitted through local storage systems using SRDF links. Configuration and status information can be
viewed for each device on each storage system that contains SRDF devices.
You can use the SRDF Host Component through batch commands or through the system console.
SRDF/A Monitor
SRDF/A Monitor is a facility for managing and monitoring SRDF/A operations. It is a component of the ResourcePak Base for
z/OS. SRDF/A Monitor:
● Discovers storage systems that are running SRDF/A and monitors the state of the SRDF/A groups
● Collects and writes System Management Facility (SMF) data about the SRDF/A groups
● Optionally, calls a user exit to perform user-defined actions when it detects a change in the state of an SRDF/A group
● Optionally, invokes SRDF/A automatic recovery procedures to recover a dropped SRDF/A session
WPA Monitor
SRDF/A Write Pacing extends the availability of SRDF/A by enabling you to prevent conditions that can result in cache
overflow. The SRDF/A Write Pacing Monitor, a component of the ResourcePak Base for z/OS, gathers information about write
pacing activities in a storage system. The data is collected for each:
● SRDF/A group by the storage system
● SRDF device by the SRDF group and the storage system
The data includes:
● Changes in the ARMED state by device
● Total paced delay by device
● Total paced track count by device
● Changes in the ENABLED/SUPPORTED/ARMED/PACED state for the SRDF/A group
● Total paced delay for the SRDF/A group
● Total paced track count for the SRDF/A group
The WPA Monitor writes the collected information as SMF records.
More information
This chapter shows where further information is available on some of the subjects that are mentioned in other chapters.
All documents are available from Dell Technologies Online Support (https://www.dell.com/support/home).
Topics:
• Solutions Enabler CLI
• Unisphere
• Mainframe Enablers
• GDDR
• SRDF/TimeFinder Manager for IBM i
• SRDF/Metro vWitness
• SRDF Interfamily Compatibility
• Storage arrays
Solutions Enabler CLI
Solutions Enabler SRDF Family CLI User Guide
Unisphere
Unisphere for PowerMax Product Guide
Unisphere for PowerMax Online Help
Unisphere for VMAX Online Help
Unisphere for PowerMax REST API Concepts and Programmer's Guide
Unisphere for VMAX REST API Concepts and Programmer's Guide
Mainframe Enablers
SRDF Host Component for z/OS Product Guide
ResourcePak Base for z/OS Product Guide (contains information about SRDF/A Monitor, WPA Monitor, and SRDF/AR process
management)
TimeFinder/Mirror for z/OS Product Guide (contains information about configuring, managing, and monitoring SRDF/AR)
AutoSwap for z/OS Product Guide
Consistency Groups for z/OS Product Guide
GDDR
GDDR for SRDF/Star Product Guide
GDDR for SRDF/Star with AutoSwap Product Guide
GDDR for SRDF/Star-A Product Guide
GDDR for SRDF/SQAR with AutoSwap Product Guide
GDDR for SRDF/A Product Guide
GDDR for SRDF/S with AutoSwap Product Guide
GDDR for SRDF/S with ConGroup Product Guide
SRDF/Metro vWitness
SRDF/Metro vWitness Configuration Guide
Storage arrays
PowerMax Family Product Guide
VMAX All Flash Product Guide
VMAX 3 Product Guide
Symmetrix VMAX Family with Enginuity Product Guide