EMC VPLEX: Leveraging Native and Array-Based Copy Technologies
Abstract
This white paper provides best practices, planning guidance, and
use cases for using array-based and native replication
solutions with EMC® VPLEX™ Local and EMC VPLEX
Metro.
Copyright © 2016 EMC Corporation. All Rights Reserved.
EMC believes the information in this publication is accurate as of its publication date. The
information is subject to change without notice.
The information in this publication is provided “as is”. EMC Corporation makes no
representations or warranties of any kind with respect to the information in this
publication, and specifically disclaims implied warranties of merchantability or fitness for
a particular purpose.
Use, copying, and distribution of any EMC software described in this publication requires
an applicable software license.
For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on
EMC.com.
All other trademarks used herein are the property of their respective owners.
Part Number H14293
Preserving investments in array-based storage replication technologies like EMC MirrorView™, TimeFinder™,
SnapView™, and SRDF™ is crucial in today’s IT environments. This paper outlines the considerations and
methodologies for array-based replication technologies within VPLEX Local and Metro environments.
Procedural examples for various replication scenarios, examples of host-based scripting, and VPLEX native
copy capabilities are examined. The key conclusion of the paper is that array-based replication technologies
continue to deliver their original business value in VPLEX environments.
Audience
This white paper is intended for technology architects, storage administrators, and system administrators
who are responsible for architecting, creating, managing, and using the IT environments that utilize EMC
VPLEX technologies. It is assumed that the reader is familiar with EMC VPLEX and storage array-based
replication technologies.
Note: Using the built-in copy capability of VPLEX opens the door to creating copies of
virtual volumes from heterogeneous storage arrays within and across data centers.
Terminology
Table 1: Operational Definitions
Storage volume: LUN or unit of storage presented by the back-end arrays.
Metadata volume: System volume that contains metadata about the devices, virtual volumes, and cluster configuration.
Virtual volume: Unit of storage presented by the VPLEX front-end ports to hosts.
Director: The central processing and intelligence of the VPLEX solution. There are redundant (A and B) directors in each VPLEX Engine.
VPLEX cluster: A collection of VPLEX engines in one rack, using redundant, private Fibre Channel connections as the cluster interconnect.
Copy: An independent full disk copy of another device.
Source device: Standard or primary array device. Typically used to run production applications.
VPLEX Technology
EMC VPLEX Virtual Storage
EMC VPLEX virtualizes traditional physical storage array devices by applying three layers of logical
abstraction. The logical relationships between each abstraction layer are shown below in Figure 1.
Extents are the mechanism used by VPLEX to divide storage volumes. Extents may be all or part of the
underlying storage volume. EMC VPLEX aggregates extents and applies various RAID geometries (i.e.
RAID-0, RAID-1, or RAID-C) to them within the device layer. Devices are constructed using one or more
extents, and can be combined into more complex RAID schemes and device structures as desired. At
the top layer of the VPLEX storage structures are virtual volumes. Virtual volumes are created from
devices and inherit the size of the underlying device. Virtual volumes are the elements VPLEX exposes to
hosts via its FE ports. Access to virtual volumes is controlled using storage views. Storage views are
analogous to Auto-provisioning Groups on EMC Symmetrix® and VMAX® or to storage groups on EMC CX4/VNX arrays.
Note: Snapshots or any thinly provisioned storage volumes should be identified as thin devices
during the VPLEX claiming process.
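For reference, a minimal VPLEX CLI sketch of building this layering by hand, using hypothetical object names (the exact option flags are assumptions that vary by GeoSynchrony release, so verify them against the CLI guide):
VPlexcli:/> extent create -d sv_array1_lun5
VPlexcli:/> local-device create --geometry raid-0 --extents extent_sv_array1_lun5_1 --name dev_app_01
VPlexcli:/> virtual-volume create -r dev_app_01
The resulting virtual volume inherits the capacity of dev_app_01 and can then be added to a storage view for host access.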
These copies can either be presented back through VPLEX or directly from the underlying storage
array to a backup host, to a test environment, or even back to the original host. For our discussion, it
is assumed the copies will be presented back through VPLEX to a host. For this basic use case
example each array-based copy has a one-to-one VPLEX virtual volume configuration (VPLEX device
capacity = extent capacity = array storage volume capacity) and has a single extent VPLEX RAID-0 or
two extent RAID-1 device geometry. The process for managing more complex geometry array-based
copies (for example, a copy that is RAID-1 or distributed RAID-1) is covered in the advanced use case
discussion at the end of this section.
Potential host, VPLEX, and storage array connectivity combinations for the basic use case are
illustrated in Figure 3 and Figure 4. Figure 3 shows a single array with a single extent (RAID-0
geometry) source and single extent (RAID-0) array based copy. Both the source and the copy storage
volumes pass through VPLEX and then back to hosts. The array-based copy is presented to a
separate second host.
In Figure 4, the source volume has single extent RAID-1 geometry. Each mirror leg is a single extent
that is equal in size to the underlying storage volume from each array. The difference in this topology
is that there is an option to use the array-based copy services provided by array 1 and/or by array 2.
The choice of which array to copy from can be based on available capacity, array workload or,
perhaps, feature licensing. The objective remains the same as with Figure 3 -- to create a copy that
has single extent (RAID-0) geometry.
2. Using the VPLEX CLI or REST API, check for valid copy conditions to ensure data consistency:
Note: Each of the following checks is typically scripted or built into code that orchestrates
the overall copy process on the array; an example command sequence is shown after this list.
a. Confirm a VPLEX ndu is not in progress. Using the VPLEX CLI issue the ‘ndu status’ command
and confirm that the response is ‘No firmware, BIOS/POST, or SSD upgrade is in progress.’
b. Confirm the device is healthy. Issue the ‘ll’ command from the /clusters/<cluster name>/virtual-
volumes/<virtual volume name> context for each volume(s) to be copied.
i. Confirm the underlying device status is not marked ‘out of date’ or in a ‘rebuilding’ state.
ii. Confirm Health Status is ‘ok’
iii. Confirm Operational Status is ‘ok’
Note: Consistency cannot be guaranteed if host I/O is ongoing while a RAID-1 or distributed
RAID-1 device is being snapped or cloned.
c. Confirm the source VPLEX virtual volume device geometry is not RAID-C. Device geometry can be
determined by issuing ll at the /clusters/<cluster name>/devices/<device name> context.
d. Confirm each volume is 1:1 mapped (single extent) RAID-0 or 1:1 mapped (two extent) local
RAID-1. Distributed RAID-1 device legs must be a combination of RAID-0 (single extent) and/or
RAID-1 (two extent) device geometries.
e. Confirm the device is not being protected by RecoverPoint. Issue the ‘ll’ command from the
/clusters/<cluster name>/virtual-volumes/<virtual volume name> context and check
‘recoverpoint-protection-at’ is set to [] and ‘recoverpoint-usage’ is set to ‘-‘.
f. Confirm VPLEX volumes to be copied do not have ‘remote’ locality (from same VPLEX cluster).
Issue the ‘ll’ command against the /clusters/<local cluster name>/virtual-volumes/<virtual
volume name> context and confirm locality is ‘local’ or ‘distributed’.
g. Ensure virtual volumes are members of the same VPLEX consistency group and the same array-
based consistency group (if available). In most cases all members of the consistency group
should be copied together. Consistency group membership can be determined by issuing ll from
the /clusters/<cluster name>/consistency-groups/<consistency group name> context.
Note: Best practice is to set the VPLEX consistency group detach rule (winning site) to
match site where array based copies are made.
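As an illustration, the checks in step 2 might be driven by a script that issues a short sequence of VPLEX CLI queries such as the following (hypothetical cluster, volume, and consistency group names; this is a sketch of the queries only, not a complete validation script):
VPlexcli:/> ndu status
VPlexcli:/> ll /clusters/cluster-1/virtual-volumes/app_vol_01
VPlexcli:/> ll /clusters/cluster-1/devices/device_app_vol_01
VPlexcli:/> ll /clusters/cluster-1/consistency-groups/app_cg
The script then parses each response for the health, geometry, locality, RecoverPoint, and consistency group conditions listed above before triggering the array-side copy.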
3. Follow standard array-based procedure(s) to generate the desired array-based copies (for example, per the SnapView,
TimeFinder, or XtremIO user or CLI guides). The most up-to-date documentation for EMC products is
available at https://support.emc.com.
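For example, in a VMAX3 environment with TimeFinder SnapVX, the array-side copy in step 3 might be taken along these lines (hypothetical SID, storage group, and snapshot names; the syntax is an assumption to confirm against the Solutions Enabler/TimeFinder documentation):
symsnapvx -sid 1234 -sg app_prod_sg -name vplex_copy_1 establish
The snapshot is then linked or otherwise presented to target devices per the TimeFinder procedures before being claimed through VPLEX in the steps that follow.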
Once the copy is created on the storage array, it can be presented (zoned and lun masked) directly from the
storage array back to a test or backup host. Most often, however, the preference is to present the array
based copy back through VPLEX to a host. Since a single array contains the copy(s) / snapshot(s), the
resulting virtual volume(s) will always be a local virtual volume(s). For VPLEX presented copies, perform
these additional VPLEX specific steps:
4. Confirm the array-based copies are visible to VPLEX BE ports. As necessary, perform storage array to
VPLEX masking/storage group modification to add the copies.
5. Perform one-to-one encapsulation through the Unisphere for VPLEX UI or VPLEX CLI:
a. Claim storage volumes from the storage array containing the copies
b. Identify the logical units from the array containing the array-based copies. One way to do
this is to use the host lun number assigned by the array to each device in the array masking
view (VMAX/XtremIO) or storage group (CX4/VNX).
c. Create VPLEX virtual volumes from the copies using the VPLEX UI or CLI:
i. Use the ‘Create Virtual Volumes’ button from the Arrays context within the UI
or
ii. Create single extent, single member RAID-0 geometry device, and virtual volume(s)
with the ‘storage-tool-compose’ (5.3 and higher) command from the VPLEX CLI (see the sketch after step 6)
6. Present VPLEX virtual volumes built from array-based copies to host(s)
a. As necessary, create VPLEX storage view(s)
b. Add virtual volumes built from array-based copies to storage view(s)
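A condensed, hedged sketch of steps 5 and 6 from the VPLEX CLI, using hypothetical names (the storage-tool-compose arguments shown are assumptions and vary by release):
VPlexcli:/> storage-tool-compose --name copy_app_vol_01 --geometry raid-0 --storage-volumes copy_sv_01
VPlexcli:/clusters/cluster-1/exports/storage-views> addvirtualvolume -v backup_view -o (0,<new virtual volume>) -f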
This process follows the traditional best practices for refreshing non-VPLEX array-based copies, such as those
provided by TimeFinder, SnapView, and XtremIO Snapshots. During the resynchronization
process, the storage array writes new data to the copy outside of the host and VPLEX IO path. It is very
important that the host does not have the copies mounted and in-use. Failure to stop applications and to
clear both the host read cache and the VPLEX read cache prior to re-synchronization may lead to data
consistency issues on the copies.
Technical Note: These steps require using the VPLEX CLI or REST API only.
To re-synchronize (update) VPLEX virtual volumes based on array-based copies:
1. Confirm the device(s) you wish to resynchronize is/are not RecoverPoint protected. Issue the ‘ll’
command from the /clusters/<cluster name>/virtual-volumes/<virtual volume name> context.
Check ‘recoverpoint-protection-at’ is set to [] and ‘recoverpoint-usage’ is set to ’-‘. If the
virtual volume is a member of a consistency group, the consistency group context will
indicate if it is RecoverPoint protected or not as well.
2. Confirm a VPLEX ndu is not in progress. Using the VPLEX CLI issue the ‘ndu status’ command
and confirm that the response is ‘No firmware, BIOS/POST, or SSD upgrade is in progress.’
3. Shut down any applications using the VPLEX virtual volumes that contain the array-based
copies to be re-synchronized. Unmount the associated virtual volumes from the host. The
key here is to force the host to invalidate its own read cache and to prevent any new writes
during the resynchronization of the array-based copy.
4. Invalidate the VPLEX read cache. There are several options to achieve invalidation depending on your
VPLEX GeoSynchrony code version:
A. For pre-5.2 code, remove the virtual volume(s) from all storage views. Make note of the virtual
volume lun numbers within the storage view prior to removing them. You will need this
information in step 8 below.
or
B. For 5.2 and higher code,
1. Use virtual-volume cache-invalidate to invalidate an individual volume:
VPlexcli:/> virtual-volume cache-invalidate <virtual volume>
or
Use consistency-group cache-invalidate to invalidate an entire VPLEX consistency
group.
VPlexcli:/> consistency-group cache-invalidate <consistency group>
cache-invalidate-status
-----------------------
director-1-1-A status: in-progress
result: -
cause: -
cache-invalidate-status
-----------------------
director-1-1-A status: completed
result: successful
cause: -
5. Identify the source storage volumes within the array you wish to resynchronize. Follow your normal array
resynchronization procedure(s) to refresh the desired array-based copies.
6. Confirm the IO Status of storage volumes based on array-based copies is “alive” by doing a long listing
against the storage-volumes context for your cluster.
For example:
7. Confirm VPLEX back-end paths are healthy by issuing the “connectivity validate-be”
command from the VPLEX CLI. Ensure that there are no errors or connectivity issues to the
back-end storage devices. Resolve any error conditions with the back-end storage before
proceeding.
Example output showing desired back-end status:
8. For pre-5.2 code, restore access to virtual volumes based on array-based copies for the host(s). If
you removed the virtual volume from a storage view, add the virtual volume back, specifying the
original LUN number (noted in step 4) using the VPLEX CLI:
/clusters/<cluster name>/exports/storage-views> addvirtualvolume -v
storage_view_name/ -o (lun#, virtual_volume_name) -f
9. As necessary, rescan devices and restore paths (for example, powermt restore) on hosts.
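On a Linux host with PowerPath, for example, the rescan and path restore in the final step might look like the following (host-side commands; adjust for your operating system and multipathing software):
# rescan all SCSI hosts so the refreshed LUNs are rediscovered
for h in /sys/class/scsi_host/host*/scan; do echo "- - -" > "$h"; done
# bring any dead PowerPath paths back online
powermt restore dev=all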
Technical Note: VPLEX does not alter operating system limitations relative to array-based
copies. When presenting an array-based copy back to the same host, you will need to
confirm both the host operating system and the logical volume manager support such an
operation. If the host OS does not support seeing the same device signature or logical
volume manager identifier, VPLEX will not change this situation.
This section assumes VPLEX is providing RAID-1 device geometry for the array-based copy. The VPLEX
copy virtual volume consists of one mirror leg that contains an array-based copy and a second mirror leg
that may (Figure 7) or may not (Figure 6) contain an array-based copy.
Note: Array based copies on two different arrays are feasible, but consistency across
disparate arrays cannot be assumed due to differences in things such as copy timing and
copy performance. That said, some array management software interfaces do have a
concept of consistency across arrays. See individual array administrative guides for
further details.
For some use cases, it may be desirable to create copies of each mirror leg to protect against the loss of
an array or loss of connectivity to an array. In addition, the second copy can be located within the same
array or a separate second array. Figure 6 illustrates a local VPLEX device protected using an array-based
copy with RAID-1 geometry. This configuration applies to both array-based copies within a local RAID-1
device and to array-based copies that make up distributed (VPLEX Metro) RAID-1 devices. The steps that
follow are critical to ensure proper mirror synchronization (for the non-copy leg) and to ensure each
virtual volume’s read cache is properly updated.
Prerequisites
This section assumes you are using or planning to use distributed or local RAID-1 VPLEX virtual volumes
built with at least one mirror leg that is an array-based copy. In addition, the VPLEX virtual volumes must
possess both of the following attributes:
Be comprised of devices that have a one-to-one storage volume pass-through configuration to VPLEX
(device capacity = extent capacity = storage volume capacity).
Have a single device with single-extent RAID-1 (two single-extent devices being mirrored) geometry.
2. Using the VPLEX CLI or REST API, check for valid copy conditions to ensure data consistency:
Note: Each of the following checks is typically scripted or built into code that orchestrates the
overall copy process on the array.
a. Confirm a VPLEX ndu is not in progress. Using the VPLEX CLI issue the ‘ndu status’ command
and confirm that the response is ‘No firmware, BIOS/POST, or SSD upgrade is in progress.’
b. Confirm the device is healthy. Issue the ‘ll’ command from the /clusters/<cluster
name>/virtual-volumes/<virtual volume name> context for each volume(s) to be copied.
i. Confirm the underlying device status is not marked ‘out of date’ or in a ‘rebuilding’
state.
ii. Confirm Health Status is ‘ok’
iii. Confirm Operational Status is ‘ok’
c. Confirm the underlying device geometry is not RAID-C. Device geometry can be determined
by issuing ll at the /clusters/<cluster name>/devices/<device name> context.
d. Confirm each volume is 1:1 mapped (single extent) RAID-0 or 1:1 mapped (two extent) local
RAID-1. Distributed RAID-1 device legs must be a combination of RAID-0 (single extent)
and/or RAID-1 (two extent) device geometries.
Note: VPLEX consistency group membership should align with array based consistency
group membership whenever possible.
h. For RAID-1 or distributed RAID-1 based virtual volumes, confirm underlying storage volume
status is not failed or in an error state. Issue the ‘ll’ command from the /clusters/<cluster
name>/devices context or from /distributed-storage/distributed-devices/<distributed device
name>/components context.
i. For distributed RAID-1, confirm WAN links are not down. Issue the ‘cluster status’ command
and confirm ‘wan-com’ status is ‘ok’ or ‘degraded’. If WAN links are completely down,
confirm the array-based copy is being made at the winning site.
Note: Best practice is to set the consistency group detach rule (winning site) to match site where
array based copy is made.
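As an illustration of this best practice, the detach rule can be set from the consistency group context in the VPLEX CLI; a hedged example with hypothetical names (option forms may vary by release):
VPlexcli:/clusters/cluster-1/consistency-groups/app_cg> set-detach-rule winner --cluster cluster-1 --delay 5s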
3. Follow standard array-based procedure(s) to generate the desired array-based copies; for example, see
the TimeFinder, SnapView, or XtremIO Snapshots CLI documentation. The most up-to-date versions
of these documents are available at http://support.emc.com.
1. Confirm the device(s) you wish to resynchronize is/are not RecoverPoint protected. Issue the ‘ll’
command from the /clusters/<cluster name>/virtual-volumes/<virtual volume name> context. Check
‘recoverpoint-protection-at’ is set to [] and ‘recoverpoint-usage’ is set to ’-‘. If the virtual volume is a
member of a consistency group, the consistency group context will indicate if it is RecoverPoint
protected or not as well.
2. Confirm a VPLEX ndu is not in progress. Using the VPLEX CLI issue the ‘ndu status’ command and
confirm that the response is ‘No firmware, BIOS/POST, or SSD upgrade is in progress.’
3. Shut down any applications accessing the VPLEX virtual volumes that contain the array-based
copies to be resynchronized. This must be done at both clusters if the virtual volumes are being
accessed across multiple sites. If necessary, unmount the associated virtual volumes from the
host(s). The key here is to force the hosts to invalidate their read cache and to prevent any new
writes during the resynchronization of the array-based copy.
4. Invalidate the VPLEX read cache. There are two options to achieve invalidation depending on your VPLEX
GeoSynchrony code version:
A. For pre-5.2 code, remove the virtual volume(s) from all storage views.
1. If the virtual volume is built from a local RAID-1 device and/or is a member of a single
storage view, using the VPLEX CLI run:
/clusters/<cluster name>/exports/storage-views>
removevirtualvolume -v storage_view_name -o virtual_volume_name -f
2. If the virtual volume is built from a distributed device and is a member of storage
views in both clusters, run the same removevirtualvolume command from the
storage-views context at each cluster using the VPLEX CLI.
Make note of the virtual volume lun numbers within the VPLEX storage views prior to removing
them. You will need this information when host access is restored later in this procedure.
Or
B. For 5.2 and higher code,
1. Use virtual-volume cache-invalidate to invalidate an individual volume:
VPlexcli:/> virtual-volume cache-invalidate <virtual volume>
or
Use consistency-group cache-invalidate to invalidate an entire VPLEX consistency
group.
VPlexcli:/> consistency-group cache-invalidate <consistency group>
cache-invalidate-status
-----------------------
director-1-1-A status: in-progress
result: -
cause: -
cache-invalidate-status
-----------------------
director-1-1-A status: completed
result: successful
cause: -
Technical Note 3: The cache-invalidate command must not be executed on RecoverPoint-enabled
virtual volumes. This means using either RecoverPoint or array-based copies, but not
both, for any given virtual volume. The VPLEX clusters should not be undergoing an NDU while
this command is being executed.
5. Detach the VPLEX device mirror leg that will not be updated during the array-based replication or
resynchronization processes:
device detach-mirror -m <device_mirror_to_detach> -d <distributed_device_name> -i -f
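For example, with hypothetical names for the non-copy mirror leg and the distributed device, the detach might look like:
VPlexcli:/> device detach-mirror -m device_app_c2_leg -d dd_app_01 -i -f
The detached leg is re-attached with device attach-mirror later in this procedure, after the array-side copies have been refreshed.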
6. Perform resynchronization (update) of array-based copies using standard TimeFinder, SRDF, SnapView,
MirrorView, or XtremIO administrative procedures. See http://support.emc.com for the latest copies of EMC
product documentation.
7. Confirm the IO Status of storage volumes based on array-based copies is “alive” by doing a long listing
against the storage-volumes context for your cluster.
For example:
In addition, confirm VPLEX back-end paths are healthy by issuing the “connectivity validate-be” command
from the VPLEX CLI. Ensure that there are no errors or connectivity issues to the back-end storage devices.
Resolve any error conditions with the back-end storage before proceeding.
Example output showing desired back-end status:
8. Re-attach the previously detached mirror leg. For a distributed RAID-1:
device attach-mirror -m <2nd mirror leg to attach> -d /clusters/<cluster
name>/devices/<existing distributed RAID-1 device>
Technical Note: The device you are attaching is the non-copy mirror leg. It will be
overwritten with the data from the copy. Depending on the amount of data being
resynchronized this process can take quite some time. This should be factored into the
RTO consideration.
9. For pre-5.2 code, restore host access to the virtual volume(s) by adding each one back to its storage view(s) with addvirtualvolume, as shown earlier. The lun# is the previously recorded value from step 4 for each virtual volume.
Technical Note 1: For pre-5.2 GeoSynchrony code, EMC recommends waiting at least 30
seconds after removing access from a storage view before restoring access. Waiting ensures that
the VPLEX cache has been cleared for the volumes. The array-based resynchronization will
likely take more than 30 seconds on its own, but if you are scripting, be sure to add a pause prior to
performing this step.
Technical Note 2: Some hosts and applications are sensitive to LUN numbering changes.
Use the information you recorded in step 4 to ensure the same LUN numbering for the
host(s) when the virtual volume access is restored.
or
Use consistency-group cache-invalidate to invalidate an entire VPLEX consistency group of
source volumes.
VPlexcli:/> consistency-group cache-invalidate <consistency group>
cache-invalidate-status
-----------------------
director-1-1-A status: in-progress
result: -
cause: -
cache-invalidate-status
-----------------------
director-1-1-A status: completed
result: successful
cause: -
6. Identify the copy (BCV/Clone/Snapshot) to source device pairings within the array(s). For EMC products,
follow the TimeFinder, SnapView, SRDF, MirrorView, XtremIO Snapshot restore procedure(s) to restore
data to the desired source devices. See http://support.emc.com for the latest EMC product
documentation for TimeFinder, SnapView, SRDF, MirrorView, or XtremIO Snapshots.
7. Confirm the IO Status of the source storage volumes within VPLEX is “alive” by doing a long listing against
the storage volumes context for your cluster.
For example:
In addition, confirm VPLEX back-end paths are healthy by issuing the “connectivity validate-be”
command from the VPLEX CLI. Ensure that there are no errors or connectivity issues to the back-end
storage devices. Resolve any error conditions with the back-end storage before proceeding.
Example output showing desired back-end status:
8. For Pre 5.2 code, restore access to virtual volume(s) based on source devices for host(s): Add the virtual
volume back to the view, specifying the original LUN number (noted in step 2) using VPLEX CLI:
/clusters/<cluster name>/exports/storage-views> addvirtualvolume -v
storage_view_name/ -o (lun#, virtual_volume_name) -f
Technical Note: This same set of steps can be applied to remote array-based copy
products like SRDF or MirrorView. For example, an SRDF R2 or MirrorView Secondary Image
is essentially identical in function to a local array-based copy. The remote copy, in this case,
can be used to do a restore to a production (R1/Primary Image) volume.
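For instance, with SRDF the array-side restore in this scenario might be initiated along these lines (hypothetical device group name; an assumption to verify against the SRDF CLI documentation):
symrdf -g app_rdf_dg restore
As with the local copy procedures, host access and VPLEX cache invalidation must be handled before the array restore is started.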
Prerequisites
This section assumes users have existing distributed or local RAID-1 VPLEX virtual volumes built from the
array source devices being restored. In addition, the VPLEX virtual volumes must possess both of the
following attributes:
Note: Each of the following checks is typically scripted or built into code that orchestrates the
overall restore process on the array.
a. Confirm a VPLEX ndu is not in progress. Using the VPLEX CLI issue the ‘ndu status’ command
and confirm that the response is ‘No firmware, BIOS/POST, or SSD upgrade is in progress.’
b. Confirm the restore target device is healthy. Issue the ‘ll’ command from the
/clusters/<cluster name>/virtual-volumes/<virtual volume name> context for each volume(s)
to be copied.
i. Confirm the underlying device status is not marked ‘out of date’ or in a ‘rebuilding’
state.
ii. Confirm Health Status is ‘ok’
iii. Confirm Operational Status is ‘ok’
c. Confirm the underlying restore target device geometry is not RAID-C. Device geometry can be
determined by issuing ll at the /clusters/<cluster name>/devices/<device name> context.
d. Confirm each volume is 1:1 mapped (single extent) RAID-0 or 1:1 mapped (two extent) local
RAID-1. Distributed RAID-1 device legs must be a combination of RAID-0 (single extent)
and/or RAID-1 (two extent) device geometries.
e. Confirm the restore target device is not being protected by RecoverPoint. Issue the ‘ll’
command from the /clusters/<cluster name>/virtual-volumes/<virtual volume name> context
and check ‘recoverpoint-protection-at’ is set to [] and ‘recoverpoint-usage’ is set to ‘-‘.
f. Confirm the VPLEX volumes to be restored have the same locality (from the same VPLEX cluster) and
are in the same array.
g. Ensure virtual volumes are members of the same VPLEX consistency group and, when
possible, part of a similar array-based consistency group construct within the array. In most
cases all members of the consistency group should be copied together. Consistency group
membership can be determined by issuing ll from the /clusters/<cluster name>/consistency-
groups/<consistency group name> context. Array based consistency group membership can
be determined using the appropriate array cli and/or api.
h. For RAID-1 or distributed RAID-1 based virtual volumes, confirm underlying storage volume
status is not failed or in an error state. Issue the ‘ll’ command from the /clusters/<cluster
name>/devices context or from the /distributed-storage/distributed-devices/<distributed device
name>/components context.
Note: Best practice is to set the consistency group detach rule (winning site) to match site
where array based restore is being done.
2. Shut down any host applications (both local and remote with DR1) using the source VPLEX volume(s)
that will be restored. Unmount the associated virtual volumes on the host. The objectives here are to
prevent host access and to clear the host’s read cache.
3. Invalidate VPLEX read cache on the source virtual volume(s). There are several options to achieve VPLEX
read cache invalidation depending on your VPLEX GeoSynchrony code version:
A. For pre-5.2 code, remove the source virtual volume(s) from all storage views. Make note of the
virtual volume lun numbers within the storage view prior to removing them. You will need this
information in step 7 below.
or
B. For 5.2 and higher code,
1. Use virtual-volume cache-invalidate to invalidate the individual source volume(s):
VPlexcli:/> virtual-volume cache-invalidate <virtual volume>
or
Use consistency-group cache-invalidate to invalidate an entire VPLEX consistency group of
source volumes.
VPlexcli:/> consistency-group cache-invalidate <consistency group>
cache-invalidate-status
-----------------------
director-1-1-A status: in-progress
result: -
cause: -
cache-invalidate-status
-----------------------
director-1-1-A status: completed
result: successful
cause: -
Technical Note 3: The cache-invalidate command must not be executed on RecoverPoint-enabled
virtual volumes. This means using either RecoverPoint or array-based copies, but not
both, for a given virtual volume. The VPLEX clusters should not be undergoing an NDU while
this command is being executed.
4. Detach the VPLEX device RAID-1 or distributed RAID-1 mirror leg that will not be restored during the
array-based restore process. If the virtual volume is a member of a consistency group, in some cases
the virtual volume may no longer have storage at one site, which may cause the detach command to fail.
In this case the virtual volume will need to be removed from the consistency group *before* the mirror
leg is detached. Use the device detach-mirror command to detach the mirror leg(s):
device detach-mirror -m <device_mirror_to_detach> -d <distributed_device_name> -i -f
Note: Depending on the raid geometry for each leg of the distributed device, it may be
necessary to detach both the local mirror leg and the remote mirror leg. This is because
only 1 storage volume is being restored and there are up to 3 additional mirrored copies
maintained by VPLEX (1 local and 1 or 2 remote). For example, if the VPLEX distributed
device mirror leg being restored is, itself, a RAID-1 device then both the non-restored local
leg and the remote leg must be detached.
5. Follow the standard array-based restore procedure(s) (TimeFinder, SnapView, SRDF, MirrorView, or
XtremIO Snapshots) to restore data to the desired source devices.
6. Confirm the IO Status of storage volumes based on array-based clones is “alive” by doing a long listing
against the storage-volumes context for your cluster.
For example:
In addition, confirm VPLEX back-end paths are healthy by issuing the “connectivity validate-be” command
from the VPLEX CLI. Ensure that there are no errors or connectivity issues to the back-end storage devices.
Resolve any error conditions with the back-end storage before proceeding.
7. Re-attach the previously detached mirror leg(s). For a distributed RAID-1:
device attach-mirror -m <2nd mirror leg to attach> -d /clusters/<cluster
name>/devices/<existing distributed RAID-1 device>
Note: The device you are attaching in this step will be overwritten with the data from the
newly restored source device.
8. For pre-5.2 code, restore host access to the VPLEX volume(s).
If the virtual volume is built from a local RAID-1 device, add it back to its storage view using the VPLEX CLI:
/clusters/<cluster name>/exports/storage-views> addvirtualvolume -v
storage_view_name/ -o (lun#, virtual_volume_name) -f
The lun# is the previously recorded value from step 3 for each virtual volume. If the virtual volume is
built from a distributed device, add it back to the storage view(s) at each cluster.
Technical Note: EMC recommends waiting at least 30 seconds after removing access from
a storage view before restoring access. This is done to ensure that the VPLEX cache has been
cleared for the volumes. The array-based restore will likely take more than 30 seconds on its
own, but if you are scripting, be sure to add a pause.
Technical Note: Some hosts and applications are sensitive to LUN numbering changes.
Use the information you recorded in step 3 to ensure the same LUN numbering when you
restore the virtual volume access.
Please consult with your local EMC support representative if you are uncertain as to the applicability of
these procedures to your VPLEX environment.
Introduction
This section focuses on best practices and key considerations for leveraging VPLEX native copy technologies
within VPLEX Local or Metro environments. These copy features and capabilities are complementary to, and
not a replacement for, array-based copy technologies (e.g. TimeFinder, SnapView, SRDF, MirrorView, or
XtremIO Snapshots). The VPLEX copy capabilities highlighted in this whitepaper are best suited for use
cases that require one or more of the following:
A copy of single-volume applications running on VPLEX Local or Metro platforms
Single-volume copies between heterogeneous storage arrays
A copy of a volume from one data center to be made available in a second data center.
Multi-volume copies where the application can be temporarily quiesced prior to splitting the copy.
Crash consistent copies of sets of volumes
Consistent copies of groups of volumes obtained by quiescing applications
Independent full disk copies provide important functionality in areas like application development and
testing, data recovery, and data protection. As such, it is important to be aware of the copy features within
VPLEX and where VPLEX provides advantages over physically and/or geographically constrained array-based
copy technologies.
Creating independent virtual volume copies with VPLEX can be accomplished using both the Unisphere for
VPLEX UI and the VPLEX CLI. The high-level steps for each are the same. Figures 9a and 9b below illustrate
the steps in the process of creating a VPLEX native clone device.
Creating native copies with VPLEX consists of several steps:
1. Select the source virtual volume and its underlying device(s).
2. Select the copy (target) device with the desired characteristics.
3. Create a data mobility session consisting of the source device and the copy device. As part of this
step VPLEX creates a temporary mirror to facilitate the copy.
The next few steps allow you to specify either the source virtual volume or the corresponding source
device underlying the virtual volume you wish to copy.
Note: The VPLEX mobility facility supports up to 25 concurrent device mobility sessions.
More than 25 mobility jobs can be queued, but at any point in time only 25 will be actively
synchronizing. In addition, once synchronization has been completed for each pair of
devices, they will remain in sync until the next steps are performed by the script or by the
administrator.
14. Enter a descriptive name for the VPLEX native copy mobility job. Note, we are calling it VNC_Job_1 in
this example.
15. Click Apply. This will place the name you selected in the Device Mobility Job Name column.
16. Set the Transfer Speed. This controls the speed of synchronization and impact of the copy
synchronization process. It is recommended to start with one of the lower Transfer Speed settings and
then work your way up as you become more familiar with the impact of the synchronization process
on your application and are better able to assess the impact of this setting.
17. Click Start to begin the synchronization between the source and copy devices you specified.
After clicking OK on the subsequent confirmation pop-up, you will see your VPLEX native copy mobility
job listed in the In Progress section of the Mobility Central screen shown below. Sync progress can be
monitored here.
1. Confirm the mobility job is in the Commit Pending Column and that the Status shows ‘complete’.
2. When ready to split the copy device from the source device, click the ’Cancel’ button. Cancelling the
mobility job reverts the source virtual volume back to its original state and splits off the copy device
into a usable copy of the source device (the CLI equivalent of this split is shown after this list).
3. Confirm you are cancelling (splitting) the correct VNC mobility job.
4. Click OK to finalize and start the split process.
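For reference, the CLI equivalent of this split is to cancel the migration job, for example (assuming the job name used earlier; flag usage follows the CLI examples later in this paper):
VPlexcli:/> dm migration cancel VNC_Job_1 --force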
Note: The assumptions for this process are that the source device will have a virtual
volume on top of it and that the host using the restored source virtual volume will not be impacted
if the VPD ID of the virtual volume changes. If there is any uncertainty as to whether or not
the host OS will be impacted by VPD ID changes, this procedure should not be used until
an impact determination is made via testing and/or OS documentation review. If there is an
impact, then remediation steps to account for the VPD ID change should be added to the
overall procedure. These steps will vary by OS and host platform.
The steps to perform a restore are:
1. Identify the copy virtual volumes and underlying devices you wish to restore from.
2. Identify the source virtual volumes and underlying devices you wish to restore to.
3. Quiesce any host applications running on the source virtual volume to be restored. It is
recommended to unmount the source VPLEX virtual volume to force the host to flush its read cache.
4. Remove the source virtual volume from any storage views it is a member of. Make note of the
corresponding lun number for future reference.
5. Remove the virtual volume on top of the source volume to be restored.
6. Create a data mobility job using the copy virtual volume’s underlying VPLEX device as the source
device and the source device as the restore target.
7. Allow copy and source to synchronize and enter the commit pending state.
8. Cancel the mobility job.
9. Create a new virtual volume on top of the source device that was just restored.
10. Add the virtual volume to the original storage-view ensuring original lun # is used when necessary.
11. Scan for luns (if necessary).
12. Restart host applications.
i. Note: If hosts are sensitive to VPD ID changes then please plan accordingly.
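The option descriptions below belong to the dm migration start command used to create the mobility job from the VPLEX CLI. A minimal hedged example, assuming the job name VNC_Job_1 and placeholder device names (verify option names against the CLI guide for your GeoSynchrony release):
VPlexcli:/> dm migration start --name VNC_Job_1 --from device_app_source --to device_app_copy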
-s, --transfer-size= <arg> The maximum amount of I/O used for a rebuild when
a migration undergoes a full rebuild.
*-f, --from= <arg> The name of source extent or device for the
migration. While this element is in use for the
migration, it cannot be used for any other purpose,
including another migration.
*-t, --to= <arg> The name of target extent or device (copy) for the
migration. While this element is in use for the
migration, it cannot be used for any other purpose,
including another migration.
Note: The VPLEX mobility facility supports up to 25 concurrent device mobility sessions.
More than 25 mobility jobs can be queued, but at any point in time only 25 will be actively
synchronizing. In addition, once synchronization has been completed for each pair of
devices, they will remain in sync until the next steps are performed by the script or by the
administrator.
2. Wait for the migration to finish and enter the commit pending state.
VPlexcli:/data-migrations/device-migrations/VNC_Job_1> ll
3. When you are ready to split the copy device from the source, cancel the migration (this is the same split described in the UI procedure above):
VPlexcli:/> dm migration cancel VNC_Job_1 --force
Note: You can cancel a migration at any time unless you have committed it. If it is accidentally committed,
then to return to using the original device you will need to create another mobility job
using the old source device.
4. If necessary, remove the migration record. You may wish to preserve this record to determine the source
and target volumes at a later date.
VPlexcli:/> dm migration remove VNC_Job_1 --force
This will remove the VNC_Job_1 migration context from the appropriate /data-migrations context.
The copy device is now available in the /clusters/<cluster name>/devices/ context in the VPLEX CLI. If you
wish to make the VPLEX Native Copy device available to a host you must create a virtual volume on top of it.
Follow the normal steps to create and then provision a virtual volume using the copy device you just
created.
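For example, a hedged sketch of exposing the copy device (hypothetical device and view names; the flags and the derived virtual volume name are assumptions to verify against the CLI guide):
VPlexcli:/> virtual-volume create -r device_app_copy
VPlexcli:/clusters/cluster-1/exports/storage-views> addvirtualvolume -v test_view -o (0,device_app_copy_vol) -f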
Optional Commands:
To pause and then resume an in-progress or queued migration, use:
VPlexcli:/> dm migration pause VNC_Job_1
VPlexcli:/> dm migration resume VNC_Job_1
References
The following reference documents are available at Support.EMC.com:
White Paper: Workload Resiliency with EMC VPLEX
VPLEX 5.2 Administrators Guide
VPLEX 5.2 Configuration Guide
VPLEX Procedure Generator
EMC VPLEX HA TechBook
TechBook: EMC VPLEX Architecture and Deployment: Enabling the Journey to the Private Cloud