3PAR Remote Copy
Transport Layers
– Remote Copy over Fibre Channel (RCFC) – systems are connected over a Fibre Channel SAN.
Up to 4 RCFC links are supported per node.
Only a 1:1 relationship between RCFC links and targets is allowed
(you cannot use the same RCFC link to connect to more than one target).
– Remote Copy over Internet Protocol (RCIP) – systems are connected to each other over an IP network and use
native Remote Copy IP ports to replicate data to each other.
A 1:2 relationship between RCIP links and targets is allowed
(the same pair of RCIP links can be used to connect to two source/target arrays).
– Remote Copy over FCIP (Fibre Channel over IP) – enables Fibre Channel frames to be sent over
an IP network. Storage systems use FC ports, the same ones they use for RCFC, and the SAN they
use is extended across an IP network by using FC-to-IP routing with external FCIP gateways.
3PAR Remote Copy link performance enhancements
Dramatically increased performance with 3.3.1 for RCFC and RCIP
[Charts: response time (ms) vs. host performance (IOPS), 8 kB 100% random writes. Left: 3PAR 2-node configuration with synchronous Remote Copy FC (4 x 16Gb FC links); right: 3PAR 4-node configuration with synchronous Remote Copy IP (4 x 10Gb RCIP links). Plotted series: 3PAR OS 3.2.2, 3.3.1 with IRQ map disabled, and 3.3.1 with IRQ map (and, for RCIP, jumbo frames) enabled. IRQ = processor interrupt request. 3.3.1 with IRQ mapping enabled sustains substantially higher IOPS at a given response time than 3.2.2.]
Remote Copy Replication Types
3PAR Remote Copy Synchronous
Continuous operation and synchronization
Real-time mirror
• Highest I/O currency
• Lock-step data consistency
Space efficient
• Thin provisioning aware
Targeted use
• Campus-wide business continuity
• Guaranteed consistency
• Enabled by volume groups
Write sequence (primary P to secondary S):
1 : Host server writes I/O to primary write cache
2 : Primary array writes I/O to secondary write cache
3 : Remote array acknowledges the receipt of the I/O
4 : Host I/O acknowledged to host
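A minimal CLI sketch of this setup, assuming a Remote Copy target named 3par-remote already exists and a secondary volume vv1 is already present on it (names are illustrative; without a matching secondary volume, the -createvv option of admitrcopyvv can create one):
3par-local# creatercopygroup SyncGroup1 3par-remote:sync
3par-local# admitrcopyvv vv1 SyncGroup1 3par-remote:vv1
3par-local# startrcopygroup SyncGroup1
3par-local# showrcopy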
3PAR Remote Copy Asynchronous Periodic
Initial setup and synchronization
• Space efficient
• Thin aware
Synchronous Long Distance (SLD) 1:2 configuration
[Diagram: primary P replicates via synchronous RC to secondary S1 at metropolitan distance, and via asynchronous periodic RC to tertiary S2; a standby asynchronous periodic link runs from S1 to S2.]
• Asynchronous streaming mode targets are not supported in SLD.
• Remote Copy links for SLD can be all IP, all FC, or a mixture of each (one leg IP and the other
leg FC).
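A hedged sketch of creating an SLD 1:2 group: creatercopygroup takes one target:mode pair per leg, and the target and group names here are hypothetical:
3par-local# creatercopygroup SldGroup1 3par-metro:sync 3par-distant:periodic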
Replication SW Suite *
• Virtual Copy (VC)
• Remote Copy (RC)
• Peer Persistence (PP)
• Cluster Extension Windows (CLX)
• Policy Manager Software
Data Optimization SW Suite *
• Dynamic Optimization
• Adaptive Optimization
• Peer Motion
• Priority Optimization
Security SW Suite *
• Virtual Domains
• Virtual Lock
Recovery Manager Central Suite
• vSphere
• MS SQL
• Oracle
• SAP HANA
• 3PAR File Persona
Other licenses
• File Persona SW Suite
• Application SW Suite for MS Exchange
• Application SW Suite for MS Hyper-V
• Smart SAN for 3PAR
• Data Encryption
• Policy Server
• Remote Copy requires the use of a minimum of two 3PAR StoreServ systems
• Remote Copy does not support self-mirroring configurations. It cannot use a storage system to
replicate its own primary volumes to itself.
• Remote Copy does not support multi-hop configurations. It cannot replicate a primary volume
group to a secondary system and then replicate the volume group again from the secondary
system to a third storage system.
• The physical connections between all storage systems used with Remote Copy must be made
through an IP-capable network or an FC SAN network.
3PAR Remote Copy
General Requirements & Restrictions
• To maintain availability, you must use more than one link to connect storage systems
• Remote Copy uses all the available links that are configured for the same replication mode
(synchronous or asynchronous periodic) to transmit data in parallel
• HPE recommends synchronous mode replication, to limit the potential for data loss, whenever the
additional write latency induced by the network plus the write latency induced by the target array
does not exceed the maximum write latency tolerable by the application whose data is being
replicated
3PAR Remote Copy
Limitations 1/2
• Max size of a mirrored volume: 64 TB
• Max mirrored volumes per HPE 3PAR StoreServ Storage for synchronous RC:
800 (2 nodes), 3000 (4 nodes), 4000 (8 nodes)
• Max mirrored volumes per HPE 3PAR StoreServ Storage for async periodic RC:
2400 (2 nodes), 6000 (4 nodes and more)
• Max mirrored volumes per HPE 3PAR StoreServ Storage for async streaming RC:
512 (all node configurations)
3PAR Remote Copy
Limitations 2/2
• Max Consistency Groups
Transport Layers
Requirements & Limitations
3PAR Remote Copy
Transport Layers RCFC
• Each storage system should have a pair of HBAs installed
• The HBAs in each storage system connect those systems through an FC SAN (dual fabric)
• An HBA can be shared by the host and RCFC on the following HPE 3PAR StoreServ Storage systems:
20000, 10000, 8000, 7000
• Each pair of RCFC ports that support an RCFC link must exist in an exclusive zone (pWWN members)
• Only a one-to-one relationship between RCFC links and targets is allowed; i.e., you cannot use the
same RCFC link to connect to more than one target.
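A hedged sketch of bringing up RCFC links, assuming ports 0:3:2 and 1:3:2 are zoned to the peer array (the target name is illustrative, the WWN placeholders must be read from the remote system, and exact option spelling may vary by 3PAR OS version):
3par-local# controlport rcfc init -f 0:3:2
3par-local# controlport rcfc init -f 1:3:2
3par-local# creatercopytarget 3par-remote FC <remote_node_WWN> 0:3:2:<remote_port_WWN> 1:3:2:<remote_port_WWN>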
3PAR Remote Copy
Transport Layers RCIP 1/2
• For RCIP, each network adapter (NIC) interface must use a unique IP address
• The network adapter (NIC) interface and the management Ethernet port of the 3PAR controller node
must be on different IP subnets
• A pair of IP ports on the node pairs in an array may have an RC relationship with up to 2 other arrays
(a pair of RCIP ports on an array may send data to up to two different arrays, and may be the remote
copy target for those same two)
3PAR Remote Copy
Transport Layers RCIP 2/2
• RCIP configurations can use up to 8 links between systems: up to 8 nodes can each have one
network adapter (NIC) port contributing links to an RCIP remote copy pair.
• The network used by RCIP does not have to be dedicated to remote copy, but there should be
guaranteed network bandwidth (minimum of 500 KB/s) between any pair of arrays. Guaranteed
bandwidth on the network is especially important when replicating synchronously over RCIP
• Firewalls must allow RCIP traffic on TCP ports 5785 and 5001
• Supported array ports over IP for async streaming RC: 10GbE only (1GbE is not supported)!
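Before configuring replication, a link can be validated with checkrclink (described in the CLI overview below). A hedged sketch: the subcommand names are from memory and may vary by 3PAR OS version, and the address and port positions are placeholders:
3par-remote# checkrclink startserver 0:3:1
3par-local# checkrclink startclient 0:3:1 <remote_RCIP_address> 60
3par-local# checkrclink stopclient 0:3:1
3par-remote# checkrclink stopserver 0:3:1
The client run reports connectivity, latency, and throughput, which can be compared against the latency limits below.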
3PAR Remote Copy
Transport Layers Limitation
• Transport layer protocols cannot be mixed for a given mode of replication between a pair
of Remote Copy volume groups.
E.g., when replicating between a pair of volume groups (between a pair of 3PAR
StoreServ systems), the transport layer can be either RCIP or RCFC, but not a mix of both.
3PAR Remote Copy
Maximum latencies
Max supported latency, round-trip time (RTT):

Remote Copy Type               3PAR OS ≤ 3.2.1   3PAR OS 3.2.2                3PAR OS 3.3.1
Synchronous RCFC               2.6 ms            ≤ MU1: 5 ms / ≥ MU2: 10 ms   10 ms
Synchronous RCIP               2.6 ms            5 ms                         10 ms
Synchronous RC FCIP            NA                10 ms                        10 ms
Asynchronous Periodic RCFC     2.6 ms            5 ms                         5 ms
Asynchronous Periodic RCIP     150 ms            150 ms                       150 ms
Asynchronous Periodic RC FCIP  120 ms            120 ms                       120 ms
Asynchronous Streaming RCFC    NA                5 ms                         10 ms
Asynchronous Streaming RCIP    NA                NA                           30 ms
Asynchronous Streaming FCIP    NA                10 ms                        30 ms
• Supported on firmware 3.3.1 only (patches for 3.2.2 to enable async streaming were for proof of concept ONLY).
• Source and target arrays must contain the same number of nodes.
• Maximum latency on the network used for any link must not exceed the values specified in the
feature availability matrix for that link type (10 ms, 30 ms), measured over 24 hours with a packet
load the size of the network's MTU
• Network latency jitter (the difference between the highest and lowest network latency, measured
constantly over 24 hours with a packet load the size of the network's MTU) must not exceed
2 ms or 20% of the end-to-end average network latency
• The packet loss ratio must not exceed 0.0012% averaged over 24 hours
Transport Layers
Link Timeout & Failures
3PAR Remote Copy
Link failure
• When a link between storage systems fails, the system issues an alert on each
system as soon as it detects the link failure.
• As long as 1 remote copy link between the two systems remains active, Remote Copy
remains in normal operation and sends all data through the remaining links.
• The system might experience a slight reduction in throughput (bandwidth), but a
single link failure in a multiple-link remote copy pair does not incur errors under normal
operating conditions other than the link failure itself.
3PAR Remote Copy
Link & Target Timeout
• Asynchronous streaming RC declares an unresponsive link down after 5 s.
• Asynchronous streaming RC declares a secondary system down 5 s after all links have gone down.
3PAR Remote Copy
Target Failure
• When all links between remote copy targets in a remote copy relationship fail, neither
side can communicate. Each side reports that its remote copy target has failed.
Following a remote copy target failure, the following actions occur:
• Both systems declare the other system to be down.
• Both systems generate alerts regarding the other system's failure.
The system handles complete link failure differently depending on whether the links are
used for synchronous, asynchronous streaming, or asynchronous periodic mode volume
groups.
Remote Copy Volume Groups
3PAR Remote Copy Volume Groups
Assured data integrity
[Diagram: a single source volume is admitted against a new target; the target volume is created autonomically.]
• All writes to the secondary volume are completed in the same order as they were written on the
primary volume.
• When you create and name a volume group on the primary system, HPE 3PAR
Remote Copy automatically creates and names the associated secondary volume
group on the secondary system.
• For example, if the primary volume group is Group1 and the primary system ID is 96,
HPE 3PAR Remote Copy names the secondary volume group Group1.r96.
3PAR Remote Copy
Volumes Synchronization States...
New — Remote copy for the primary virtual volume has not started.
Syncing — Remote copy is currently synchronizing the secondary virtual volume with the primary
Synced — The primary and secondary virtual volumes are currently in sync (for asynchronous periodic mode volumes,
the output displays the last synchronization time)
NotSynced — The primary and secondary virtual volumes are not in synchronization
Stopped — The volume group is stopped. The primary and secondary virtual volumes were in synchronization the last
time the group was in the Started state, but might be out of synchronization now
Stale — The secondary virtual volume has a valid point-in-time copy of the primary volume; however, the last attempt at
synchronization failed
Failsafe — In the event of a network failure that prompts a failover, any primary volumes will be placed into a failsafe
state to prevent data corruption and inconsistency between primary and secondary volumes.
NOTE: The Failsafe state is relevant only to volumes that have matching WWNs; groups whose secondary volumes were created
using the admitrcopyvv -createvv option (which produces matching WWNs) are therefore affected.
Remote Copy Group operations
3PAR Remote Copy
RC Group States

Operation                         Group status     DC array: status / R-W   Direction   DR array: status / R-W
Failover                          stopped          Primary / R-W            -           Primary-Reverse / R-W
Recover                           started & sync   Secondary-Reverse / -    <---        Primary-Reverse / R-W
Revert Failover (Undo Failover)   started & sync   Primary / R-W            --->        Secondary / -
Remote Copy States - Normal
[Diagram: the host reads/writes the LUN on 3PAR Array A (Primary, Site A); the group is started & in sync to 3PAR Array B (Secondary, Site B).]
Remote Copy States - Failover
[Diagram: the group is stopped; the host now reads/writes the LUN on 3PAR Array B (Primary-Reverse, Site B), while 3PAR Array A (Site A) remains Primary.]
Remote Copy States - Recover
[Diagram: the group is started & in sync in the reverse direction; 3PAR Array A (Site A) becomes Secondary-Reverse, while 3PAR Array B (Site B) remains Primary-Reverse and continues to serve host reads/writes.]
Remote Copy States - Restore (back to Normal)
[Diagram: after restore, the host again reads/writes the LUN on 3PAR Array A (Primary, Site A); the group is started & in sync to 3PAR Array B (Secondary, Site B).]
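The state transitions above map onto setrcopygroup subcommands. A minimal sketch, run on the surviving array (the group name is illustrative, and the group must be stopped before the failover):
3par-remote# setrcopygroup failover Group1
3par-remote# setrcopygroup recover Group1
3par-remote# setrcopygroup restore Group1
failover makes the secondary Primary-Reverse, recover resynchronizes changed data back to the old primary, and restore returns both arrays to their natural roles.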
Remote Copy –
Track synchronization
3PAR Remote Copy tracks synchronization details
Asynchronous periodic volume groups
• The startrcopygroup command starts synchronization only the first time the command is
issued for a volume group. Subsequent instances of the command do not initiate
resynchronization of asynchronous periodic mode volume groups.
• HPE 3PAR Remote Copy does not create a new task for each resynchronization of
an asynchronous periodic volume group.
• Instead, HPE 3PAR Remote Copy keeps the initial synchronization task active for as
long as the group is in the Started state, and updates the task details when
resynchronizations occur.
Remote Copy - CLI
CLI Overview – SSH connection
stoprcopy Stop remote copy functionality for all started remote copy volume groups
creatercopytarget Specify the targets within a remote copy pair and create additional links
creatercopygroup Create a remote copy volume group (can be run by users with edit privileges)
admitrcopyvv Add an existing virtual volume to an existing remote copy volume group
setrcopygroup Set a remote copy volume group’s policies, data transfer direction, resynchronization period, and mode
setrcopytarget Set a remote copy target’s name and policies, and the target link’s throughput definition
CLI Overview – Remote Copy
RC commands descriptions
showport View remote copy port information
showrctransport View the status and information about end-to-end remote copy transport
checkrclink Perform a connectivity, latency, and throughput test between two connected systems
dismissrcopylink Remove a sending link that was created with the admitrcopylink command
dismissrcopytarget Remove a secondary system from an HPE 3PAR StoreServ Storage system
removercopygroup Delete a remote copy volume group (can be run by users with Edit privileges)
removercopytarget Remove a target definition from a remote copy system and remove all links affiliated with that target definition
CLI Overview – Remote Copy
RC commands descriptions
startrcopygroup Enable remote copy for a remote copy volume group
stoprcopygroup Stop remote copy functionality for a specific remote copy volume group
srstatrcopy View historical performance data reports for remote copy links
srstatrcvv View historical performance data reports for remote copy volumes
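The overall Remote Copy state can also be scoped with showrcopy; a brief sketch (the filter arguments are from memory and may vary by 3PAR OS version):
3par-local# showrcopy links
3par-local# showrcopy targets
3par-local# showrcopy groups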
1. Set MTU
3par-local# controlport rcip mtu 9000 0:3:1
3par-local# controlport rcip mtu 9000 1:3:1
2. Add volumes to the RC group
3par-local# admitrcopyvv -pat vv1.* Group1 3par-remote:@vvname@
3. Start the RC group
3par-local# startrcopygroup Group1
3par-local# showrcopy
4. Stop the RC group
3par-local# stoprcopygroup Group1
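The steps above assume the RCIP interfaces and the target already exist. A hedged sketch of that initial setup (addresses are placeholders; creatercopytarget takes N:S:P:address tuples):
3par-local# controlport rcip addr <IP_address> <netmask> 0:3:1
3par-local# controlport rcip addr <IP_address> <netmask> 1:3:1
3par-local# controlport rcip gw <gateway_IP> 0:3:1
3par-local# startrcopy
3par-local# creatercopytarget 3par-remote IP 0:3:1:<remote_IP_1> 1:3:1:<remote_IP_2>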
For remote copy to operate, you must name the domain correctly. When volumes are
admitted to a remote copy group for which a virtual domain has been defined, the volumes
on both sides must share the same domain name
Remote Copy - CLI
RC Groups – auto VV creation
1. Create the Remote Copy group
3par-local# creatercopygroup -domain domain1 Group1 3par-remote:async
4. Start the RC group
3par-local# startrcopygroup Group1
3par-local# showrcopy
5. Stop the RC group
3par-local# stoprcopygroup Group1
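Steps 2-3 are elided in the source; presumably they admit volumes while letting Remote Copy create the secondary VVs automatically. A hedged sketch using the -createvv option mentioned in the Failsafe note earlier (pattern and names follow the previous example; target CPG settings are assumed to be defined for the group):
3par-local# admitrcopyvv -createvv -pat vv1.* Group1 3par-remote:@vvname@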
Remote Copy - CLI
RC Groups – time interval
1. Set the resynchronization interval for an async periodic group
3par-local# setrcopygroup period <period_value>{s|m|h|d} <target_name>
<group_name>
period_value - the interval at which the group automatically resynchronizes
(minimum 30 s)
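A usage example with the syntax above (the target and group names are illustrative):
3par-local# setrcopygroup period 15m 3par-remote Group1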
Remote Copy - CLI
RC Groups – growing volumes
1. Stop the group
3par-local# stoprcopygroup <group_name>
Volumes on remote targets are grown to the intended size of the local volume. If a target cannot be
contacted or remote copy is not started, only the local volume will be grown.
2. Grow the volume
3par-local# growvv VV_name <size>[g|G|t|T]
3. Start the RC group
3par-local# startrcopygroup Group1
3par-local# showrcopy
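Changing a group's replication mode follows the same stop/modify/start pattern; a hedged sketch using the setrcopygroup mode subcommand (mode keywords sync, periodic, or async; names illustrative):
3par-local# stoprcopygroup Group1
3par-local# setrcopygroup mode periodic 3par-remote Group1
3par-local# startrcopygroup Group1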
Remote Copy - CLI
RC Group –VV admit
1. Add volumes to the RC group
3par-local# admitrcopyvv -pat vv1.* Group1 3par-remote:@vvname@
4. Start the RC group
3par-local# startrcopygroup Group1
3par-local# showrcopy
Remote Copy
Group Operations and States
3PAR Remote Copy
Target Failure for synchronous groups
• To resume replication after the target becomes reachable again, stop and restart the group:
stoprcopygroup <group_name>
startrcopygroup <group_name>
• If a resynchronization snapshot does not exist, Remote Copy performs a full synchronization.
Remote Copy – Snapshots
Snapshots in asynchronous streaming
3PAR Remote Copy snapshots
Asynchronous Streaming mode
2. After synchronization is completed, Remote Copy deletes the sync snapshots created on the primary and
secondary systems.
Periodically, resynchronization points are created on both the primary and secondary sides: HPE 3PAR
Remote Copy takes new snapshots of the primary and secondary volumes (coordinated snapshots).
3PAR Remote Copy snapshots
Asynchronous Streaming mode – resync process
During resynchronization
• The primary uses its coordinated snapshots to resynchronize with the secondary array's volumes.
• As each volume completes its resync, its volume state changes to synced.
• When the resynchronization completes on all volumes in the group, remote copy takes new
coordinated snapshots before deleting the now old coordinated snapshots on the primary and the
backup systems.
• After resynchronization, the base volume on the secondary storage system matches the new
snapshots on the primary and secondary storage systems. The new snapshot on the primary system
and secondary system can now be used in the next resynchronization operation.
3PAR Remote Copy snapshots
Asynchronous Streaming mode – resync failure
If, during resynchronization in asynchronous streaming mode, the primary system fails, remote copy
behaves in the following manner for the volumes in the remote copy target volume group:
• For all volumes in the remote copy volume group that completed resynchronizing before the failure,
HPE 3PAR Remote Copy automatically promotes the pre-synchronization snapshot for all of these
volumes.
• For all volumes in the remote copy volume group that were in the process of resynchronizing at the
time of the failure, but that did not complete resynchronizing, HPE 3PAR Remote Copy automatically
promotes the pre-synchronization snapshot for all of these volumes.
Remote Copy – Snapshots
Snapshots in asynchronous periodic
3PAR Remote Copy snapshots
Asynchronous periodic mode – initial sync and resync
• At the next scheduled resynchronization, or whenever you issue the syncrcopy command, Remote Copy
takes a new snapshot of the primary volume, compares it with the snapshot from the previous
synchronization, and copies the deltas to the secondary system.
• When the resynchronization completes, remote copy deletes the old snapshots on the primary and
the backup systems.
• After resynchronization, the base volume on the secondary storage system matches the new
snapshot on the primary storage system. The new snapshot on the primary system can now be
used in the next resynchronization operation.
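A usage sketch of triggering a manual resynchronization with syncrcopy (the -w flag, which waits for the sync to complete, is from memory and may vary by 3PAR OS version):
3par-local# syncrcopy -w Group1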
3PAR Remote Copy snapshots
Asynchronous periodic mode – resync failure
If, during resynchronization in asynchronous periodic mode, the primary system fails, remote copy
behaves in the following manner for the volumes in the remote copy target volume group:
• For all volumes in the remote copy volume group that completed resynchronizing before the failure,
HPE 3PAR Remote Copy takes no action on these volumes and retains all pre-synchronization
snapshots for these volumes.
• For all volumes in the remote copy volume group that were in the process of resynchronizing at the
time of the failure, but that did not complete resynchronizing, HPE 3PAR Remote Copy automatically
promotes the pre-synchronization snapshot for all of these volumes.
3PAR Remote Copy snapshots
Asynchronous periodic mode – resync failure
If some of the volumes in the volume group successfully synchronized before the failure and some
volumes did not finish resynchronizing before the failure, then the volumes that had not completed
resynchronization are promoted to the last recovery point, leaving the group I/O inconsistent.
To make the volumes I/O consistent again, one of two actions must be performed:
• The remote copy volume group must be restarted after the failure has been recovered from, at which
time a new resynchronization will occur, resulting in all the volumes becoming I/O consistent with one
another at the new resynchronization point in time.
• The remote copy volume group must be used for recovery, by means of a failover, at which time all of
the volumes whose snapshots were not promoted following the failure (the ones that completed
synchronization) will have their pre-synchronization snapshots promoted, and all the volumes in the
volume group will then revert to their I/O consistent pre-synchronization state.
Disaster Tolerant Solutions
- Cluster Extension
- Metrocluster
- Peer Persistence
Cluster Extension for Windows
Clustering solution protecting against server and storage failure
What does it provide?
• Manual or automated site-failover for Server and Storage
resources
• Transparent Hyper-V Live Migration between sites
Supported environments:
• Microsoft Windows Server 2003, 2008, 2012
• HPE StoreEasy (Windows Storage Server)
• Max supported distances
• Remote Copy sync supported up to 5ms RTT (~500km)
• Up to the MS Cluster heartbeat max of 20 ms RTT
• 1:1 and SLD configuration
• Sync or async Remote Copy
Requirements:
• 3PAR Disk Arrays
• 3PAR Remote Copy (RC)
• Microsoft Cluster
• HPE Cluster Extension (CLX)
• Max 20ms cluster IP network RTT
Licensing options:
• Option 1 - per Cluster Node
1 LTU per Windows Cluster Node (i.e., 4 LTUs for the example configuration pictured on the original slide)
• Option 2 - per 3PAR Array
1 LTU per 3PAR array (i.e., 2 LTUs for the same example configuration)
Also see the HPE CLX references
Serviceguard Metrocluster for HP-UX and Linux
End-to-end clustering solution to protect against server and storage failure
What does it provide?
• Manual or automated site-failover for Server and Storage
resources
Supported environments:
• HP-UX 11i v2 & v3 with Serviceguard
• RHEL 5 and 6 with HPE Serviceguard 11.20.10
• SLES 11 with HPE Serviceguard 11.20.10
• Max supported distances
• Up to Remote Copy sync max 5ms RTT (~500km)
• Up to Remote Copy async max 150ms RTT
Requirements:
• 3PAR Disk Arrays
• 3PAR Remote Copy
• HPE Serviceguard & HPE Metrocluster
Licensing for Linux:
• 1 LTU SGLX per CPU core and 1 LTU MCLX per CPU core
Licensing Options for HP-UX:
• Option 1: Per CPU socket for SGUX and MCUX
• Option 2: Per Cluster with up to 16 nodes for SGUX and MCUX
2’
QW
vSphere
3PAR Array 3PAR Array
Site A Site C Site B
• An automatic transparent
failover will occur if both the Primary Array’s connection to the Quorum Witness QW failure here will
must fail and be detected within this window for not be detected in
RC links and the QW access Failsafe on the Primary and an ATF to occur time for an ATF
vMotion / Storage vMotion YES => One cluster over two datacenters Yes => Requires SRM ≥ 6.1 AND Peer Persistence
Volume switch / failover Manually or automated with Quorum on a 3rd site Manually
HP 3PAR Peer Persistence - Arbitrator
What it is
– Quorum Witness (QW): an application running on Linux on a virtual machine
in a hypervisor (vSphere 5.x or Hyper-V 2012 R2), placed at a third site
– Comes as a package including a Linux VM and the QW software
– Minimal configuration required
– Network setup, hostname, password
– Takes less than 5 minutes
– The QW only requires IP connectivity to both 3PARs
– Install the QW in a third independent site and do not place the QW
datastore on one of the protected HP 3PAR StoreServ systems!
HP 3PAR Peer Persistence - Arbitrator
Quorum Witness Requirements*
– VMware ESXi 5.x or Windows Hyper-V 2012 R2
– 2 GB memory
– 20 GB disk space
– Network interface with access to a network with connectivity to both HP 3PAR StoreServ
Systems
– Static IP Address assignment
– Maximum Latency/RTT (Round Trip Time)
– Maximum round trip latency: 150 ms
– Connection timeout: 250 ms
– Response timeout: 3s
3PAR Peer Persistence failure handling
Remote Copy (RC) links failure
[Diagram: cluster stretched across Site A and Site B; 3PAR Array A (primary for Vol A, secondary for Vol B) and 3PAR Array B (primary for Vol B, secondary for Vol A); synchronous Remote Copy with RTT of up to 5 ms latency; QW at Site C; hosts use active paths to the primary volume and passive/standby paths to the secondary.]
• A Remote Copy links failure does not cause any automatic transparent failover if communication
to the QW remains operational.
• Replication of I/O across the RC links stops due to the failure.
• Host I/Os continue to go to their primary volumes.
• Manual switchover is not possible.
RC links and QW communication failure
[Diagram: same topology; the RC links and both arrays' communication with the QW have all failed.]
• Both arrays will be isolated, as they can neither communicate with each other over the RC links
nor with the Quorum Witness (QW).
• Remote Copy (RC) groups that are primary will go into failsafe mode and stop serving I/O,
resulting in host I/O failure.
• No failover actions will be performed; replication of I/O across the RC links will stop.
Host (server) failure
[Diagram: same topology; servers at one site have failed; QW at Site C.]
• VMware vSphere: VMware HA automatically restarts VMs on other servers in the cluster
(including on remote servers).
• Microsoft Windows Server: MS Failover Cluster automatically restarts all affected services and/or
Hyper-V VMs on other servers in the cluster (including on remote servers).
• No intervention is required on the storage systems.
Storage component failure – Storage still accessible
[Diagram: 3PAR Array A (Site A) has failed; 3PAR Array B (Site B) takes over as New Primary; QW at Site C.]
• Upon loss of an entire storage system, the volumes that were 'primary' on the failed system have
to be failed over.
• The surviving system will take over the 'primary' role based on arbitration with the QW.
• The 'standby' paths to these volumes become 'active' paths; previously 'active' paths go into
'failed' status.
• VMs on Site 2 remain online.
• VMs previously running on Site 1 can be restarted on Site 2.
APP
OS
vSphere VM
LUN A.123
LUN B.456
LUN B.456
LUN A.123
• If a SAN partition occurs
Fabric B (Host and RC IO), volumes
from both arrays remain
available to local hosts.
Fabric A Volume paths to remote site
hosts are lost. Passive
volume paths remain passive.
VMs that reside on local
storage array stay online
B Synchronous Remote Copy
B
Secondary Primary VMs that reside on remote
with RTT of up to 5ms RTT
latency storage array shutdown or
A A enter zombie state
Primary Secondary
3PAR Array A
QW
3PAR Array B • Depending on the Remote
Copy transport used
Site A Site C Site B replication might still be
running or stopped.
Network Partition Handling – MS Windows Server
[Diagram: same topology with a Hyper-V VM / clustered application; volume paths to remote-site hosts are redirected over the cluster network (redirected I/O); QW at Site C.]
• If a SAN partition occurs, volumes from both arrays remain available to local hosts. Volume paths
to remote-site hosts get redirected through the cluster network using SMB2 and stay online until
the problem has been resolved.
• Depending on the Remote Copy transport used, replication might still be running or stopped.
The End