SRDF Architecture
Module: Designing SRDF/A Solutions
SRDF/A Architecture — Legacy Mode - 1

» Uses VMAX cache to group I/Os into cycles, which are propagated to the remote VMAX in order, creating checkpoints of consistency
» No performance impact on host applications, even at long distances: no additional latency

[Slide diagram: source and target arrays; host writes create consistency checkpoints on the target]

Let us review the legacy SRDF/A architecture. The legacy SRDF/A architecture applies to VMAX family arrays running Enginuity 5876 and lower. SRDF/A delivers replication over extended distances with no performance impact. SRDF/A uses Delta Sets to maintain a group of writes over a short period of time. Delta Sets are discrete buckets of data that reside in different sections of the VMAX cache. Starting at 1, each Delta Set is assigned a numerical value that is one more than the preceding one.

There are four types of Delta Sets that manage the data flow process in the legacy SRDF/A architecture. The Capture Delta Set in the source VMAX (numbered N in this example) captures, in cache, all incoming writes to the source volumes in the SRDF/A group. The Transmit Delta Set in the source VMAX (numbered N-1 in this example) contains data from the immediately preceding Delta Set; this data is being transferred to the remote VMAX. The Receive Delta Set in the target system is in the process of receiving data from Transmit Delta Set N-1. The target VMAX also contains an older Delta Set, numbered N-2, called the Apply Delta Set. Data from the Apply Delta Set is being assigned to the appropriate cache slots, ready for de-staging to disk. The data in the Apply Delta Set is guaranteed to be consistent and restartable should there be a failure of the source VMAX.

In legacy SRDF/A mode, the VMAX performs a cycle switch once data in the N-1 set is completely received, data in the N-2 set is completely applied, and the minimum cycle time has elapsed. The default minimum cycle time is 15 seconds with Enginuity 5875 onward; prior to that it was 30 seconds. During the cycle switch, a new Delta Set (N+1) becomes the Capture set, N is promoted to the Transmit/Receive set, and N-1 becomes the Apply Delta Set.

The slide depicts writes to the Capture Delta Set as red dots. Overlapping dots indicate writes to the same locations. Upon a cycle switch, SRDF/A has to send only the final version of repeated writes. In this example, two locations are written to multiple times.

SRDF/A Architecture — Legacy Mode - 2

» Write Folding: repeat writes within a cycle are sent once
  - Leads to a reduction in link bandwidth

[Slide diagram: transmit cycle sent to the target]

This is a continuation of the previous slide. A cycle switch has occurred, indicated by the change in the cycle numbers. Capture is now numbered N+1, Transmit and Receive are numbered N, and Apply is numbered N-1. The Transmit cycle shows only four red dots: even though a total of seven writes were performed in the cycle, five of those writes were rewrites to two locations, so after the cycle switch only the final version of the data on the four unique locations has to be transmitted.

SRDF/A Architecture — Legacy Mode - 3

[Slide diagram: cycle switch; completed cycles applied on the target]

This is a continuation of the previous slide. One more cycle switch has occurred. The Apply set is numbered N, Transmit and Receive are numbered N+1, and Capture is N+2.
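The cycle-switch conditions and write folding described above can be summarized in a small model. The following is an illustrative Python sketch only, not Enginuity code; the class name LegacySrdfaSession, the attribute names, and the use of in-memory dictionaries for delta sets are assumptions made for the example.

    import time

    MIN_CYCLE_TIME = 15  # seconds (default from Enginuity 5875 onward; 30 s before that)

    class LegacySrdfaSession:
        """Illustrative model of legacy SRDF/A delta-set cycling (not actual Enginuity code)."""

        def __init__(self):
            self.cycle_number = 1          # capture set is numbered N
            self.capture = {}              # N   : track address -> latest data (write folding)
            self.transmit = {}             # N-1 : being sent to the target
            self.receive = {}              # N-1 : being received on the target
            self.apply = {}                # N-2 : consistent, restartable image being de-staged
            self.last_switch = time.monotonic()

        def host_write(self, track, data):
            # Repeat writes to the same track overwrite the earlier entry, so only the
            # final version is sent after the cycle switch (write folding).
            self.capture[track] = data

        def can_switch(self):
            return (not self.transmit                                   # N-1 fully received
                    and not self.apply                                  # N-2 fully applied
                    and time.monotonic() - self.last_switch >= MIN_CYCLE_TIME)

        def cycle_switch(self):
            # N-1 becomes the Apply set, N is promoted to the Transmit/Receive set,
            # and a new, empty Capture set N+1 is started.
            self.apply = self.receive
            self.transmit = self.capture
            self.receive = {}
            self.capture = {}
            self.cycle_number += 1
            self.last_switch = time.monotonic()

In this model, the seven writes from the slide example, hitting only four distinct tracks, leave just four entries in the capture set, which mirrors the write-folding behavior shown on the next slide.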
SRDF/A Multi-cycle Mode — VMAX3

1. Minimum cycle time has elapsed
2. Current capture set added to transmit queue
3. New capture set created

» Cycle switches are decoupled

SRDF/A Multi-Cycle Mode (MCM) allows more than two delta sets on the R1 side. If both arrays in the solution are running HYPERMAX OS, SRDF/A operates in multi-cycle mode. There can be two or more cycles on the R1 side, but only two cycles on the R2 side. Cycle switches are decoupled from committing delta sets to the next cycle. When the preset Minimum Cycle Time is reached, the R1 data collected during the capture cycle is added to the transmit queue and a new R1 capture cycle is started. There is no wait for the commit on the R2 side before starting a new capture cycle.

The transmit queue is a feature of SRDF/A MCM. It provides a location for R1 captured cycle data to be placed so that a new capture cycle can occur. The capture cycle switch occurs even if no data is transmitted across the link; in that case the capture cycle data is again added to the transmit queue. Data in the transmit queue is committed to the R2 receive cycle when the current transmit cycle and apply cycle are empty. The transmit cycle transfers the data in the oldest capture cycle to the R2 first and then repeats the process.

SRDF/A MCM is only supported if both the R1 and R2 are VMAX3 arrays. If either the R1 or the R2 array is not a VMAX3, cycling behaves as in previous versions of Enginuity. MCM supports Single Session Consistency (SSC) and Multi Session Consistency (MSC).

SRDF/A Legacy vs. MCM

» Legacy Mode
  - Cycle switching is coupled between the R1 and R2 arrays
  - Delays in cycle switching can lead to large delta sets and an unpredictable RPO
» MCM
  - Cycle switches are decoupled; a new capture set is created on the R1 side as soon as the minimum cycle time lapses
  - RPO at the R2 side is more granular and predictable
  - Adjusts to link speed decreases or R2 apply rates by queueing more cycles on the R1 side
  - Improves robustness of SRDF/A sessions

In SRDF/A legacy mode, the cycle switch is coupled between the R1 and R2 arrays. A new capture cycle cannot start until the transmit cycle completes its commit of data from the R1 side to the R2 side. Cycle switching can occur as often as the preset Minimum Cycle Time, but it can also take longer, since it depends on both the time it takes to transfer the data from the R1 transmit cycle to the R2 receive cycle and the time it takes to mark the R2 apply cycle as write pending. Delays in cycle switching can lead to large delta sets and thus a large and unpredictable RPO on the R2 side.

As discussed on the previous slide, in MCM, when the preset Minimum Cycle Time is reached, the R1 data collected during the capture cycle is added to the transmit queue and a new R1 capture cycle is started. There is no wait for the commit on the R2 side before starting a new capture cycle; cycle switches are therefore decoupled between the R1 and R2 arrays. Queuing allows smaller cycles of data to be buffered on the R1 side and smaller delta sets to be transferred to the R2 side. The SRDF/A session can adjust to accommodate changes in the solution: if the SRDF link speed decreases or the apply rate on the R2 side drops, more SRDF/A cycles can be queued on the R1 side. The R2 side will still have two delta sets, the receive and the apply. SRDF/A MCM increases the robustness of SRDF/A sessions and reduces DSE spillover.
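A corresponding sketch of multi-cycle mode shows how the cycle switch is decoupled from the R2 commit: when the minimum cycle time elapses, the capture set simply joins a transmit queue. Again, this is an illustrative Python model and not HYPERMAX OS code; McmSrdfaSession and its attributes are invented names for the example.

    import time
    from collections import deque

    MIN_CYCLE_TIME = 15  # seconds (illustrative default)

    class McmSrdfaSession:
        """Illustrative model of SRDF/A multi-cycle mode (MCM) on the R1 side."""

        def __init__(self):
            self.capture = {}                 # current R1 capture cycle
            self.transmit_queue = deque()     # older capture cycles waiting to be sent
            self.r2_receive = {}              # only two cycles ever exist on the R2 side
            self.r2_apply = {}
            self.last_switch = time.monotonic()

        def host_write(self, track, data):
            self.capture[track] = data        # write folding still applies within a cycle

        def maybe_switch_cycle(self):
            # Decoupled switch: only the minimum cycle time matters.  The capture set
            # is queued even if nothing has been transmitted across the link yet.
            if time.monotonic() - self.last_switch >= MIN_CYCLE_TIME:
                self.transmit_queue.append(self.capture)
                self.capture = {}
                self.last_switch = time.monotonic()

        def maybe_commit_to_r2(self):
            # The oldest queued capture cycle starts transferring to the R2 receive
            # cycle only once the previous cycle has been fully received and applied.
            if self.transmit_queue and not self.r2_receive and not self.r2_apply:
                self.r2_receive = self.transmit_queue.popleft()

Because nothing in maybe_switch_cycle depends on the R2 side, a slow link simply grows transmit_queue on the R1 side, which is the behavior the Legacy vs. MCM comparison above contrasts with coupled cycle switching.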
SRDF/A — Transmit Idle

» Keeps SRDF/A sessions active in the event of a temporary link loss
  - Data transmission from source to target is halted
  - Legacy Mode: cycle switching stops, the capture cycle grows
  - MCM: cycle switching continues on the R1 side, the transmit queue depth increases
» Data transmission resumes when the links are restored
» Enabled by default when dynamic SRDF groups are created (VMAX and VMAX3)
» Works with SRDF/A Delta Set Extension

During short-term network interruptions, Transmit Idle keeps the SRDF/A session active, allowing recovery without user intervention. When there is an outage on the SRDF links, the remote SRDF mirror remains Ready on the link even though no data is sent. This prevents SRDF/A from terminating abnormally. Transmit Idle is enabled by default when dynamic SRDF groups are created. When all SRDF links are lost, SRDF/A stays active. In SRDF/A legacy mode, cycle switching stops and the capture cycle continues to grow. In SRDF/A multi-cycle mode, cycle switching continues on the R1 side and multiple transmit delta sets accumulate on the source side. Transmit Idle works seamlessly with SRDF/A Delta Set Extension. With SRDF/A MCM between VMAX3 arrays, Delta Set Extension is enabled by default and uses the designated Storage Resource Pool. We will cover DSE shortly.

SRDF/A — Delta Set Extension

» Allows offloading of SRDF/A delta sets from cache to disk
» Intended to make SRDF/A resilient to temporary increases in write workloads or link loss
» VMAX arrays (Enginuity 5876 and lower)
  - Require configuration of DSE pool(s)
  - Disabled by default
» VMAX3 arrays (HYPERMAX OS 5977 and higher)
  - Use a designated, pre-configured Storage Resource Pool
  - Enabled by default
» Paging is triggered by the DSE threshold

SRDF/A Delta Set Extension (DSE) augments the cache-based Delta Set buffering mechanism of SRDF/A with a disk-based buffering ability. This extended buffering may allow SRDF/A to ride through larger and/or longer throughput imbalances than would be possible with cache-based Delta Set buffering alone. DSE works in tandem with Transmit Idle and Group-Level Write Pacing. On VMAX arrays running Enginuity 5876, DSE offloads data to a DSE pool that must be configured; DSE is disabled by default. On VMAX3 arrays, DSE offloads data into a designated Storage Resource Pool; by default, the default SRP for FBA devices is used for DSE. DSE is automatically enabled for SRDF/A between VMAX3 arrays.
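To make the threshold-triggered paging concrete, the following hedged Python sketch models when delta-set tracks would start spilling from cache to the backing pool. The DseSpiller class, the percentage check, and the dictionary standing in for a DSE pool or SRP are assumptions made for this example, not the actual Enginuity or HYPERMAX OS interfaces.

    class DseSpiller:
        """Illustrative model of DSE paging: offload delta-set tracks to a backing
        pool once the SRDF/A share of cache crosses the DSE threshold."""

        def __init__(self, cache_capacity_tracks, dse_threshold_pct, pool_store):
            self.cache_capacity = cache_capacity_tracks
            self.threshold = dse_threshold_pct / 100.0
            self.pool = pool_store            # dict standing in for a DSE pool (5876) or SRP (VMAX3)
            self.in_cache = {}                # track -> data currently buffered in cache

        def buffer_track(self, track, data):
            self.in_cache[track] = data
            self.maybe_spill()

        def maybe_spill(self):
            # Paging is triggered when SRDF/A cache usage exceeds the DSE threshold.
            while len(self.in_cache) / self.cache_capacity > self.threshold:
                track, data = self.in_cache.popitem()
                self.pool[track] = data       # offloaded track now lives on disk

On Enginuity 5876 the backing pool in this sketch would correspond to a configured DSE Save Pool of a matching emulation, and it would need to exist and be enabled on both the source and target arrays, as the next slide explains.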
SRDF/A DSE — Enginuity 5876 and lower

» Save Pools are designated as DSE pools at creation
  - Contain SAVE devices of a single emulation: CKD3390, CKD3380, FBA, or AS400
» Pools can be associated (shared) with multiple SRDF/A RDF groups
» Each SRDF/A session can be associated with zero or one DSE pool of each type (e.g. FBA, 3390)
  - At least one DSE pool with a type that matches one of the device types in the SRDF/A group must be configured in order to activate DSE
  - Pools can optionally start automatically when SRDF/A is enabled
» DSE should be configured and enabled on both the source and target arrays

On VMAX arrays running Enginuity 5876 or lower, SRDF/A DSE pools have to be configured. SRDF/A DSE pools and SAVE devices are managed in the same way as TimeFinder/Snap pools. An RDF group can have at most one pool of each emulation. A single rdfa_dse pool can be associated with more than one RDF group, similar to snap pools being shared by multiple snap sessions. The SRDF/A DSE Threshold sets the percentage of cache used by SRDF/A at which offloading from cache to disk starts. DSE must be enabled on both the source and target arrays. DSE enabled on only one side of a link would lead to failure of the SRDF/A recovery, with SRDF/A dropping, because the R2 side would not have enough cache to hold the large and extended Transmit cycle.

SRDF/A DSE — VMAX3 Arrays

» One SRP is designated for DSE allocations and supports DSE for all SRDF/A sessions in the array
  - The default SRP for DSE is the default SRP for FBA devices
» DSE is enabled by default
» SRDF/A MCM
  - Smaller cycles eliminate the need for DSE on the R2 side
  - However, DSE is enabled by default on the R2 side
» Mixed configurations (HYPERMAX OS and Enginuity 5876)
  - SRDF/A runs in legacy mode
  - DSE is disabled by default on both arrays
  - EMC recommends enabling DSE on both sides

With VMAX3, DSE pools no longer need to be configured by a user. Instead, when SRDF/A spills tracks, it uses a Storage Resource Pool (SRP) designated for use by DSE. Autostart for DSE is enabled by default on both the R1 and R2 sides. When running SRDF/A MCM, smaller cycles on the R2 side eliminate the need for DSE on the R2 side; autostart is still enabled on the R2 side in case there is a personality swap. Managing a DSE pool or associating a DSE pool with an SRDF group is no longer needed with VMAX3 arrays. If the array on one side of an SRDF device pair is running HYPERMAX OS and the other side is running Enginuity 5876 or earlier, then SRDF/A sessions run in legacy mode. In that case DSE is disabled by default on both arrays, and EMC recommends that you enable DSE on both sides.

Group-level Write Pacing — Enginuity 5876

» Monitors SRDF I/O service rates and slows down host I/O rates to match SRDF I/O rates
  - Corrective action is based on:
    - Cache usage on the R1 side
    - Transfer rate of data from the transmit delta set to the receive delta set
    - Restore rate on the R2 side (apply delta set)
» Write Pacing parameters
  - Delay
  - Threshold
» Works in conjunction with DSE and Transmit Idle

The SRDF/A Group-Level Write Pacing feature can dynamically monitor and detect when the SRDF/A I/O service rates are lower than the host write I/O rates to the R1 devices in an SRDF group. Service rates refer to the transmit cycle rate between the primary (R1) and secondary (R2) VMAX systems. This monitoring is done at the SRDF group level on the R1 side. The pacing algorithm then takes corrective action to slow down, or pace, host write I/O rates to match the slower SRDF/A I/O service rates. Only host writes are paced; system calls, reads, and other commands are not paced. SRDF/A Write Pacing helps to control the amount of cache used by SRDF/A. This can prevent cache from being exhausted on the R1 side of the SRDF link, thereby keeping the SRDF/A sessions alive. SRDF/A Write Pacing provides the user with an additional method of extending the availability of SRDF/A.
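The pacing decision described in these notes can be approximated with a small control loop. The sketch below is a hedged illustration only: the delay and threshold parameters mirror the write-pacing parameters named on the slide, while the rate inputs, the proportional formula, and the sleep-based throttle are assumptions invented for the example rather than the actual Enginuity algorithm.

    import time

    class GroupWritePacer:
        """Illustrative model of SRDF/A group-level write pacing (Enginuity 5876)."""

        def __init__(self, max_delay_ms, cache_threshold_pct):
            self.max_delay_ms = max_delay_ms            # maximum per-write delay to inject
            self.cache_threshold = cache_threshold_pct  # cache usage that activates pacing

        def pace(self, host_write_rate, srdf_service_rate, cache_used_pct):
            # Only host writes are paced; reads and system calls are never delayed.
            if cache_used_pct < self.cache_threshold or srdf_service_rate >= host_write_rate:
                return 0.0
            # Slow host writes in proportion to how far the SRDF/A service rate lags
            # behind the host write rate, capped at the configured maximum delay.
            shortfall = 1.0 - (srdf_service_rate / host_write_rate)
            delay_ms = self.max_delay_ms * shortfall
            time.sleep(delay_ms / 1000.0)
            return delay_ms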
Device-level Write Pacing — Enginuity 5876

» Enables TimeFinder/Snap of R2 devices in SRDF/A mode
» Writes to the R1 device are paced
  - If the apply rate of the R2 device is slower than the host write rate to the R1 device
» Both group-level and device-level write pacing can be active simultaneously
» Device-level write pacing is also set at the group level
  - Activated only for R2 devices involved in a TimeFinder/Snap session

When device-level write pacing has been configured and enabled, Enginuity monitors the apply rates (also known as the restore rates) to the R2 devices in an active SRDF/A session. If the R2 devices are sources of TimeFinder/Snap sessions and the apply rate for any R2 device is slower than the host write rate to the corresponding R1 device, Enginuity slows down the writes to those R1 devices. Device-level write pacing is also set on the RDF group, but only those devices that are involved in a TimeFinder/Snap session are paced. The pacing delay and the cache threshold are inherited from the values set for the RDF group.

Group-level Write Pacing — VMAX3

» VMAX3 introduces enhanced group-level write pacing
» Paces host I/Os to the DSE transfer rate for an SRDF/A session
  - Requires HYPERMAX OS on the R1 side
  - R2 can be HYPERMAX OS or Enginuity 5876
» Responds only to the spillover rate on the R1 side
  - Not affected by spillover on the R2 side
» Device-level write pacing is not supported on VMAX3 arrays

VMAX3 introduces enhanced group-level pacing. Enhanced group-level pacing paces host I/Os to the DSE spill-over rate for an SRDF/A session. When DSE is activated for an SRDF/A session, host-issued write I/Os are throttled so that their rate does not exceed the rate at which DSE can offload the SRDF/A session's cycle data. The system paces at the spillover rate until the usable configured capacity for DSE on the SRP reaches its limit; at that point, the system paces at the SRDF/A session's link transfer rate. Enhanced group-level pacing responds only to the spillover rate on the R1 side; it is not affected by spillover on the R2 side. All existing pacing features are supported and can be utilized to keep SRDF/A sessions active. Enhanced group-level pacing is supported between VMAX3 arrays and VMAX arrays running Enginuity 5876 with fix 67492.
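The two-stage behavior described above (pace host writes to the DSE spill-over rate, then to the link rate once the SRP's usable DSE capacity is exhausted) can be expressed as a simple rate selector. This is a hedged Python sketch with invented names; it is not the HYPERMAX OS implementation.

    def enhanced_pacing_rate(dse_active, dse_spill_rate, link_transfer_rate,
                             dse_used_capacity, dse_capacity_limit):
        """Return the write rate (e.g. MB/s) that host I/O should be paced to
        for an SRDF/A session on VMAX3 -- illustrative only."""
        if not dse_active:
            return None                      # no DSE-driven pacing for this session
        if dse_used_capacity < dse_capacity_limit:
            # While the SRP can still absorb spillover, pace host writes to the
            # rate at which DSE can offload the session's cycle data.
            return dse_spill_rate
        # Once the usable DSE capacity on the SRP is exhausted, fall back to
        # pacing at the SRDF/A session's link transfer rate.
        return link_transfer_rate

Note that in this model the inputs come only from the R1 side, matching the statement that enhanced group-level pacing responds to R1 spillover and is unaffected by spillover on the R2 side.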
