Dell EMC PowerMax Reliability, Availability, and Serviceability
Dell EMC Technical White Paper

Abstract
This technical white paper explains the reliability, availability, and serviceability hardware and software features of Dell EMC PowerMax storage arrays.

October 2018

Revisions
  Date            Description
  May 2018        Initial release
  October 2018    Update

The information in this publication is provided "as is." Dell Inc. makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any software described in this publication requires an applicable software license.

© 2018 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC, Dell EMC and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be trademarks of their respective owners. Published in the USA. [10/18/2018] [Technical White Paper] [H17064.2]

Dell believes the information in this document is accurate as of its publication date. The information is subject to change without notice.

Table of contents
1 Introduction
2 Dell EMC PowerMax System Family Overview
3 PowerMax engine and director components
  3.1 Channel front-end redundancy
4 PowerMax NVMe Back-end
  4.1 Smart RAID
  4.2 RAID 5
  4.3 RAID 6
  4.4 Drive sparing
  4.5 Data at Rest Encryption (D@RE)
  4.6 Drive monitoring and correction
5 InfiniBand fabric switch
6 Redundant power subsystem
  6.1 Vaulting
  6.2 Power-down operation
  6.3 Power-up operation
7 Remote Support
  7.1 Supportability through the Management Module Control Station
  7.2 Secure Service Credential (SSC), secured by RSA
8 Component-level serviceability
  8.1 Dell EMC internal QE testing
9 Non-Disruptive PowerMaxOS Upgrades
10 TimeFinder and SRDF replication software
  10.1 Local replication using TimeFinder
  10.2 Remote replication using SRDF
11 Unisphere for PowerMax System Health Check
12 Conclusion
A References

Executive summary

Today's mission-critical environments demand more than redundancy. They require non-disruptive operations, non-disruptive upgrades, and being "always online." They require high-end performance, handling all workloads, predictable or not, under all conditions. They require the added protection of increased data availability provided by local snapshot replication and continuous remote replication. Dell EMC PowerMax storage arrays deliver all of these needs.

The introduction of NVMe drives raises the performance expectations and possibilities of high-end arrays. A simple, service level-based provisioning model simplifies the way users consume storage, taking the focus away from back-end configuration steps and allowing users to concentrate on other key roles. While performance and simplification of storage consumption are critical, other features also create a powerful platform. Redundant hardware components and an intelligent software architecture deliver extreme performance while also providing high availability.
This combination provides exceptional reliability, while also leveraging components in new ways that decrease the total cost of ownership of each system. Important functionality such as local and remote replication of data, used to deliver business continuity, must cope with more data than ever before without impacting production activities. Furthermore, all of these challenges must be met while continually improving data center economics.

Reliability, availability, and serviceability (RAS) features are crucial for enterprise environments requiring always-on availability. PowerMax arrays are architected for six-nines (99.9999%) availability. The many redundant features discussed in this document are taken into account in the calculation of overall system availability. This includes redundancy in the back end, cache memory, front end, and fabric, as well as the types of RAID protection given to volumes on the back end. Calculations may also include the time to replace failed or failing FRUs (field replaceable units). In turn, this also considers customer service levels, replacement rates of the various FRUs, and hot-sparing capability in the case of drives.

Figure 1  PowerMax RAS highlights

1 Introduction

PowerMax arrays include enhancements that improve reliability, availability, and serviceability. This makes PowerMax arrays ideal choices for critical applications and 24x7 environments demanding uninterrupted access to information.

PowerMax array components have a mean time between failure (MTBF) of several hundred thousand to millions of hours for a minimal component failure rate. A redundant design allows systems to remain online and operational during component replacement. All critical components are fully redundant, including director boards, global memory, internal data paths, power supplies, battery backup, and all NVMe back-end components. Periodically, the system tests all components. PowerMaxOS reports errors and environmental conditions to the host system as well as to the Customer Support Center.

PowerMaxOS validates the integrity of data at every possible point during the lifetime of the data. From the point at which data enters an array, the data is continuously protected by error detection metadata, data redundancy, and data persistence. This protection metadata is checked by hardware and software mechanisms any time data is moved within the subsystem, allowing the array to provide true end-to-end integrity checking and protection against hardware or software faults. Data redundancy and persistence allow recovery of data where the integrity checks fail.

The protection metadata is appended to the data stream and contains information describing the expected data location as well as a CRC representation of the actual data contents. The expected values found in protection metadata are stored persistently in an area separate from the data stream. The protection metadata is used to validate the logical correctness of data being moved within the array any time the data transitions between protocol chips, internal buffers, internal data fabric endpoints, system cache, and system disks.

PowerMaxOS supports industry-standard T10 Data Integrity Field (DIF) block cyclic redundancy code (CRC) for track formats. For open systems, this enables a host-generated DIF CRC to be stored with user data and used for end-to-end data integrity validation. Additional protections for address/control fault modes provide increased levels of protection against faults. These protections are defined in user-definable blocks supported by the T10 standard. Address and write status information is stored in the extra bytes in the application tag and reference tag portion of the block CRC.
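The general idea behind this kind of per-block protection metadata can be sketched in a few lines of Python. The CRC algorithm, block size, and field layout below are illustrative assumptions for the sketch only; they are not the actual PowerMaxOS or T10 DIF on-media format.

```python
import zlib
from dataclasses import dataclass

BLOCK_SIZE = 512  # illustrative; not the actual PowerMax track or sector format

@dataclass
class ProtectionMetadata:
    expected_lba: int   # where this block is supposed to live
    crc: int            # CRC over the block contents

def protect(block: bytes, lba: int) -> ProtectionMetadata:
    """Compute protection metadata as a block enters the system."""
    return ProtectionMetadata(expected_lba=lba, crc=zlib.crc32(block))

def verify(block: bytes, lba: int, meta: ProtectionMetadata) -> None:
    """Re-check address and data correctness at each hop (fabric, cache, drive)."""
    if lba != meta.expected_lba:
        raise IOError(f"address fault: block at LBA {lba}, expected {meta.expected_lba}")
    if zlib.crc32(block) != meta.crc:
        raise IOError("data fault: CRC mismatch, recover from a redundant copy")

block = bytes(BLOCK_SIZE)
meta = protect(block, lba=1024)
verify(block, lba=1024, meta=meta)                      # clean transfer passes
try:
    verify(block[:-1] + b"\x01", lba=1024, meta=meta)   # simulated corruption in flight
except IOError as err:
    print(err)
```

The point of the sketch is simply that both the data (CRC) and its intended address are re-validated at every internal transition, and a mismatch falls back to a redundant copy rather than propagating bad data.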
The objective of this technical note is to provide an overview of the architecture of PowerMax arrays and the reliability, availability, and serviceability (RAS) features within PowerMaxOS.

2 Dell EMC PowerMax System Family Overview

The Dell EMC PowerMax 2000 and Dell EMC PowerMax 8000 are the first Dell EMC hardware platforms with a Non-Volatile Memory Express (NVMe) back end for customer data. NVMe is the protocol that runs on the PCI Express (PCIe) transport interface, used to efficiently access storage devices based on Non-Volatile Memory (NVM) media, including today's NAND-based flash along with future, higher-performing Storage Class Memory (SCM) media technologies such as 3D XPoint and Resistive RAM (ReRAM). NVMe also contains a streamlined command set used to communicate with NVM media, replacing SCSI and ATA. NVMe was specifically created to fully unlock the bandwidth, IOPS, and latency benefits that NVM media offer to host-based applications, which are currently unattainable using the SAS and SATA storage interfaces.

The NVMe back end consists of a 24-slot NVMe DAE using 2.5" form factor drives connected to the Brick via dual-ported NVMe PCIe Gen3 (8-lane) back-end I/O interface modules, delivering up to 8 GB/s of bandwidth per module. In addition to the all-NVMe storage density and scale, which provide high back-end IOPS and low latency, the Dell EMC PowerMax arrays also introduce a more powerful data reduction module capable of performing inline hardware data compression, deduplication, and adaptive tiering to lower TCO by using automatic data placement.

Highlights of the PowerMax 2000 system include:
- 1-2 engines per system
- 12-core Intel Broadwell CPUs yielding 48 cores per engine
- Up to 2 TB of DDR4 cache per engine
- Up to 64 FE ports per system
- Up to 1 PBe per system of PCIe Gen3 NVMe storage

Highlights of the PowerMax 8000 system include:
- 1-8 engines per system
- 18-core Intel Broadwell CPUs yielding 72 cores per engine
- Up to 2 TB of DDR4 cache per engine
- Up to 256 FE ports per system
- Up to 4 PBe per system of PCIe Gen3 NVMe storage

The primary benefits that the PowerMax platforms offer Dell EMC customers are:
- Massive scale with a low-latency NVMe design
- More storage IOPS density per system in a much smaller footprint
- Future-proof technology, ready for next-generation storage media such as 3D XPoint and NVMe over Fabrics (NVMe-oF) infrastructure
- Applied machine learning to lower TCO by using intelligent data placement
- Improved data efficiency and data reduction capabilities with inline deduplication and compression

3 PowerMax engine and director components

The engine is the critical building block of PowerMax systems. It primarily consists of two redundant director boards that house global memory, front-end connectivity, back-end connectivity, internal network communications, and environmental monitoring components.
Each director board has a dedicated power and cooling system. Even single-engine configurations are fully redundant. A PowerMax system may have between one and eight engines depending on model and configuration.

Table 1 lists the components within an engine, the count per director, and their purposes.

Table 1  PowerMax engine and director components

  Component                                  Count (per director)  Purpose
  Power Supply                               2                     Provide redundant power to a director
  Fan                                        5                     Provide cooling for a director
  Management Module                          1                     Manage environmental functionality
  NVMe Flash I/O Module                      Up to 4               Safely store data from cache during the vaulting sequence
  Front-end I/O Module                       Up to 4               Provide front-end connectivity to the array. Different types of front-end I/O modules allow connectivity to various interfaces, including SAN, FICON, SRDF, and embedded NAS (eNAS)
  PCIe Back-end I/O Module                   2                     Connect the director boards to the back end of the system, allowing I/O to the system's drives
  Compression and Deduplication I/O Module   1                     Perform inline data compression and deduplication
  Fabric I/O Module                          1                     Provide connectivity between directors. In multi-engine PowerMax 8000 systems, the fabric I/O modules are connected to an internal InfiniBand switch
  Memory Module                              16                    Global memory component

Figure 2 displays the front view of a PowerMax engine (Director 1, Director 2, and fans).

Figure 2  Front view of PowerMax 2000 and PowerMax 8000 engine

The following figures display rear views of engine components, with logical port numbering.

Figure 3  Rear view of PowerMax 2000 engine with logical port numbering
Figure 4  Rear view of PowerMax 8000 multi-engine with logical port numbering
Figure 5  Rear view of PowerMax 8000 single-engine with logical port numbering

Note that a single-engine PowerMax 8000 system requires four NVMe Flash I/O modules per director, compared to a multi-engine PowerMax 8000 which requires three NVMe Flash I/O modules per director. The four-module-per-director configuration remains even if additional engines are added to the system. This must be considered when ordering new systems, as the additional NVMe Flash I/O module reduces the number of external I/O modules, thus reducing the total number of external ports.

3.1 Channel front-end redundancy

Channel redundancy is provided by configuring multiple connections from the host servers (direct connect) or Fibre Channel switch (SAN connect) to the system. With SAN connectivity through Fibre Channel switches, each front-end port can support multiple host attachments, enabling storage consolidation across a large number of host platforms. The multiple connections are distributed across separate directors to ensure uninterrupted access in the event of a channel failure. A minimum of two connections per server or SAN to different directors is necessary to provide full redundancy. Host connectivity to the front-end director ports should be spread across physical components for the most efficient form of redundancy.

The following are recommended for connecting a host or cluster (see the sketch after this list):
- Configure 2-4 front-end paths in the port group for masking, and zone them to the host (single-initiator zoning is recommended).
- For cabling, one approach is to connect all even-numbered ports to fabric A and all odd-numbered ports to fabric B.
- In single-engine systems with this approach, select two I/O ports spanning both SAN fabrics on each director, with each port on a separate I/O module. Example: ports 4 and 24 on both directors 1 and 2.
- In a multi-engine system, distributing the paths further across directors spanning different engines spreads the load for performance and ensures fabric redundancy. Example: port 4 on directors 1, 2, 3, and 4.
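As a minimal illustration of the spread-the-paths guidance above, the following sketch maps ports to fabrics using the even/odd cabling convention and summarizes how a proposed port group is spread across directors and fabrics. The helper and the port numbers in the example are hypothetical; they are not a Dell EMC tool, and actual port-to-fabric cabling depends on the site design.

```python
def fabric_for_port(port: int) -> str:
    """One cabling approach from section 3.1: even ports to fabric A, odd ports to fabric B."""
    return "A" if port % 2 == 0 else "B"

def coverage(port_group: list[tuple[str, int]]) -> dict:
    """Summarize how a proposed (director, port) group spreads across directors and fabrics."""
    return {
        "directors": sorted({director for director, _ in port_group}),
        "fabrics": sorted({fabric_for_port(port) for _, port in port_group}),
    }

# Hypothetical port group: two paths per director, spanning both fabrics.
print(coverage([("1", 4), ("1", 5), ("2", 4), ("2", 5)]))
# {'directors': ['1', '2'], 'fabrics': ['A', 'B']}
```

A port group whose summary shows at least two directors and both fabrics meets the minimum redundancy intent described above.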
Figure 6  SAN connectivity in a single-engine environment

3.1.1 Global memory technology overview

Global memory is a crucial component in the architecture. All read and write operations are transferred to and from global memory. Transfers between the host processor and channel directors can be processed at much greater speeds than transfers involving physical drives. PowerMaxOS uses complex statistical prefetch algorithms which adjust to proximate conditions on the array. Intelligent algorithms adjust to the workload by constantly monitoring, evaluating, and optimizing cache decisions.

PowerMax arrays can have up to 2 TB of mirrored DDR4 memory per engine and up to 16 TB mirrored per array. Global memory within an engine is accessible by any director within the array.

Dual-write technology is maintained by the array. Front-end writes are acknowledged when the data is written to mirrored locations in the cache. In the event of a director or memory failure, the data continues to be available from the redundant copy. If an array has a single engine, physical memory mirrored pairs are internal to the engine. Physical memory is paired across engines in multi-engine PowerMax 8000 arrays.

3.1.2 Physical memory error verification and error correction

PowerMaxOS can correct single-bit errors and report an error code once the single-bit errors reach a predefined threshold. To protect against possible future multi-bit errors, if single-bit error rates exceed a predefined threshold, the physical memory module is marked for replacement. When a multi-bit error occurs, PowerMaxOS initiates director failover and calls out the appropriate memory module for replacement.

When a memory module needs to be replaced, the array notifies Dell EMC support and a replacement is ordered. The failed module is then sent back to Dell EMC for failure analysis.
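The memory error-handling policy described in section 3.1.2 can be restated as a small sketch. The threshold value, class, and function names below are illustrative assumptions for the example only; they are not PowerMaxOS internals.

```python
SINGLE_BIT_ERROR_THRESHOLD = 100  # illustrative value; the real threshold is internal to PowerMaxOS

class MemoryModule:
    def __init__(self, slot: int):
        self.slot = slot
        self.single_bit_errors = 0
        self.marked_for_replacement = False

    def on_single_bit_error(self):
        """Single-bit errors are corrected in place and counted against a threshold."""
        self.single_bit_errors += 1
        if self.single_bit_errors >= SINGLE_BIT_ERROR_THRESHOLD:
            self.marked_for_replacement = True
            call_home(f"memory module in slot {self.slot}: single-bit error threshold exceeded")

    def on_multi_bit_error(self):
        """A multi-bit error triggers director failover; data stays available from the mirror."""
        initiate_director_failover()
        self.marked_for_replacement = True
        call_home(f"memory module in slot {self.slot}: multi-bit error, module called out")

def call_home(message: str) -> None:
    print("dial-home:", message)   # stand-in for the remote-support notification

def initiate_director_failover() -> None:
    print("failing over director; mirrored cache copy remains available")

module = MemoryModule(slot=3)
module.on_multi_bit_error()        # immediate failover and call-out
```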
4 PowerMax NVMe Back-end

The PowerMax architecture incorporates an NVMe back end that reduces command latency and increases data throughput while maintaining full redundancy. NVMe is an interface that allows host software to communicate with a non-volatile memory subsystem. This interface is optimized for enterprise and client solid state drives (SSDs), typically attached as a register-level interface to the PCI Express interface.

The NVMe back-end subsystem provides redundant paths to the data stored on solid state drives. This provides seamless access to information, even in the event of a component failure and/or replacement.

Each PowerMax Drive Array Enclosure (DAE) can hold 24 2.5" NVMe SSDs. The DAE also houses redundant canister modules (Link Control Cards, LCCs) and redundant AC/DC power supplies with integrated cooling fans. Figure 7 and Figure 8 show the front and rear views of the PowerMax DAE.

Figure 7  PowerMax DAE (front)
Figure 8  PowerMax DAE (rear)

The directors are connected to each DAE through a pair of redundant back-end I/O modules. The back-end I/O modules connect to the DAEs at redundant LCCs. Each connection between a back-end I/O module and an LCC uses a completely independent cable assembly. Within the DAE, each NVMe drive has two ports, each of which connects to one of the redundant LCCs.

The dual-initiator feature ensures continuous availability of data in the unlikely event of a drive management hardware failure. Both directors within an engine connect to the same drives via redundant paths. If the sophisticated fencing mechanisms of PowerMaxOS detect a failure of the back-end director, the system can process reads and writes to the drives from the other director within the engine without interruption.

4.1 Smart RAID

Smart RAID provides active/active shared RAID support for PowerMax arrays. Smart RAID allows RAID groups to be shared between back-end directors within the same engine. Each back-end director has access to every physical drive within the DAE, but each TDAT on that physical drive will be primary to only one back-end director.

Smart RAID helps reduce cost by allowing a smaller number of RAID groups while improving performance by allowing two directors to run I/O concurrently to the same set of drives. Figure 9 illustrates Smart RAID connectivity between directors, spindles, and TDATs.

Figure 9  Smart RAID connectivity

4.2 RAID 5

RAID 5 is an industry-standard data protection mechanism with rotating parity across all members of the RAID 5 set. In the event of a physical drive failure, the missing data is rebuilt by reading the remaining drives in the RAID group and performing XOR calculations. PowerMax systems support two RAID 5 configurations:
- RAID 5 (3+1): Data striped across 4 drives (3 data, 1 parity)
- RAID 5 (7+1): Data striped across 8 drives (7 data, 1 parity)
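The XOR relationship that makes this rebuild possible can be shown in a few lines. This is a generic RAID 5 illustration over small byte strings, not PowerMaxOS code, and the stripe layout is simplified (no parity rotation).

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR across equal-length blocks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# One RAID 5 (3+1) stripe: three data members and one parity member.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Lose the second data member and rebuild it from the survivors plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
print("rebuilt member:", rebuilt)
```

Because parity is the XOR of all data members, XORing the surviving members with the parity reproduces the missing member; this is exactly the calculation performed when a single drive in a RAID 5 group fails.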
4.3 RAID 6

RAID 6 enables the rebuilding of data in the event that two drives fail within a RAID group. Dell EMC's implementation of RAID 6 calculates two types of parity. This is important when two drives within the same RAID group fail, as it still allows the data to be reconstructed in this scenario. Horizontal parity is identical to RAID 5 parity, calculated from the data across all of the disks in the RAID group. Diagonal parity is calculated on a diagonal subset of data members. For applications without demanding performance needs, RAID 6 provides the highest data availability. PowerMax systems implement RAID 6 (6+2): data striped across 8 drives (6 data, 2 parity).

4.4 Drive sparing

PowerMaxOS supports Universal Sparing to automatically protect a failing drive with a spare drive. Universal Sparing increases data availability of all volumes in use without loss of any data capacity, transparently to the host, and without user intervention.

When PowerMaxOS detects that a drive is failing, the data on the faulty drive is copied directly to a spare drive attached to the same engine. If the faulty drive has already failed, the data is rebuilt onto the spare drive through the remaining RAID members. When the faulty drive is replaced, data is copied from the spare to the new drive.

PowerMax systems have one spare drive in each engine. The spare drives reside in dedicated DAE slots. To allow all drives in the engine to share the spare drive, the spare drive is of the same type as the highest capacity and performance class of the other drives in the engine.

Solutions Enabler 9.0 provides tools to view information related to spare drives in PowerMax arrays.

The symcfg list -v output reports total values for Configured Actual Disks, Configured Spare Disks, and Available Spare Disks in the system. The Number of Configured Actual Disks field reports only non-spare configured disks, and the Number of Configured Spare Disks field reports only configured spare disks.

Figure 10  symcfg list -v

The symdisk list -dskgrp_summary -by_engine command reports spare coverage information per disk group per engine. The Total and Available spare disk counts for each disk group include both spare disks that are in the same disk group in the same engine, as well as shared spare disks in another disk group in the same engine that provide acceptable spare coverage. These shared spares are also included in the total disk count for each disk group in each engine. Therefore, the cumulative values of all disk groups in all engines in this output should not be expected to match the values reported by the symcfg list -v command described above. The Total Disk Spare Coverage percentage for a particular disk group is the spare capacity in comparison to the usable capacity shown in the output.

Figure 11  symdisk list -dskgrp_summary -by_engine

However, Spare Coverage as reported by the symdisk list -v and symdisk show commands indicates whether the disk currently has at least one available spare; that is, a spare disk that is not in a failed state or already invoked to another disk.

Figure 12  symdisk list -v
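For administrators who want to script the spare-coverage check, the two Solutions Enabler commands referenced above can be driven from Python as shown below. The array SID is hypothetical, the -sid flag placement follows standard SYMCLI usage, and the output is printed rather than parsed because its exact layout varies by Solutions Enabler version; treat this as a sketch, not a supported tool.

```python
import subprocess

SID = "1234"  # hypothetical array ID; use your array's SID

commands = (
    ["symcfg", "list", "-v", "-sid", SID],
    ["symdisk", "list", "-dskgrp_summary", "-by_engine", "-sid", SID],
)

for cmd in commands:
    # Requires Solutions Enabler 9.0 (or later) installed and connectivity to the array.
    result = subprocess.run(cmd, capture_output=True, text=True)
    print("$", " ".join(cmd))
    print(result.stdout or result.stderr)
```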
4.5 Data at Rest Encryption (D@RE)

Data at Rest Encryption (D@RE) protects data confidentiality by adding back-end encryption to the entire array. D@RE provides hardware-based, on-array, back-end encryption. Back-end encryption protects information from unauthorized access when drives are removed from the system.

D@RE provides encryption on the back end using I/O modules that incorporate XTS-AES 256-bit data-at-rest encryption. These I/O modules encrypt and decrypt data as it is being written to or read from a drive. All configured drives are encrypted, including data drives, spares, and drives with no provisioned volumes.

D@RE incorporates RSA Embedded Key Manager for key management. With D@RE, keys are self-managed, and there is no need to replicate keys across volume snapshots or remote sites. RSA Embedded Key Manager provides a separate, unique Data Encryption Key (DEK) for each drive in the array, including spare drives.

By securing data on enterprise storage, D@RE ensures that the potential exposure of sensitive data on discarded, misplaced, or stolen media is reduced or eliminated. As long as the key used to encrypt the data is secured, encrypted data cannot be read. In addition to protecting against threats related to physical removal of media, media can readily be repurposed by destroying the encryption key used to secure the data previously stored on that media.

D@RE:
- Is compatible with all PowerMaxOS features.
- Allows for encryption of any supported local drive types or volume emulations.
- Delivers powerful encryption without performance degradation or disruption to existing applications or infrastructure.

D@RE can also be deployed with external key managers using the Key Management Interoperability Protocol (KMIP), which allows for a separation of key management from PowerMax arrays. KMIP is an industry standard that defines message formats for the manipulation of cryptographic keys on a key management server. An external key manager provides support for consolidated key management and allows integration of a PowerMax array with an existing key management infrastructure.

For more information on D@RE, refer to the Dell EMC PowerMax Data at Rest Encryption White Paper.
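For readers unfamiliar with the XTS-AES mode mentioned above, the following sketch encrypts one logical block with AES-256 in XTS mode using the Python cryptography package. The key handling shown (a locally generated key and a block-number tweak) is purely illustrative; it says nothing about how RSA Embedded Key Manager or the D@RE I/O modules actually generate, store, or apply keys.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# AES-256 in XTS mode uses a 512-bit key (two 256-bit halves) and a 16-byte tweak,
# typically derived from the logical block number.
key = os.urandom(64)                      # illustrative only; real DEKs come from the key manager
block_number = 42
tweak = block_number.to_bytes(16, "little")

plaintext = b"\x00" * 512                 # one 512-byte logical block

encryptor = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
ciphertext = encryptor.update(plaintext) + encryptor.finalize()

decryptor = Cipher(algorithms.AES(key), modes.XTS(tweak)).decryptor()
assert decryptor.update(ciphertext) + decryptor.finalize() == plaintext
```

XTS ties the ciphertext to the block position via the tweak, which is why the same plaintext written to two different blocks does not produce identical ciphertext on the media.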
4.6 Drive monitoring and correction

PowerMaxOS monitors media defects by both examining the result of each data transfer and proactively scanning the entire drive during idle time. If a block is determined to be bad, the director:
- Rebuilds the data in physical memory if necessary.
- Remaps the defective block to another area on the drive set aside for this purpose.
- Rewrites the data from physical memory back to the remapped block on the drive.

The director maps around any bad blocks detected, thereby avoiding defects in the media. The director also keeps track of each bad block detected. If the number of bad blocks exceeds a predefined threshold, the primary MMCS invokes a sparing operation to replace the defective drive and then automatically alerts Customer Support to arrange for corrective action.

5 InfiniBand fabric switch

Multi-engine PowerMax 8000 systems employ two 18-port InfiniBand fabric switches to carry control, metadata, and user data through the system. This technology connects all of the engines in the system to provide a powerful form of redundancy and performance. It allows the engines to share resources and act as a single entity while communicating. For redundancy, each director has a connection to each switch. Each switch has redundant, hot-pluggable power supplies. Figure 13 and Figure 14 show the front and rear views of the InfiniBand switches.

Figure 13  Front view of InfiniBand switch
Figure 14  Rear view of InfiniBand switch

Note: Since the purpose of the dynamic virtual matrix is to create a communication interconnection between all of the engines, single-engine systems and dual-engine PowerMax 2000 systems do not require a fabric switch.

6 Redundant power subsystem

A modular power subsystem features a redundant architecture that facilitates field replacement of any of its components without any interruption in processing.

The power subsystem has two power zones for redundancy. Each power zone connects to a separate dedicated or isolated AC power line. If AC power fails on one zone, the power subsystem continues to operate through the other power zone. If any single power supply module fails, the remaining power supplies continue to share the load. PowerMaxOS senses the fault and reports it as an environmental error.

Each director is configured with a management module that provides low-level, system-wide communications and environmental control for running application software, monitoring, and diagnosing the system. The management modules are responsible for monitoring and reporting any environmental issues, such as power, cooling, or connectivity problems.

Environmental information is carried through two redundant Ethernet switches. Each management module connects to one switch, except for the MMCS modules in Engine 1, which connect to both Ethernet switches. Management module A connects to Ethernet switch A, and management module B connects to Ethernet switch B. Each management module also monitors one of the system standby power supplies (SPS) through an RS232 connection. Standard PowerMax 8000 racks have LED bars that are connected to the management modules and are used for system/bay identification during service activities. Figure 15 illustrates management module connectivity.

Figure 15  Management module connectivity

The internal Ethernet connectivity network monitors and logs environmental events across all critical components and reports any operational problems. Critical components include director boards, global memory, power supplies, power line input modules, fans, and various on/off switches. The network's environmental control capability is able to monitor each component's local voltages, ensuring optimum power delivery. Temperature of director boards and memory is also continuously monitored. Failing components can be detected and replaced before a failure occurs.

The AC power main is checked for the following:
- AC failures
- Power loss to a single power zone
- DC failures
- Current sharing between DC supplies
- DC output voltage
- Specific notification of overvoltage conditions
- Current from each DC supply
- Voltage drops across major connectors

Figure 16 illustrates the internal Ethernet connectivity.

Figure 16  Internal Ethernet connectivity

6.1 Vaulting

As cache size has grown, the time required to move all cached data to a persistent state has also increased. Vaulting is designed to limit the time needed to power off the system if it needs to switch to a battery supply. Upon complete system power loss or transitioning a system to an offline state, PowerMaxOS performs a vault of cache memory to dedicated I/O modules known as flash I/O modules. The flash I/O modules use NVMe technology to safely store data in cache during the vaulting sequence.

Lithium-ion standby power supply (Li-ion SPS) modules provide battery backup functionality during the vault operation. Two SPS modules are configured per engine. The SPS modules also provide back-up power to the InfiniBand switches in applicable configurations.

6.1.1 Vault triggers

State changes that require the system to vault are referred to as vault triggers. There are two types of vault triggers: internal availability triggers and external availability triggers.
6.1.1.1 Internal availability triggers

Internal availability triggers are initiated when global memory data becomes compromised due to component unavailability. Once these components become unavailable, the system enters the Need to Vault (NTV) state, and vaulting occurs. There are three internal triggers:

Vault flash availability - The NVMe flash I/O modules are used for storage of metadata under normal conditions, as well as for storing any data that is being saved during the vaulting process. PowerMax systems can withstand failure and replacement of flash I/O modules without impact to processing. However, if the overall available flash space in the system is reduced to the minimum required to store the needed copies of global memory, the NTV process triggers. This ensures that all of the data is saved before a potential further loss of vault flash space occurs.

Global memory (GM) availability - When both directors of any mirrored director pair are unhealthy, either logically or environmentally, NTV triggers because of GM unavailability.

Fabric availability - When both fabric switches are environmentally unhealthy, NTV triggers because of fabric unavailability.

6.1.1.2 External availability triggers

External availability triggers are initiated under circumstances when global memory data is not compromised, but it is determined that system preservation is improved by vaulting. Vaulting in this context is used as a mechanism to stop host activity, facilitate easy recovery, or proactively take action to prevent potential data loss. There are three external triggers:

Input power - If power is lost to both power zones, the system vaults.

Engine trigger - If an entire engine fails, the system vaults.

DAE trigger - If the system has lost access to a whole DAE or DAEs, including dual-initiator failure, and the loss of access causes configured RAID members to become non-accessible, the system vaults.

6.2 Power-down operation

When a system is powered down or transitioned to offline, or when environmental conditions trigger a vault situation, a vaulting procedure occurs. First, the part of global memory that is to be saved reaches a consistent image (no more writes). The directors then write the appropriate sections of global memory to the flash I/O modules, saving multiple copies of the logical data. The SPS modules maintain power to the system during the vaulting process for up to 5 minutes.

6.3 Power-up operation

During power-up, the data is written back to global memory to restore the system. When the system is powered on, the startup program does the following:
- Initializes the hardware and the environmental system.
- Restores global memory from the saved data while checking the integrity of the data. This is accomplished by taking sections from each copy of global memory that was saved during the power-down operation and combining them into a single complete copy of global memory. If there are any data integrity issues in a section of the first copy that was saved, that section is extracted from the second copy during this process.
- Performs a cleanup, a data structure integrity check, and initialization of the needed global memory data structures.

At the end of the startup program, the system resumes normal operation when the SPS modules are recharged enough to support another vault operation. If any condition is not safe, the system does not resume operation and calls Customer Support for diagnosis and repair. In this state, Dell EMC Customer Support can communicate with the system and determine the reason for not resuming normal operation.
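As a compact restatement of section 6.1.1, the sketch below evaluates the Need to Vault condition from a handful of boolean inputs. The structure and field names are illustrative only and do not reflect PowerMaxOS internals.

```python
from dataclasses import dataclass

@dataclass
class SystemState:
    vault_flash_at_minimum: bool    # remaining flash space just covers the required GM copies
    gm_mirror_pair_down: bool       # both directors of a mirrored pair unhealthy
    both_fabric_switches_down: bool
    both_power_zones_lost: bool
    engine_failed: bool
    dae_access_lost: bool           # RAID members non-accessible due to DAE loss

def need_to_vault(state: SystemState) -> bool:
    internal = (state.vault_flash_at_minimum
                or state.gm_mirror_pair_down
                or state.both_fabric_switches_down)
    external = (state.both_power_zones_lost
                or state.engine_failed
                or state.dae_access_lost)
    return internal or external

# Input power lost to both zones -> external trigger fires.
print(need_to_vault(SystemState(False, False, False, True, False, False)))   # True
```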
7 Remote Support

Remote support is an important and integral part of Dell EMC Customer Support. Every PowerMax system has two integrated Management Module Control Stations (MMCS) that continuously monitor the PowerMax environment. The MMCS modules can communicate with the Customer Support Center through a network connection to the EMC Secure Remote Support (ESRS) gateway.

Through the MMCS, the system actively monitors all I/O operations for errors and faults. By tracking these errors during normal operation, PowerMaxOS can recognize patterns of error activity and predict a potential hard failure before it occurs. This proactive error tracking capability can often prevent component failures by fencing off, or removing from service, a suspect component before a failure occurs.

To provide remote support capabilities, the system is configured to call home and alert Dell EMC Customer Support of a potential failure. An authorized Dell EMC Technical Support Engineer can run system diagnostics remotely for further troubleshooting and resolution. Configuring Dell EMC products to allow inbound connectivity also enables Dell EMC Customer Support to proactively connect to the systems to gather needed diagnostic data or to attend to identified issues. The current connect-in support program for the system uses the latest digital key exchange technology for strong authentication, layered application security, and a centralized support infrastructure that places calls through an encrypted tunnel between Customer Support and the MMCS located inside the system.

Before anyone from Customer Support can initiate a connection to a system at the customer site, that person must be individually authenticated and determined to be an appropriate member of the Customer Support team. Field-based personnel who might be known to the customer must still be properly associated with the specific customer's account.

An essential part of the design of the connectivity support program is that the connection must originate from one of several specifically designed Remote Support Networks at Dell EMC. Within each of those Support Centers, the necessary networking and security infrastructure has been built to enable both the call-home and call-device functions.

7.1 Supportability through the Management Module Control Station

Each PowerMax system has two management module control stations (MMCS) in the first engine of each system (one per director). The MMCS combines the management module and control station (service processor) hardware into a single module. It provides environmental monitoring capabilities for power, cooling, and connectivity. Each MMCS monitors one of the system standby power supplies (SPS) through an RS232 connection. Each MMCS is also connected to both internal Ethernet switches within the system as part of the internal communications and environmental control system.

The MMCS also provides remote support functionality.
Each MMCS connects to the customer's local area network (LAN) to allow monitoring of the system, as well as remote connectivity for the Dell EMC Customer Support team. Each MMCS can also be connected to an external laptop or KVM source.

The MMCS located in director 1 is known as the primary MMCS, and the MMCS located in director 2 is known as the secondary MMCS. The primary MMCS provides all control station functionality when it is operating normally, while the secondary MMCS provides a subset of this functionality. If the primary MMCS fails, the secondary MMCS is put into an elevated secondary state, which allows more functionality for the duration of this state. Both MMCS are connected to the customer network, giving the system the redundant ability to report any errors to Dell EMC Customer Support, as well as allowing Dell EMC Customer Support to connect to the system remotely.

The MMCS is used in the following support and maintenance tasks:
- PowerMaxOS upgrade procedures
- Hardware upgrade procedures
- Internal scheduler tasks that monitor the health of the system
- Error collection, logging, and reporting through the call-home feature
- Remote connectivity and troubleshooting by Dell EMC Customer Support
- Component replacement procedures

The MMCS also controls the LED bars on the front and back of each standard PowerMax 8000 rack. These can be used for system identification purposes by remote and on-site Dell EMC service personnel. Figure 17 illustrates MMCS connectivity.

Figure 17  MMCS connectivity

7.2 Secure Service Credential (SSC), secured by RSA

The Secure Service Credential technology applies exclusively to service processor activities and not to host-initiated actions on array devices. These service credentials describe who is logging in, the capabilities they have, the time frame for which the credential is valid, and the auditing of actions the service personnel performed, which can be found in the symaudit logs. If these credentials are not validated, the user cannot log in to the MMCS or other internal functions. SSC covers both on-site and remote login.

Some of the security features are transparent to the customer, such as service access authentication and authorization by Dell EMC Customer Support and SSC (user ID information) restricted access to MMCS and Dell EMC Customer Support internal functions. Access is definable at a user level, not just at a host level. All user ID information is encrypted for secure storage within the array. MMCS-based functions honor Solutions Enabler Access Control settings per authenticated user in order to limit view/control of non-owned devices in shared environments such as SRDF-connected systems.

8 Component-level serviceability

PowerMax systems provide full component-level redundancy to protect against a component failure and ensure continuous and uninterrupted access to information. This non-disruptive replacement capability allows the Customer Support Engineer to install a new component, initialize it if necessary, and bring it online without stopping system operation, taking unaffected channel paths offline, or powering the unit down. A modular design with a low parts count improves serviceability by allowing non-disruptive component replacement should a failure occur.
The low parts count also minimizes the number of failure points. PowerMax systems feature non-disruptive replacement of all major components, including:

- Engine components:
  - Director boards
  - I/O modules:
    - Fibre Channel (front-end)
    - Embedded NAS (eNAS)
    - PCIe (back-end)
    - Flash (vault)
    - SRDF
    - Compression
    - Inline Compression/Deduplication
    - Fabric
  - Management modules / management module control stations
  - Power supplies
  - Fans
- Drive Array Enclosure (DAE) components:
  - NVMe drives
  - Link Control Cards (LCC)
  - Power supplies
  - PCIe cables
- Cabinet components:
  - InfiniBand switches
  - Ethernet switches
  - Standby Power Supplies (SPS)
  - Power Distribution Units (PDU)

8.1 Dell EMC internal QE testing

Dell EMC's Quality Engineering (QE) teams perform thorough testing of all FRUs. Each FRU is tested multiple times for each code level with very specific pass/fail criteria.

Standard tests perform verification of the GUI-based scripted replacement procedures that are used by Dell EMC field personnel. The tests are designed to verify the replaceability of each FRU without any adverse effects on the rest of the system, and to verify the functionality and ease of use of the scripted procedures. These tests are straightforward replacement procedures performed on operational components.

Non-standard tests are also performed on components that have failed either by error injection or by hot removal of the component or its power source. These tests also incorporate negative testing by intentionally causing different failure scenarios during the replacement procedure. Note that hot-removing a drive will not cause sparing to be invoked. This behavior is intentional, as the system knows the device has not gone bad; the correct course of action is to recover the drive rather than go through needless sparing and full rebuild processes. Negative tests are designed to make sure that the replacement procedure properly detects the error and that the rest of the system is not affected.

Some examples of negative tests are:
- Replacing the wrong component
- Replacing a component with an incompatible component
- Replacing a component with a faulty component
- Replacing a component with a new component that has lower code that needs to be upgraded
- Replacing a component with a new component that has higher code that needs to be downgraded
- Replacing a component with the same component, and making sure the script detects and alerts the user that the same component is being used
- Improperly replacing a component (miscabled, unseated, and so on)
- Initiating a system vault save (system power loss) operation during a replacement procedure

Both the standard and non-standard tests are performed on all system models and various configurations with customer-like workloads running on the array. Tests are also performed repeatedly to verify that there are no residual issues left unresolved that could affect subsequent replacements of the same or different components. Components that are known to fail more frequently in the field (drives, for example), as well as complex component replacements, are typically tested more frequently.

9 Non-Disruptive PowerMaxOS Upgrades

Interim updates of PowerMaxOS can be performed remotely by the Remote Change Management (RCM) group.
These updates provide enhancements to performance algorithms, error recovery and reporting techniques, diagnostics, and PowerMaxOS fixes. They also provide new features and functionality for PowerMaxOS.

During an online PowerMaxOS code load, a member of the RCM team downloads the new PowerMaxOS code to the MMCS. The new PowerMaxOS code is loaded into the EEPROM areas within the directors and remains idle until requested for a hot load into the control store. The system loads executable PowerMaxOS code within each director hardware resource until all directors are loaded. Once the executable PowerMaxOS code is loaded, internal processing is synchronized and the new code becomes operational. The system does not require customer action during the performance of this function. All directors remain online to the host processor, thus maintaining application access.

10 TimeFinder and SRDF replication software

10.1 Local replication using TimeFinder

TimeFinder software delivers point-in-time copies of volumes that can be used for backups, decision support, data warehouse refreshes, or any other process that requires parallel access to production data. TimeFinder SnapVX is highly scalable, highly efficient, and easy to use.

SnapVX provides very low impact snapshots and clones for data volumes. SnapVX supports up to 256 snapshots per source volume, which are tracked as versions with less overhead and simple relationship tracking. Users can assign names to their snapshots and have the option of setting automatic expiration dates on each snapshot.

SnapVX provides the ability to manage consistent point-in-time copies for storage groups with a single operation. Up to 1024 target volumes can be linked per source volume, providing read/write access as pointer-based or full copies. Users can also create secure snapshots that prevent a snapshot from being terminated until a specified retention time has been reached.

For more information on TimeFinder SnapVX, refer to the Dell EMC PowerMaxOS TimeFinder Local Replication Technical Notes.

10.2 Remote replication using SRDF

Symmetrix Remote Data Facility (SRDF) solutions provide industry-leading disaster recovery and data mobility solutions. SRDF replicates data between 2, 3, or 4 arrays located in the same room, on the same campus, or thousands of kilometers apart.

- SRDF synchronous (SRDF/S)
  - Maintains a real-time copy at arrays located within 200 kilometers.
  - Writes from the production host are acknowledged from the local array when they are written to cache at the remote array.
- SRDF asynchronous (SRDF/A)
  - Maintains a dependent-write, consistent copy at arrays located at unlimited distances.
  - Writes from the production host are acknowledged immediately by the local array, so replication has no impact on host performance.
  - Data at the remote array is typically only seconds behind the primary site.

SRDF disaster recovery solutions use active remote mirroring and dependent-write logic to create consistent copies of data. Dependent-write consistency ensures transactional consistency when the applications are restarted at the remote location.
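The practical difference between the two modes is where the host write acknowledgment happens. The sketch below models only that difference; it is a conceptual illustration, not SRDF code, the function names are invented for the example, and the simple list is only a stand-in for SRDF/A's internal dependent-write-consistent cycles.

```python
import time

def remote_transfer(data: bytes) -> None:
    """Stand-in for shipping a write to the remote array over the SRDF link."""
    time.sleep(0.01)   # pretend link round-trip

def write_srdf_s(data: bytes) -> None:
    """Synchronous mode: the host is acknowledged only after the remote array
    has the write in cache, so link latency is added to every host write."""
    # write to local cache ...
    remote_transfer(data)
    # ... then acknowledge the host

def write_srdf_a(data: bytes, cycle_buffer: list) -> None:
    """Asynchronous mode: the host is acknowledged immediately; writes are
    collected locally and shipped to the remote array in later cycles."""
    # write to local cache and acknowledge the host ...
    cycle_buffer.append(data)   # drained toward the remote array later

cycle = []
write_srdf_s(b"txn-1")           # host latency includes the remote round-trip
write_srdf_a(b"txn-2", cycle)    # host latency is local only; data follows seconds later
```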
SRDF can be tailored to meet various Recovery Point Objective and Recovery Time Objective requirements. SRDF can be used to create complete solutions to:
- Create real-time (SRDF/S) or dependent-write-consistent (SRDF/A) copies at 1, 2, or 3 remote arrays.
- Move data quickly over extended distances.
- Provide 3-site disaster recovery with:
  - Business continuity
  - Zero data loss
  - Disaster restart

SRDF integrates with other Dell EMC products to create complete solutions to:
- Restart operations after a disaster with:
  - Business continuity
  - Zero data loss
- Restart operations in clustered environments, for example Microsoft Cluster Server with Microsoft Failover Clusters.
- Monitor and automate restart operations on an alternate local or remote server.
- Automate restart operations in VMware environments.

10.2.1 Cascaded SRDF and SRDF/Star support

Cascaded SRDF configurations use 3-site remote replication with SRDF/A mirroring between sites B and C, delivering additional disaster restart flexibility. Figure 18 shows an example of a Cascaded SRDF solution.

Figure 18  Cascaded SRDF

SRDF/Star is commonly used to deliver the highest resiliency in disaster recovery. SRDF/Star is configured with three sites, enabling resumption of SRDF/A with no data loss between the two remaining sites, providing continuous remote data mirroring and preserving disaster-restart capabilities. Figure 19 shows examples of Cascaded and Concurrent SRDF/Star solutions.

Figure 19  SRDF/Star

10.2.2 SRDF/Metro support

SRDF/Metro significantly changes the traditional behavior of SRDF synchronous mode with respect to remote (R2) device availability to better support host applications in high-availability environments. With SRDF/Metro, the SRDF R2 device is read/write accessible to the host and takes on the federated personality (such as geometry and device WWN) of the primary R1 device. By providing this federated personality on the R2 device, both R1 and R2 devices then appear as a single virtual device to the host. With both the R1 and R2 devices being accessible, the host or hosts (in the case of a cluster) can read and write to both R1 and R2 devices, with SRDF/Metro ensuring that each copy remains current and consistent, and addressing any write conflicts that may occur between the paired SRDF devices. Figure 20 shows examples of SRDF/Metro solutions.

Figure 20  SRDF/Metro

On the left of Figure 20 is an SRDF/Metro configuration with a standalone host that has read/write access to both arrays (R1 and R2 devices) using multi-pathing software such as PowerPath. This is enabled by federating the personality of the R1 device to ensure that the paired R2 device appears, through additional paths to the host, as a single virtualized device.

On the right is a clustered host environment where each cluster node has dedicated access to an individual array. In either case, writes to the R1 or R2 devices are synchronously copied to the SRDF paired device. Should a conflict occur between writes to paired SRDF/Metro devices, the conflicts are internally resolved to ensure that a consistent image between paired SRDF devices is maintained for the individual host or host cluster.

SRDF/Metro may be selected and managed through Solutions Enabler, Unisphere for PowerMax, and the REST API. SRDF/Metro requires a separate license on both arrays to be managed.

For more information on SRDF, refer to the Dell EMC PowerMax Family Product Guide and the Introduction to SRDF/Metro White Paper.

11 Unisphere for PowerMax System Health Check

Unisphere for PowerMax has a system health check procedure that interrogates the health of the array hardware. The procedure checks various aspects of the system and reports the results as either pass or fail. The results are reported at a high level, with the intent of either telling the user that there are no hardware issues present, or that issues were found and the user should contact Dell EMC Customer Support for further investigation.

The health check procedure is accessed from the System Health Dashboard, as Figure 21 shows.

Figure 21  Unisphere System Health Dashboard

The test takes several minutes to complete. When complete, clicking the Run Health Check link displays the test results in the format shown in Figure 22. The individual tests include the Vault State Test, Spare Drives Test, Memory Test, Locks Test, Emulations Test, Environmentals Test, Battery Test, General Test, and Compression And Dedup Test.

Figure 22  Health Check Results
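Where the same check needs to be scripted rather than run from the dashboard, Unisphere for PowerMax also exposes a REST API. The sketch below polls a system-level resource for one array and prints the response. The host name, array ID, credentials, and the endpoint path (including the version segment) are illustrative assumptions; consult the Unisphere for PowerMax REST API documentation for the exact resources available in your Unisphere version.

```python
import requests

UNISPHERE = "https://unisphere.example.com:8443"   # hypothetical Unisphere host
ARRAY_ID = "000197600000"                          # hypothetical 12-digit array ID
AUTH = ("smc", "smc")                              # replace with real credentials

# Illustrative endpoint; the actual resource path depends on the REST API version.
url = f"{UNISPHERE}/univmax/restapi/90/system/symmetrix/{ARRAY_ID}/health"

response = requests.get(url, auth=AUTH, verify=False)   # verify=False only for lab self-signed certs
response.raise_for_status()
print(response.json())
```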
12 Conclusion

PowerMax family platforms integrate a highly redundant architecture, creating a remarkably reliable environment in a configuration that minimizes the carbon footprint in the data center and reduces total cost of ownership. The introduction of PowerMaxOS enhances the customer's experience through new technologies such as service level-based provisioning, making storage management easier while also increasing the availability of data through improvements to vaulting, disk sparing, and RAID. The local and remote replication suites bring the system to an elevated level of availability through TimeFinder SnapVX and SRDF, respectively. The serviceability features make the component replacement process quick and easy.

The key enhancements that improve the reliability, availability, and serviceability of the systems make PowerMax the ideal choice for critical applications and 24x7 environments that require uninterrupted access to information.

A References

Reference information and product documentation can be found at dellemc.com and support.emc.com, including:
- Dell EMC PowerMax Family Product Guide
- Dell EMC PowerMaxOS Local Replication Technical Note
- Dell EMC PowerMax SRDF/Metro Overview and Best Practices Technical Note