Dell 2000 Storage


Capacity Optimization Techniques:

 Thin provisioning is the process of allocating storage capacity on demand rather than
reserving large amounts of space up front, in case you might need it.

 Compression is re-encoding of data to reduce its size.

 Deduplication is the replacement of multiple copies of identical data with references to


a single shared copy, all with the end goal of saving space.

 Tiering is the process of placing data on the appropriate storage tier based on its usage
profile. Data that is frequently accessed—hot data—resides on fast drives, and data that
is rarely accessed—cold data—resides on slower drives.
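
As a rough illustration of the space savings behind deduplication, the following Python sketch (hypothetical, not Dell EMC code) replaces identical data blocks with references to a single stored copy:

```python
import hashlib

def deduplicate(blocks):
    """Replace duplicate blocks with references to a single stored copy.

    `blocks` is a list of bytes objects (raw data blocks). Returns a block
    store keyed by content hash and a list of references (hashes) that
    reconstruct the original sequence.
    """
    store = {}        # content hash -> single stored copy
    references = []   # one hash per original block
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:
            store[digest] = block      # first copy is kept
        references.append(digest)      # duplicates become references
    return store, references

# Example: four 4 KB blocks, two of which are identical.
blocks = [b"A" * 4096, b"B" * 4096, b"A" * 4096, b"C" * 4096]
store, refs = deduplicate(blocks)
print(len(blocks), "blocks written,", len(store), "blocks stored")  # 4 written, 3 stored
```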

Storage Performance

Data storage performance mainly depends on the following three factors:

 Latency: The time between issuing a request and receiving the response. It is measured in milliseconds (ms). Lower latency = higher performance.
 IOPS: Input/output operations per second. Higher IOPS = higher performance.
 Throughput: The data transfer rate per second, expressed in Megabytes/second (MB/s) or Gigabytes/second (GB/s). Higher throughput = higher performance.

IOPS influences throughput.


Any application or service that needs disk access asks to read or write a certain amount of data in each operation. The size of this data is called the I/O size. If the I/O size is small, higher numbers of IOPS and higher throughput (MB/s or GB/s) are possible.

Average I/O size x IOPS = Throughput (in MB/s or GB/s, depending on the units of the I/O size)
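
For example, the relationship can be checked with a quick calculation (illustrative numbers only):

```python
# Illustrative numbers only: a workload issuing 8,000 IOPS with an
# average I/O size of 64 KB.
avg_io_size_kb = 64
iops = 8000

throughput_mb_s = avg_io_size_kb * iops / 1024    # KB/s -> MB/s
print(f"Throughput: {throughput_mb_s:.0f} MB/s")  # 500 MB/s
```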


Availability

When a storage component fails, users must still be able to access business-critical data. It is
essential to have highly available, resilient storage solutions.

Data can be made available through backup, replication, and archiving.

 Backups are copies of production data that are created and retained for the sole
purpose of recovering any lost or corrupted data.
 Replication is the process of creating an exact copy of the data to ensure
business continuity in the event of local outage or disaster.
 Data archiving is the process of moving data that is no longer active to a
separate storage device for long-term retention.
Hard Disk Drive (HDD)

A hard disk drive is a persistent storage device that stores and retrieves data using
rapidly rotating disks that are coated with magnetic material.

 The key components of an HDD are:


o Platter: A typical HDD consists of one or more flat circular disks—called
platters—that are coated with magnetic material. The data is stored on the magnetic
surface.
o Spindle: A spindle connects all the platters and is connected to a motor. The motor rotates the spindle at a constant speed. Common spindle speeds are 5,400 RPM, 7,200 RPM, 10,000 RPM, and 15,000 RPM.
o Read/write head: Read/write (R/W) heads read and write data from or to
the platters. Drives have two R/W heads per platter, one for each surface.
o Actuator arm assembly: R/W heads are mounted on the actuator arm
assembly. It positions the R/W head at the location on the platter where the data is
written or read.
o Drive controller board: The controller is a printed circuit board, mounted at the bottom of the disk drive, that controls the R/W operations.
Solid-State Drive (SSD)
 Solid state drives are storage devices that contain non-volatile flash memory.
They are superior to mechanical hard disk drives in terms of performance, power use,
and availability.
 SSDs consist of the following components:
o I/O interface connects the power and data to the SSDs.
o Controller includes a drive controller, RAM, and non-volatile memory
(NVRAM).
 The drive controller manages all the drive functions like availability,
diagnostics, and protection.
 The RAM is used as a cache for data being read from and written to the SSD, and for the SSD's operational programs and data.
 The NVRAM is used to store SSD internal software and data.
o Storage is an array of non-volatile memory chips that retain their contents when powered off. These chips are commonly called flash memory. The number and capacity of the individual chips vary in direct relationship to the SSD capacity.
 SSDs consume less power than HDDs. Since SSDs do not have moving parts,
they generate less heat. This reduces the need for cooling in storage enclosures and
overall system power consumption.
Hybrid Drive (SSHD)

 A hybrid drive (SSHD) combines both flash storage and spinning disks into a
single drive to offer both capacity and speed.
 New data is typically written to spinning disk first. After it has been accessed
frequently, it is transferred up to solid-state memory.
 Hybrid drives are not frequently used in enterprise-class servers and storage
arrays, and should not be confused with 'hybrid arrays', which are storage enclosures
that contain both individual HDDs and SSDs.
Tape Drive

 A tape drive reads and writes data onto cassettes or cartridges containing spools
of plastic film coated with a magnetic material. Tape drives provide linear sequential
read/write data access.
 A tape drive may be standalone or part of a larger tape library.
 Tape is a popular medium for long-term storage due to its relatively low cost and
portability. Organizations typically use tape drives to store large amounts of data for
backup, offsite archiving, and disaster recovery.
 Some of the key limitations of tape include:
 Low access speed due to the sequential access mechanism.
 Lack of simultaneous access by multiple applications.
 Degradation of the tape surface due to the continuous contact with the
read/write head.

 Storage Drive Interfaces


 Drives are available in several interfaces. The interface that is selected is based on the
overall configuration of the storage network.

Serial Attached SCSI (SAS)

 SAS is a serial point-to-point protocol that uses the SCSI command set and the
SCSI advanced queuing mechanism.
 SCSI-based drives, such as SAS and FC, are the best choice for high-
performance, mission-critical workloads.
 SAS drives are dual ported. Each port on the SAS drive can be attached to
different controllers. This means that if a single port, connection to the port, or even the
controller fails, the drive is still accessible over the surviving port.
Serial ATA (SATA)

 Serial Advanced Technology Attachment (SATA) is used in desktop PCs, laptops, and in the enterprise tech market.
 In the enterprise, SATA drives are synonymous with economical, low-performance, high-capacity storage.
 Examples include:
 backup and archiving appliances
 auto-tiering solutions in storage arrays
Near Line SAS (NL-SAS)

 Nearline SAS (NL-SAS) drives are a hybrid of SAS and SATA drives. They have
a SAS interface and speak the SAS protocol, but also have the platters and RPM of a
SATA drive.
 NL-SAS drives can be connected to SAS backplanes with the added benefits of
the SCSI command set and advanced queuing, while at the same time offering the large
capacities common to SATA.
 NL-SAS has replaced SATA in the enterprise storage world, with all the major
array vendors supporting NL-SAS in their arrays.

Just a Bunch of Disks (JBOD)

Just a bunch of disks (JBOD) is a collection of hard disks that have not been
configured to act as a Redundant Array of Independent Disks (RAID).

JBOD arrays do not have any formal structure or architecture.

The disks within the JBOD can be of any size, and the drives can be pooled together
using disk volume applications. Failure of a single drive, however, can result in the
failure of the whole volume.

Direct Attached Storage (DAS)


When a storage device, such as an external hard drive, is connected directly to the
compute system, it is referred to as Direct Attached Storage (DAS).

DAS is connected directly to a computer through a host bus adapter (HBA), and no
network devices exist between them.

Only the host that the DAS solution is connected to can share the storage resource.

PowerVault MD Storage

An example of DAS storage is the Dell EMC PowerVault MD series, a highly reliable
and expandable storage solution.
PowerVault MD DAS common features:

 Enclosure capacities from 12 to 84 drives


 Ability to expand capacity by daisy chaining enclosures
 All types of drives are supported (SSD, SAS, SATA)

 Tape Drive Solutions


 Backups provide an extra copy of production data, created and retained for
the sole purpose of recovering lost or corrupted data.
 Proper backup execution and archiving helps data restoration in the event of a failure.
 Tape Drives are a cost-effective solution for backing up data and storing it at offsite
locations.
 The Dell EMC LTO-8 (Linear Tape-Open, generation 8) tape drive is a high-performance, high-capacity data-storage device. It is used to back up and restore data and to archive and retrieve files in an Open Systems environment.

 Network Attached Storage (NAS)



 (Figure: A typical NAS environment)
 Network Attached Storage (NAS) servers attach directly to a LAN and enable data
sharing among network clients.

You can manage NAS servers from anywhere on the network with a standard web
browser or other management tools. NAS appliance interfaces are simple, with no need
to learn a complex operating system.
Dell EMC NAS Storage Solutions
NX Storage Series
The NX storage series is an entry-level NAS solution.
The NX series is built on PowerVault server hardware and powered by MS Windows Storage
Server (WSS). Systems using WSS are typically referred to as Network Attached
Storage (NAS) appliances.
FS Storage Series
Dell Storage FS Series with Fluid File System (FluidFS) software provides NAS for the
SC Series and PS Series platforms. FS offers best-in-class performance with linear
performance scaling and low cost per file IOPS.
The FS8600 appliance can be configured as either Fibre Channel or iSCSI.

Storage Area Network (SAN)


A Storage Area Network (SAN) is a network that primarily connects the storage systems
with the compute systems and also connects the storage systems with each other.

 It enables multiple compute systems to access and share storage resources.


 It also enables data transfer between the storage systems. With long-distance
SAN, the data transfer over SAN can be extended across geographic locations.
Dell EMC SAN Storage Solutions
PS Series
Dell EMC PS Series arrays deliver the benefits of consolidated networked storage in a
self-managing, iSCSI SAN. With its unique peer storage architecture, the PS Series
delivers high performance and availability in a flexible environment with low cost of
ownership.
The PS Series is an excellent solution for small to medium businesses looking to
migrate from DAS or NAS, streamline data protection or expand capacity.
SC Series
All Dell EMC SC Series arrays provide mid-tier enterprise customers with a unified platform
that is built for performance, adaptability, and machine-driven efficiency. SC Series
software delivers modern features that meet aggressive workload demands using the
fewest drives necessary.
With an open design, SC Series storage integrates seamlessly with applications and
infrastructure, enabling customers to scale capacity and performance on a single
platform.
Storage Arrays and Controllers
The Storage Array is the central hardware component of a SAN. The array contains the
controller modules that connect the storage devices to the network switches and
manage all traffic to and from the storage devices. Some arrays also contain internal
storage capacity and function as an all-in-one storage solution.

As the name suggests, the controllers within the array control the SAN. Every read or
write command goes through the controller.
The controller:

 Provides management capability to end users


 Physically connects to the enclosures containing the storage drives (Back End)
 Physically connects to the network switches (Front End)
 Physically connects to a NAS
 Processes all SAN functionality including core and application software and RAID
Most array systems have dual controllers—two controllers connected together—for
redundancy.
Dual Controller Architecture
Many of the SC series controllers are based on a dual controller architecture. Single
controller environments could result in system downtime. With dual controllers, if one
controller fails, the other keeps the system up and running while the faulty controller is
repaired or replaced, typically without downtime.

In a dual controller architecture, fault domains are divided by fabrics. Connections to
fabric 1 are in Fault Domain 1, and connections to fabric 2 are in Fault Domain 2.
Enclosures
An Enclosure is connected to the storage array on the back end and provides the
storage capacity of the SAN. It houses and powers the disk drives that store business-
critical data.

Most enclosures can house SSDs, HDDs, or both in a hybrid configuration depending
on storage and performance requirements. Drives can be added or replaced without
taking the enclosure offline.

Enclosures can be added to the SAN as more capacity is required.


Each drive is labeled XX-YY: 

 XX is the enclosure ID (automatically generated at install).


 YY is the disk position.
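
A quick way to see how the label maps to a physical location (a hypothetical helper, not a Dell EMC tool):

```python
def parse_drive_label(label: str) -> tuple[int, int]:
    """Split an 'XX-YY' drive label into (enclosure ID, disk position)."""
    enclosure_id, disk_position = label.split("-")
    return int(enclosure_id), int(disk_position)

print(parse_drive_label("02-11"))  # (2, 11): enclosure 2, disk position 11
```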

Basic Enterprise Architecture


The architecture of a basic enterprise network storage system consists of three main
components: an initiator, a target, and a network device (switch/router).

An initiator can be a server or a client that starts the process of data transfer on the
storage network. The initiator sends commands to the target. The command can be an
instruction to read, write, or execute.

A target in this architecture is a storage system. The target device responds to the
initiator by retrieving the data from storage or by allowing the initiator to write the data to
storage. After the data is written, it then acknowledges back to the initiator.
The purpose of all storage systems is to pool together storage resources and make
those resources available to initiators connected to the storage network. 
Storage systems are classified into two types:

 Block-based storage system


 File-based storage system

Block-based storage systems share storage resources with initiators in blocks, raw
chunks of data, exactly the way a locally installed disk drive would. The host
operating system must format the block device with a file system before the user can
access and interact with it.
SAN storage arrays are block-based and connect using block-based protocols like:

 Fibre Channel (FC)


 Internet Small Computer System Interface (iSCSI)
 Fibre Channel over Ethernet (FCoE)
 Serial Attached SCSI (SAS)
File-based storage systems present storage data to initiators in a file system instead of
raw data segments, and maintain metadata about who can and cannot access files.
Where block storage is more concerned about optimizing read/write activities, file-based
storage is optimized for security and access.
NAS devices work with files instead of blocks and connect using file-based protocols
like:

 Network File System (NFS)


 SMB/CIFS

NAS Architecture Overview


A Network Attached Storage (NAS) device is a single, file-based storage appliance
used to share data with initiators within a network. The NAS appliance authenticates
clients and manages file operations internally, eliminating the need for additional
external file systems.

The NAS shares storage data via TCP/IP-based file-sharing protocols such as Network
File System (NFS) and Common Internet File System (CIFS)/Server Message Block
(SMB).

 NFS is a client/server protocol from Sun Microsystems that typically operates over TCP/IP networks in UNIX environments (it is also implemented in MS Windows).
 CIFS/SMB is the standard file-serving protocol for the MS Windows environment
and operates natively over TCP/IP networks. It supports several features including
authentication and encryption.
Note: The terms CIFS and SMB are interchangeable at this point.
Components of NAS System
A NAS system consists of two components: Controller and Storage:

 A controller is a compute system that contains components such as network,


memory, and CPU resources. An operating system that is optimized for file serving is
installed on the controller.
 Storage is used to persistently store data. The NAS system may have different
types of storage devices to support different requirements. The NAS system may
support SSD, SAS, and SATA in a single system.

SAN Architecture Overview


As discussed, a Storage Area Network (SAN) is a block-based storage network that
enables multiple compute systems to access and share storage resources. It also
enables data transfer between the storage systems. With long-distance SAN, the
data transfer can even be extended across geographic locations.
The SAN is designed around the type of network protocol selected. Here is a list of
typical SAN protocols:

 Fibre Channel (FC) is a widely used, high-speed protocol that transports data, commands, and status information between the compute systems and the storage systems. It provides serial data transmission over copper wire and optical fiber.
 Fibre Channel over Ethernet (FCoE) transports FC data along with regular
Ethernet traffic over high speed (such as 10 Gbps or higher) Ethernet links.
 iSCSI is an IP-based protocol that establishes and manages connections
between compute systems and storage systems over IP. iSCSI encapsulates SCSI
commands and data into IP packets and transports them using TCP/IP.
 SAS is commonly used to communicate with enclosures in a SAN system and
may be used as front-end communication. It provides four lanes of communication at up
to 12 Gbps.
These protocols and some additional communication protocols are discussed in the
following sections.

Communication Links
All SAN components must communicate with each other.

Different communication types are supported for management, front end, and back


end.
How Do We Connect It All?
Based on the protocol that is selected, each storage array is customized with I/O cards,
also called Host Bus Adapters (HBA), to provide connectivity for the front and back
ends.
Fibre Channel is used for connection to servers on the front end
or to older enclosures on the back end.

 8 Gbps and 16 Gbps capable speeds


 Cards with 2 or 4 ports
 QLogic and, in older systems, Emulex brands

iSCSI uses Ethernet connectivity on the front end only.

 1 Gbps and 10 Gbps speeds


 Cards can have one or two ports.
 QLogic 1 Gbps and 10 Gbps (copper connectivity only)
 Chelsio 10 Gbps (copper or optical fiber connectivity)
 Chelsio T520-BT (dual port RJ-45/10GBase-T)
 Chelsio T520-CR (10GBase-SR)

SAS is primarily for back end connectivity between controllers and enclosures. Some
newer controller models offer SAS connectivity on the front end.
SAS can also be used for front-end connection in the storage system models that
have controllers with internal storage, including the SC4020, SC7020, SC5020,
SCv2000, and SCv3000. SAS front end is connected directly to servers, bypassing
switches.

SAS HBA options:

 12 Gbps HBA: perforated faceplate
 6 Gbps HBA: perforated faceplate stamped "6 Gbps"
 6 Gbps Low Profile 9206: low-profile card; requires Mini SAS high-density cables

FC SAN Overview
Fibre Channel SAN (FC SAN) is a high-speed network technology that runs on high-
speed optical fiber cables and serial copper cables. The FC technology was developed
to meet the demand for the increased speed of data transfer between compute systems
and mass storage systems.
The FC architecture supports three basic interconnectivity options:
Point-to-Point

In this configuration, two nodes are connected directly to each other. This configuration provides
a dedicated connection for data transmission between nodes. However, the point-to-point
configuration offers limited connectivity and scalability and is used in a DAS environment.
Fibre Channel Arbitrated Loop (FC-AL)

In this configuration, the devices are attached to a shared loop.

 Each device contends with other devices to perform I/O operations. The devices
on the loop must arbitrate to gain control of the loop.
 At any given time, only one device can perform an I/O operation on the loop
because each device must wait for its turn to process a request.
 Adding or removing a device results in loop re-initialization, which can cause a
momentary pause in loop traffic.
 FC-AL can be implemented by directly connecting each device to one another in
a ring through cables.
 FC-AL implementations may also use FC hubs through which the arbitrated loop
is physically connected in a star topology.

Fibre Channel Switched Fabric (FC-SW)

 FC-SW involves a single FC switch or a network of FC switches (including FC
directors) to interconnect the nodes. It is also referred to as fabric connect.
 In a fabric, the link between any two switches is called an interswitch link (ISL).
ISLs enable switches to be connected together to form a single, larger fabric.
 Nodes do not share a loop; instead, data is transferred through a dedicated path
between the nodes.
 FC-SW configuration provides high scalability. The addition or removal of a node
in a switched fabric is not disruptive; it does not affect the ongoing traffic between other
nodes.

World Wide Name


Each device in the FC environment is assigned a 64-bit unique identifier called the
World Wide Name (WWN).

The FC environment uses two types of WWNs: World Wide Node Name (WWNN) and
World Wide Port Name (WWPN).

 WWNN is used to physically identify FC network adapters.


 WWPN is used to physically identify FC adapter ports or node ports.
A WWN is a static name for each device on an FC network. WWNs are burned into the
hardware or assigned through software. Several configuration definitions in an FC SAN
use WWN for identifying storage systems and FC HBAs. WWNs are critical for FC SAN
configuration as each node port must be registered by its WWN before the FC SAN
recognizes it. The name server in an FC SAN environment keeps the association of
WWNs to the dynamically created FC addresses for node ports.
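
WWNs are usually written as eight colon-separated hex bytes. Below is a small sketch that normalizes a WWPN string into that form (the WWPN value shown is illustrative only):

```python
def normalize_wwn(wwn: str) -> str:
    """Normalize a 64-bit WWN to lowercase, colon-separated hex bytes."""
    hex_digits = wwn.replace(":", "").replace("-", "").lower()
    if len(hex_digits) != 16:
        raise ValueError("a WWN is 64 bits (16 hex digits)")
    return ":".join(hex_digits[i:i + 2] for i in range(0, 16, 2))

# Illustrative WWPN value only.
print(normalize_wwn("2100-0024-FF4A-1B2C"))  # 21:00:00:24:ff:4a:1b:2c
```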
Zoning
Zoning is an FC switch function that enables node ports within the fabric to be logically
segmented into groups called "zones." Node ports within a zone can communicate with
each other. A zone in a SAN should have only one initiator with multiple targets.
Zoning can be categorized into three types:

 WWN zoning uses World Wide Names to define zones. The zone members are
the unique WWN addresses of the FC HBA and its targets (storage systems).
 Port zoning uses the switch port ID to define zones. In port zoning, access to a
node is determined by the physical switch port to which a node is connected. The zone
members are the port identifiers (switch domain ID and port number) to which FC HBA
and its targets (storage systems) are connected.
 Mixed zoning combines the qualities of both WWN zoning and port zoning.
Mixed zoning enables a specific node port to be tied to the WWN of another node.
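
A minimal sketch of single-initiator WWN zoning (hypothetical WWPN values, not a switch configuration format):

```python
# Each zone pairs one initiator port with the target ports it may reach.
zones = {
    "zone_host1_fd1": {
        "initiator": "21:00:00:24:ff:4a:1b:2c",             # host HBA port
        "targets": ["50:00:d3:10:00:ab:cd:01",              # array front-end ports
                    "50:00:d3:10:00:ab:cd:02"],
    },
}

def can_communicate(zones, wwpn_a, wwpn_b):
    """Two ports may talk only if some zone contains both of them."""
    for zone in zones.values():
        members = {zone["initiator"], *zone["targets"]}
        if wwpn_a in members and wwpn_b in members:
            return True
    return False

print(can_communicate(zones, "21:00:00:24:ff:4a:1b:2c", "50:00:d3:10:00:ab:cd:01"))  # True
```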
iSCSI SAN Network

IP SAN uses Internet Protocol (IP) for the transport of storage traffic. It transports block
I/O over an IP-based network via protocols such as iSCSI.
Advantages of using IP for storage networking:

 Most organizations have an existing IP-based network infrastructure, which could


also be used for storage networking and may be a more economical option than
deploying a new FC SAN infrastructure.
 Many long-distance disaster recovery (DR) solutions are already using IP-based
networks. In addition, many robust and mature security options are available for IP
networks.

iSCSI SAN Overview


Key components for iSCSI communication are:

 iSCSI initiators such as an iSCSI HBA


 iSCSI targets such as a storage system with an iSCSI port
 IP-based network such as a Gigabit Ethernet LAN

The three types of initiators are:

 Standard Network Interface Card (NIC) with software iSCSI adapter


 TCP Offload Engine (TOE) NIC with software iSCSI adapter
 iSCSI HBA
Native iSCSI connectivity
 iSCSI initiators are connected directly to iSCSI targets through IP network.
 FC components are not required for native iSCSI connectivity.
Bridged iSCSI connectivity

 This allows the initiators to exist in an IP environment while the storage systems
remain in an FC SAN environment.
 iSCSI gateway provides connectivity between a compute system with an iSCSI
initiator and a storage system with an FC port.

iSCSI Protocol Stack


The iSCSI protocol stack depicts the encapsulation order of the SCSI commands for
their delivery through a physical carrier.

 SCSI is the command protocol that works at the application layer of the Open
Systems Interconnection (OSI) model.
 The SCSI commands, data, and status messages are encapsulated into TCP/IP
and transmitted across the network between the initiators and the targets.
 iSCSI is the session-layer protocol that initiates a reliable session between
devices that recognize SCSI commands and TCP/IP. The iSCSI session-layer interface
is responsible for handling login, authentication, target discovery, and session
management.
 TCP is used with iSCSI at the transport layer to provide reliable transmission.
TCP controls message flow, windowing, error recovery, and re-transmission.
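
A simplified picture of the encapsulation order described above (a conceptual sketch only, not a real protocol implementation; the IQN, IP, and MAC values are illustrative):

```python
# Conceptual nesting only: each layer wraps the payload handed down to it.
scsi_command = {"opcode": "READ(10)", "lba": 2048, "length": 16}       # SCSI (application layer)
iscsi_pdu    = {"session": "iqn.example:host1", "scsi": scsi_command}  # iSCSI (session layer)
tcp_segment  = {"dst_port": 3260, "payload": iscsi_pdu}                # TCP (transport layer)
ip_packet    = {"dst_ip": "192.0.2.10", "payload": tcp_segment}        # IP (network layer)
ethernet     = {"dst_mac": "aa:bb:cc:dd:ee:ff", "payload": ip_packet}  # Ethernet (link layer)
```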
iSCSI Protocol Data Unit (PDU)
A PDU acts as the basic information unit of the iSCSI and is used to communicate
between initiators and targets.
The PDU functions to:

 Establish the connections


 Discover iSCSI
 Start a session
 Send commands
 Move data
 Receive status
 Control the SCSI task management functions
SAN vs. NAS

Evaluating Storage Solutions


You should now be familiar with the three main types of storage solutions: DAS, NAS,
and SAN.
All three solutions can be implemented on common storage systems, based on needs.

DAS

 Is suitable for enterprise and start-up businesses.


 The storage space cannot be shared or scaled
 Comparatively less efficient.
 Can integrate with SAN and NAS environments.
NAS

 Suitable for small to mid-range businesses.


 Storage space can be shared between multiple ESX (hypervisor) deployments.
 Highly scalable with respect to performance and capacity.
 Provides advanced features like thin provisioning, replication, and snapshots.
 It abstracts storage management from the server.

SAN

 Suitable for large-scale businesses.


 Storage space can be shared between multiple servers.
 Highly scalable with respect to performance and capacity.
 Supports synchronous replication.
 Provides highly resilient environment.

Introduction to PS Series
The Dell EMC PS Series delivers the benefits of consolidated networked storage in a self-
managing, iSCSI SAN. With its unique peer storage architecture, the PS Series delivers high
performance and availability in a flexible environment with low cost of ownership.

PS Series provides an enterprise software solution that delivers single-view management,


optimized performance, seamless expansion, comprehensive data protection, and flexible
online tiering.

The PS Series is an excellent solution for small to medium businesses looking to migrate from
DAS or NAS, streamline data protection or expand capacity.
Benefits of PS Series
The PS Series storage arrays support Tiered Storage, which enables you to define multiple tiers
or pools of storage in a single PS Series group (SAN). Tiered Storage provides administrators
with greater control over how disk resources are allocated. While online, volumes can be
allocated and moved between tiers of storage, providing high levels of service.

Benefits of PS series

 Flexible On-demand growth: A PS Storage Stack combines multiple PS Series


arrays into a single SAN.
 Single View Management: Uses a single SAN Manager regardless of the size
or form of the SAN.
 SAN Configuration choice: Aims to be scalable and run in the most severe
circumstances. Processes such as array expansion, disk replacements, and controller
upgrades can all be completed while the array is up and running.
 Meet cost-of-service needs: You can place data on the storage resource that is
appropriate for the type of data and how it is being used. For example, Backup data and
Replication data can be placed on RAID 50, and the transaction logs can be placed on
RAID 10.
 Set up technology-defined tiers: You can set up tiers with common RAID
levels or disk types or speeds.
 Organize according to physical location: Servers can be customized to
access the data from the nearest storage facility.
Peer Storage Architecture
peer describes the collaboration and equal partnership of components and arrays.
These peers work together to share resources, evenly distribute loads, and collaborate
to help optimize application performance and provide comprehensive data protection.

The result is an intelligent storage array that can deliver rapid installation and simple
SAN management. Using patented, page-based data mover technology, members of a
SAN work together to automatically manage data and load balance across resources.
Because of this shared architecture, enterprises can use PS Series arrays as modular
building blocks for simple SAN expansion. This architecture provides the basis for
numerous features and capabilities, including:

 Peer deployment
 Control
 Provisioning
 Protection
 Integration
PS Architecture: Groups, Pools, Members
The foundation of the PS Series storage solution is the PS Series Group. The group is
the starting point from which storage resources are assigned and allocated.

 A Group consists of at least one PS Series array and up to 16 arrays, depending


on the PS model. You can connect multiple arrays as a group to an IP network and
manage them as a single storage system.
 Each array added to a group is called a Member.
 By default, a group provides a single Pool of storage. From this pool, space is
allocated to users and applications by creating volumes, which are seen on the network
as iSCSI targets.
 If you have multiple members, you can divide group space into different storage
pools and then assign members. A member can only provide capacity once it is
configured with a RAID type.
PS Architecture: Volumes
Volumes provide the storage allocation structure within the PS Series Group. To access
storage in a group, some portion of a storage pool is allocated to volumes.

The volumes may be created on a storage pool with a single group member, or with
multiple group members. The administrator provides each volume a name, size, and a
storage pool.

When an administrator creates a volume, Thin Provisioning sizes it for the long-term
needs of the application without initially allocating the full amount of physical storage.
Instead, as the application needs more storage, capacity is allocated to the volume from
a free pool.
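
A minimal sketch of the thin-provisioning idea (hypothetical code, simplified to whole-page allocation):

```python
class ThinVolume:
    """Reports its full logical size but consumes physical pages only on write."""

    def __init__(self, name, logical_size_pages, free_pool):
        self.name = name
        self.logical_size_pages = logical_size_pages  # size presented to the host
        self.free_pool = free_pool                    # shared pool of physical pages
        self.allocated = {}                           # logical page -> physical page

    def write(self, logical_page, data):
        if logical_page not in self.allocated:
            if not self.free_pool:
                raise RuntimeError("storage pool exhausted")
            self.allocated[logical_page] = self.free_pool.pop()  # allocate on demand
        # ... store data in the physical page ...

pool = list(range(1000))                 # 1,000 free physical pages in the pool
vol = ThinVolume("vol01", 100000, pool)  # presented to the host as 100,000 pages
vol.write(0, b"data")
print(len(vol.allocated), "of", vol.logical_size_pages, "pages physically allocated")
```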

Snapshots protect volume data from mistakes, viruses, or database corruption.


Replicating volume data from one group to another also protects against potential
disasters.

(Figure: a PS Series group containing members, storage pools (single-member and multimember), storage space, volumes, thin-provisioned volumes, snapshots, and snapshot collections.)

Dynamic load balancing lets the group quickly find and correct bottlenecks as the
workload changes with no user intervention or application disruption.
A group provides three types of load balancing within the arrays in a storage pool:

 Capacity load balancing: The group distributes volume data across disks and
members, based on capacity.
 Performance load balancing: The group tries to store volume data on members
with a RAID configuration that is optimal for volume performance, based on internal
group performance metrics.
 Network connection load balancing: The group distributes iSCSI I/O across
network interfaces, minimizing I/O contention and maximizing bandwidth.
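
As a rough illustration of capacity load balancing (hypothetical logic, not the PS Series algorithm), volume pages can be spread across pool members in proportion to free capacity:

```python
# Hypothetical illustration: distribute a volume's pages across pool members
# in proportion to each member's free capacity.
members = {"member1": 40_000, "member2": 20_000, "member3": 10_000}  # free pages

def distribute(volume_pages, members):
    total_free = sum(members.values())
    plan = {m: volume_pages * free // total_free for m, free in members.items()}
    plan[max(members, key=members.get)] += volume_pages - sum(plan.values())  # remainder
    return plan

print(distribute(7_000, members))  # {'member1': 4000, 'member2': 2000, 'member3': 1000}
```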

PS Series Product Portfolio


The simplified, unified, virtualized Dell EMC PS Series is the iSCSI market leader in storage
solutions. PS Series is ideal for companies and organizations with growing data and
performance needs.


Dell EMC PS4210 Array Series:

The PS4210 Series is the first hybrid storage array in the entry-level PS4000 Series.

The PS4210 is aimed at small-to-medium data centers and remote offices that want to upgrade
to a SAN.

Dell EMC PS6210 Array Series:

The Dell EMC PS6210 arrays offer IT generalists the ability to manage more data with fewer
resources. It has the capability to tightly integrate with common application environments and
the flexibility to be used with various operating systems.

Dell EMC PS6610 Array Series:

The PS6610 arrays combine next-generation powerful controllers and an updated dense
storage architecture to meet large data repository requirements.
Dell EMC PS-M4110 Blade Storage System:

The PS-M4110 Series delivers all the functionality and enterprise-class features of a traditional
PS array in a blade form factor. It is the industry’s only fully redundant, enterprise-class storage
array designed to fit inside a blade chassis.

Dell EMC FS7610 with FluidFS v4:

An FS7610 NAS appliance supports all PS arrays.


PS4210 Array Series
PS4210 Series arrays offer next-generation storage with increased memory, performance, and
connectivity options for small-to-medium data centers, remote offices, and current PS Series
environments.

The PS4210 has a design similar to the PS6210, using the same dual controller architecture and
chassis hardware.

PS6210 Array Series


The PS6210 family of high-performance arrays provides enhanced, simplified
performance for the small-to-medium enterprise. Powered by updated firmware, the
PS6210 Series is ideal for virtualization, databases, consolidation, and heavy I/O
applications.
PS6610 Array Series
The PS6610 Series arrays are designed to address high-capacity requirements such as
mixed workloads consolidation, secondary storage/ archival, and video surveillance.
This ultra-dense series includes a hybrid array configuration that autotiers data. It stores
frequently accessed data on SSDs for rapid response, and stores cold data on HDDs
within the same array.

PS-M4110 Blade Array Series


The PS-M4110 converges compute, network, and storage in a single PowerEdge M1000e blade
chassis. Up to four blade arrays per chassis help reduce data center real estate, cable
configurations, and power and cooling requirements. The PS-M4110 Blade Array is available in
four different configurations.
FS7610 Array Series
The FS7610 is a simplified and versatile NAS appliance built to be integrated into the PS Series
architecture. It is compatible with all existing PS Series arrays.

The FS7610 is built on Fluid File System (FluidFS) software, a high-performance distributed file
system that optimizes file access performance. It does not have the strict limits on file system
size and share size inherent in other NAS solutions, so the capacity required to store common
enterprise data can be dramatically decreased.

The storage infrastructure can be managed through a single FS7610 Group Manager interface
to improve productivity. It is easy to configure and manage iSCSI, CIFS/SMB, and NFS storage.

PS Series Software Overview


PS Series arrays offer all-inclusive array management software, host software, and free
firmware updates. This comprehensive software suite provides event monitoring, data
protection, automatic virtualization, and resource optimization within the SAN.
SAN Operating System

 PS Series Firmware
Infrastructure Management

 PS Series Group Manager


 Manual Transfer Utility
 Dell Storage Update Manager
Monitoring & Analytics

 EqualLogic SAN Headquarters


Host Integration

 Microsoft, VMware, and Linux Platform Integration Tools


Storage Center Integration

 Dell Storage Manager

 PS Series Firmware

 Dell Storage PS Series Firmware is integrated across the entire family of PS Series
storage arrays to virtualize SAN resources. System resources are automatically adjusted
to optimize performance and capacity and reduce IT administrative requirements.

PS Series Firmware is installed on each array and provides capabilities such as volume
snapshots, cloning, and replication to protect data in the event of an error or disaster.

Infrastructure Management
PS Series Group Manager
Group Manager is a management tool, integrated with the PS Series Firmware, that provides
detailed information about block and file storage configuration. It is an easy-to-use tool
for storage provisioning, data protection, and array management.

The Group Manager is available as a CLI or GUI with search functionality, accessible through
web browsers with a connection to the PS Series SAN.

Manual Transfer Utility


The Manual Transfer Utility is a host-based tool that enables the replication of large amounts of
block storage data using removable media. Integrated with the native replication function of the
PS Series Firmware, Manual Transfer Utility helps to eliminate network congestion, minimize
downtime and accelerate replication setup.

Dell Storage Update Manager


Dell Storage Update Manager provides guided update management to simplify the process of
updating EqualLogic platform firmware. Storage Update Manager works with PS Series, FS
Series, and disk drive firmware to facilitate both single-member and multimember group
updates.

Monitoring and Analytics


EqualLogic SAN Headquarters

PS Series SAN Headquarters (SAN HQ) provides in-depth reporting and analytics, consolidated
performance, and robust event monitoring across multiple PS Series groups.

From a single GUI, information about the monitored PS Series groups is available at a glance.
This includes configuration and performance of pools, members, disks, and volumes, as well as
any alerts. If action is required, the PS Series Group Manager can be launched directly from SAN
HQ.

With SAN HQ, administrators can also:

 Analyze how close a group, pool, or array is to its full capabilities

 Integrate with third-party SNMP event management consoles

 Monitor performance and capacity metrics

 Launch Storage Update Manager directly from the SAN HQ Group list

 Evaluate storage reliability based on current RAID levels


Integration Tools
EqualLogic Host Integration Tools
The PS Series set of Host Integration Tools allows PS Series platforms to coordinate
functions with hosts and applications in Microsoft, VMware, and Linux environments.
These integration tools deliver advanced data protection, high availability and
performance, and simplified management of application data and virtual machines.
Key tools include:

 Datastore Manager
 Auto-Snapshot Manager
 Virtual Desktop Deployment Manager

Dell Storage Manager


Dell Storage Manager (DSM)
When adding Storage Center (SC) arrays to the PS Series environment, DSM offers
common management and cross-platform replication capabilities for both PS Series and
SC Series arrays. DSM provides a “single pane of glass” management tool for PS
Series and SC Series activities including:

 Initiating a volume replication between PS Series and SC Series


 Configuring and monitoring scheduled replication tasks
 Configuring system alerts for specific management activity or issues via log files,
SNMP traps, and email notification

Hardware Troubleshooting Basics


Identifying failed drives

A drive failure is one of the most common PS hardware issues encountered. Properly
diagnosing a failed drive is critical to quick issue resolution.
Drive failure can be confirmed by the following:

 A message on the console


 An alert in the event log
 An alert in the Group Manager Alarms panel
 Indications in the Group Manager Member Disks window
 CLI > member select show disks command
 LEDs on the drive
The fastest way to identify a failed drive is by LED activity. Locate the ACT LED on the
suspect drive. If the LED is amber, the drive has failed. Follow standard operating
procedures to remove and replace the failed drive.
Note: To locate the drive number:

 For 2.5-inch drives (installed vertically), the drives are numbered 0–23, left to
right.
 For 3.5-inch drives (installed horizontally), the drives are numbered from left to
right and top to bottom, starting with 0 on the upper left side.
Hardware Troubleshooting Basics
Identifying Faulty Power Supplies

LEDs are also a quick way to diagnose issues with power supply modules. Locate the power
supply module on the back of the array and check the LED activity. If the DC Power LED is off
and the Fault LED is amber, the module most likely needs to be replaced.

Follow standard operating procedures to remove and replace the faulty module.
Troubleshooting PS Series Groups in DSM
There are several logs available to help troubleshoot issues that may arise with the PS Series
Group. By default, the logs are disabled. The user can enable debug logs for PS Groups from
the Data Collector Manager (DCM).
Types of logs available:

 Data Collector Logs Path
 <DSM Install Location>\msaservice\etc\compservices\compservice*.log
 <DSM Install Location>\msaservice\wildfly-8.2.0.Final\standalone\log\server.log
 Debug Logs Path
 <DSM Install Location>\msaservice\etc\compservices\debuglogs\SIMS\
 SIMS Logs Path
 <DSM Install Location>\msaservice\etc\compservices\debuglogs\SIMS\
 DSM Client Logs Path
 Windows - %userprofile%\AppData\Local\Compellent\EMClient\
 Linux - ~/.config/dell/EMClient
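
For example, a small helper could pick the DSM client log directory for the current platform (a hypothetical convenience script, with the client paths above taken as given):

```python
import os
import platform

def dsm_client_log_dir() -> str:
    """Return the DSM client log directory for the current OS (paths as listed above)."""
    if platform.system() == "Windows":
        return os.path.expandvars(r"%userprofile%\AppData\Local\Compellent\EMClient")
    return os.path.expanduser("~/.config/dell/EMClient")

print(dsm_client_log_dir())
```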

Key points covered in this module include:

 The PS Series is an excellent solution for small to medium businesses looking to


migrate from DAS or NAS, streamline data protection or expand capacity.
 The PS Series is based on a unique peer storage architecture that can deliver
rapid installation and simple SAN management.
 The foundation of the PS Series storage solution is the PS Series Group.
 PS Series Firmware is installed on each array and provides capabilities such as
volume snapshots, cloning, and replication to protect data in the event of an error or
disaster.
 SAN HQ provides in-depth reporting, analytics, and event monitoring across
multiple PS Series groups.

Introduction to SC Series
Dell EMC SC Series is a versatile storage solution in the midrange storage portfolio.
The benefits of the SC Series include:

 Simple setup and configuration


 Performance and scalability
 Available in all-flash and hybrid solutions
 SC5020 offers the lowest price per GB with compression and deduplication
 Flexible block protocols: FC, SAS, iSCSI, or FCoE for host connection

SC Series Product Portfolio


The SC Series portfolio can be organized into three categories:

 Value Storage: Models including a "v" in the name are geared toward smaller
businesses with entry-level storage needs. The controller and storage disks exist in the
same chassis.
 Storage Arrays: These models include the controller and storage disks in the
same chassis. Expansion enclosures can be added as storage needs increase.
 Storage Controllers: These models are controllers only. To function as a SAN,
expansion enclosures must be added.
SCv2000 Series
All three versions of the SCv2000 Series (SCv2000/SCv2020/SCv2080) are controller/enclosure combinations.
The last two digits indicate the quantity of built-in disks:

 SCv2000 = 12 disks (2U height)
 SCv2020 = 24 disks (2U height)
 SCv2080 = 84 disks (5U height)

SCv2000/SCv2020 and SCv2080 have five different controller module options:


Compatible Enclosures

SCv3000 Series
Both versions of the SCv3000 Series (SCv3000/SCv3020) are controller/enclosure combinations. The
naming convention for these models is different from other models in the SC Series.

 SCv3000 = 16 disks (3U height)
 SCv3020 = 30 disks (3U height)

SCv3000

 3.5-inch SAS hard drives are installed horizontally side by side.


 Minimum: seven drives (four if they are SSDs) internally
 Maximum: 16 drives internally

SCv3020

 2.5-inch SAS hard drives are installed horizontally side by side.


 Minimum: seven drives (four if they are SSDs) internally
 Maximum: 30 drives internally
Compatible Enclosures

 Supports up to 960 drives in five redundant path SAS chains.


 An SCv3020 supports up to 16 SCv300 or up to 8 SCv320 per chain.

The SCv360:

 4U in height
 12 Gbps speed
 SCv3000/SCv3020 supports up to a maximum of three SCv360 expansion
enclosures.

 SC4020
 The SC4020 is a controller/enclosure combination 2U in height.

The SC4020 only comes in a 24-disk option.

 2.5-inch SAS hard drives are installed vertically side by side.


 Minimum: seven drives (four if they are SSDs) internally
 Maximum: 24 drives internally

Compatible Enclosures
In addition to its 24 internal drive slots, the SC4020 supports up to 192 additional drives in a
combination of expansion enclosures.

The SC200 Series of enclosures is compatible.

SC200 and SC220 enclosures:

 2U in height
 Requires SCOS 6.2 or newer
 6 Gbps speed
 Allows a total of 168 disks in a chain
 SC200/SC220 cannot be in the same chain with other enclosure models

SC280 enclosure:

 5U in height
 Requires SCOS 6.4 or newer
 6 Gbps speed
 Three 14-drive rows in each of two drawers

 SC7020
 The SC7020 is a storage system that combines a controller and disk enclosure in a single 3U chassis.
 2.5-inch SAS hard drives are installed horizontally side by side.
 Minimum: seven drives (four if they are SSDs) internally
 Maximum: 30 drives internally
SC7020F

 The SC7020F is the all-flash version of the SC7020, optimized for SSDs and SEDs; it does not
support spinning HDDs.
 Unlike the SC7020, the SC7020F supports all premium SC features without
requiring additional licenses.

In addition to its 24 internal drive slots, the SC4020 supports up to 192 additional drives in a
combination of expansion enclosures.

The SC200 and SC400 Series of enclosures are compatible.

SC400 and SC420 enclosures:

 2U in height
 12 Gbps SAS

SC420F enclosure:

 Flash-only: 24 2.5" SSDs or SEDs
 Requires SC5020F or SC7020F (all-flash) controllers
 12 Gbps SAS

SC460 enclosure:

 4U in height
 12 Gbps SAS

SC5020
The SC5020 is a 3U storage system that is intended to replace the SC4020. It uses the
same chassis as the SC7020, so the SC5020 looks similar to the SC7020 but has fewer
features.
SC5020

 2.5-inch SAS hard drives are installed horizontally side by side.


 Minimum: seven drives (four if they are SSDs) internally.
 Maximum: 30 drives internally.
SC5020F

In addition to its 24 internal drive slots, the SC5020 supports up to 192 additional drives in a
combination of expansion enclosures.

SC8000
The SC8000 is a controller that is 2U in height.
This model is no longer being sold but still exists in many customer environments.
Compatible Enclosures

The SC200 series of enclosures is compatible with the SC8000, along with older models of
enclosures that are no longer supported.

SC9000
The SC9000 is a controller that is 2U in height.
The SC9000 allows customers to build their SAN as they see fit. Dual SC9000s
combined with compatible enclosures make up a custom system.
The SC9000 can be configured as:

 All-Flash
 Hybrid
 HDD
Compatible Enclosures

Storage Center Operating System (SCOS)


Storage Center Operating System (SCOS) software enables the core and licensed
features that make the SC Series function.

Newer hardware models require newer SCOS versions. Refer to the table for
specific requirements per controller.

SCOS Compatibility
Storage Tiers
Fluid Data Architecture allows all of the storage available to be used for all of the data.
Data flows between RAID and disk types. Storage Center manages all disks as a single
pool of resources which are accessed by multiple volumes.
Storage Profiles
Storage Profiles are associated with volumes. Data written to that volume is stored
using the RAID level and storage tier defined in that Storage Profile.

By default, a Storage Center with Data Progression licensed uses the Recommended


(All Tiers) Storage Profile. This should provide optimal performance for most systems.
For systems without a Data Progression license, High Priority (Tier 1) is the default
(data does not move between tiers without licensed Data Progression).

The default Storage Profiles cannot be modified, but custom Storage Profiles can be
created.
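
A minimal sketch of how a Storage Profile can be modeled as the RAID level and tiers applied to a volume's writes (hypothetical structure and values, not the SCOS implementation):

```python
from dataclasses import dataclass

@dataclass
class StorageProfile:
    name: str
    raid_level: str   # RAID level used for new writes (illustrative)
    tiers: tuple      # storage tiers the data may occupy

# Hypothetical stand-ins for the default profiles described above.
RECOMMENDED = StorageProfile("Recommended (All Tiers)", "RAID 10", ("Tier 1", "Tier 2", "Tier 3"))
HIGH_PRIORITY = StorageProfile("High Priority (Tier 1)", "RAID 10", ("Tier 1",))

volume_profiles = {"vol01": RECOMMENDED, "vol02": HIGH_PRIORITY}

def placement_for(volume: str) -> str:
    p = volume_profiles[volume]
    return f"{volume}: new writes use {p.raid_level} on {p.tiers[0]}"

print(placement_for("vol01"))
```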

Core Features
Key points covered in this module include:

 The SC8000 & SC9000 are the only models that are controllers only. All other
models include the controller and storage disks in the same chassis.
 The Storage Center OS (SCOS) enables the core and licensed features that
make the SC Series function.
 Dell Storage Manager (DSM) is the sole management tool used for systems
running SCOS 7.0 or newer.
 Data is stored in sectors, sectors are stored in pages, and pages are written
across disks or mirrored depending on the RAID designation.
 Storage Center manages all disks as a single pool of resources which are
accessed by multiple volumes.
 Data written to a volume is stored using the RAID level and storage tier defined
in the associated Storage Profile.
 Dual controllers enhance overall performance and availability through multi-
threaded read-ahead, and mirrored write cache.

Dell EMC Hyper-Converged Storage


Solutions
One area of IT infrastructure that has seen substantial development in recent years is
converged storage solutions.

Converged storage represents efforts to collapse the compute, networking, storage,


and virtualization layers into a single appliance. This reduces the storage footprint in
data centers, minimizes compatibility issues, and simplifies the scale out of storage
capacity.

Hyper-converged storage describes a software-defined approach to storage


management that adds tighter integration between components and offers a more
robust set of features.

Dell EMC hyper-converged offerings include the XC Series and VxRail.


This section explores both appliances, how they scale to form networks called clusters,
and how their software platforms work to allocate and manage storage across the
cluster.
XC Series + Nutanix
The Dell EMC XC Series of hyper-converged appliances integrates the PowerEdge server
platform with Nutanix storage management software.

VxRail
Dell EMC VxRail is a hyper-converged appliance that is tightly integrated with and managed by
VMware vSAN software.

XC Series Introduction

XC Series appliances can be deployed into any data center in less than 30 minutes and
support multiple virtualized, business-critical workloads and virtualized Big Data
deployments.

This enables data center capacity and performance to be expanded—one node at a


time—delivering linear and predictable scale-out expansion with pay-as-you-grow
flexibility.

XC Series offers a flexible choice of hypervisors, which makes it an excellent solution


for all enterprise workloads and applications running in virtual environments.
Key Nutanix Terms
The XC series is built on the Nutanix web-scale hyper-converged infrastructure. The key
concepts of this infrastructure are defined here.
XC Series: PowerEdge and Nutanix
The XC series is based on 14th generation PowerEdge servers (14G) with factory-
installed Nutanix software and a choice of industry-leading hypervisors. It is available in
flexible combinations of CPU, memory, and SSD/HDD, including NVMe SSDs.

Standard features include thin provisioning and cloning, replication, data tiering,
deduplication, and compression.
The 14G PowerEdge server provides higher core counts, greater network throughput, and
improved power efficiency.

The PowerEdge server portfolio supports the latest generation of Intel Skylake processors and
the AMD Epyc SP3 processor.

Nutanix delivers web-scale IT infrastructure to medium and large enterprises. The software-
driven virtual computing platform combines compute and storage into a single solution to drive
simplicity in the datacenter.

Customers may start with a few servers and scale to thousands, with predictable performance
and cost. With a patented elastic data fabric and consumer-grade management, Nutanix is the
go-to solution for optimized performance and policy-driven infrastructure.

XC Series Portfolio Overview


XC Architecture: Node
The XC series architecture is built around the node. Essentially, a node is a single XC appliance
consisting of a hypervisor, flash storage for hot data, an HDD for cold data, and a virtual
controller.

Data is managed dynamically based on how frequently it is accessed. Frequently accessed, or


"hot" data is kept on the SSD tier while "cold" data is migrated to the HDD tier. Cold data that is
accessed frequently is again moved back to the SSD tier.

The XC6420 has 4 nodes per chassis and is available in an all-flash configuration.
XC Architecture: Cluster
Nodes are interconnected into clusters, which process huge amounts of data in a
distributed infrastructure. Each scale-out cluster is composed of a minimum of three
nodes and has no specified maximum.

Built-in data redundancy in the cluster supports high availability (HA) provided by the
hypervisor. If a node fails, HA-protected VMs can be automatically restarted on other
nodes in the cluster.
XC Architecture: DSF
The Nutanix Acropolis OS powers the Distributed Storage Fabric (DSF) feature, which
aggregates local node storage across the cluster. DSF operates via the interconnected
network of Controller VMs (CVMs) that form the cluster.

 DSF aggregates local storage into a single storage pool.


 Storage pools can be partitioned into data stores.
 Data stores are presented to the hypervisors using the NFS protocol.
 The hypervisors and the DSF communicate using industry-standard NFS, iSCSI,
and SMB3 protocols.
 Every node in the cluster has access to data from shared SSD, HDD, and cloud
resources.
DSF handles the data path for snapshots, clones, high availability, disaster recovery,
deduplication, compression, and erasure coding.

The Nutanix Acropolis OS is the foundation of the XC platform. It uses the compute, storage,
and networking resources in the XC Series nodes to increase existing application efficiency and
achieve maximum performance benefits. Also, leftover compute, storage, and networking
resources are available to do things like power new applications or expand existing ones.

Nutanix Prism is an end-to-end management solution for the XC Series that streamlines and
automates common workflows, eliminating the need for multiple management solutions. Prism
provides all administrative tasks related to storage deployment, management, and scaling from
a single pane of glass.

Powered by advanced machine learning technology, Prism also analyzes system data to
generate actionable insights for optimizing virtualization and infrastructure management.
Cluster Monitoring
The Nutanix web console provides several mechanisms to monitor events in the cluster.
To monitor the health of a cluster, virtual machines, performance, and alerts and events,
the Nutanix Web GUI provides a range of status checks.

 Dell XC730 Series appliances require a minimum of three nodes to create a


cluster.
 It is a best practice to connect the management network port across all three
nodes in the cluster to the same switch.

 The “C” in XC730xd-12C means the node is for storage capacity ONLY. It does
not run workload VMs or virtual desktops.
 It is a best practice to connect the management network port for each 12C to the
same switch and then out to all three node management ports.
XC Hardware: SATADOM
The SATA Disk‐On‐Motherboard (SATADOM) shipped with XC Series appliances is intended as
an appliance boot device.

By default, the SATADOM comes with a power cable installed and is set in a read/write position.

The hypervisor boot device is not intended for application use. Adding more write intensive
software to the SATADOM boot disk results in heavy wear on the device beyond design
specifications, resulting in premature hardware failure.
Key points covered in this module include:

 The XC Series of hyper-converged appliances integrate PowerEdge servers, storage, and Nutanix software to create a single, easy-to-deploy platform.
 The Nutanix Acropolis Operating System (AOS) and Nutanix Prism end-to-end
management software are pre-installed on every XC appliance.
 The XC series architecture is built around the node: a single XC appliance
consisting of a hypervisor, flash storage for hot data, an HDD for cold data, and a virtual
controller.
 Nodes are interconnected into clusters, which are composed of a minimum of
three nodes.
 Every node in the cluster has access to data from shared SSD, HDD, and cloud
resources.

Introduction to VxRail
VxRail is a VMware hyper-converged infrastructure appliance delivering an all-in-one IT
infrastructure solution.

Powered by VMware vSAN and Dell EMC PowerEdge servers, VxRail delivers
continuous support for the most demanding workloads and applications.
VxRail is engineered for efficiency and flexibility, and is fully integrated into the VMware
ecosystem.

VxRail Benefits: Use Cases


VxRail supports a wide variety of use cases from mission-critical workloads to Virtual Desktop
Infrastructure.

VxRail for Remote Office

 VxRail is tailored for compute and capacity deployment to remote locations.


 VxRail is a fast and easy solution for IT professionals managing enterprise data
centers with numerous satellite locations.
 All management can be done from a central location.
 Remote Office/Branch Office (ROBO) solutions also offer processing, separation,
and protection across geographies.
VxRail for Mixed Server Workloads
 The VxRail Appliance is ideal for customers with mixed server workloads. It is a
perfect platform for organizations that need to manage hundreds of applications across
the enterprise, while being asked to shorten time-to-market for new ones.
 VxRail lets you rapidly provision virtual machines (VMs): you can create a
template based on your specifications or customize each VM as required.
VxRail for Business Critical Workloads

 Departments, projects, or application owners are challenged to meet SLAs while identifying and resolving application issues and ensuring high availability.
 VxRail provides real-time monitoring and diagnosis, automated patches and
maintenance of systems, and quick provisioning of new environments to meet SLAs.
VxRail for Virtual Desktop Infrastructure (VDI)

 Organizations need a fast, easy, non-disruptive way to deploy end-user computing (EUC) with a consistent user experience. They want to improve productivity, support/manage new devices (including mobile), and ensure that intellectual property is safe.
 VxRail's Flash offerings are a perfect fit for VDI deployments under 1000
desktops.
 VxRail supports both industry-leading VMware Horizon 6 and Citrix XenDesktop,
providing the flexibility to adapt to the company's VDI requirements.
 With the optional addition of GPUs in the Dell nodes, VxRail can support high-end
2D and 3D visualization.

VxRail in the Dell-EMC Storage Portfolio


Dell EMC provides a full scale portfolio of converged and hyper-converged platforms
that help customers increase the value of their IT investments by reducing time, cost,
and the risks associated with:

 Deployment
 Configuration
 Management of distinct components
VBlock/VxBlock are converged platform offerings. These systems integrate enterprise
class compute and networking with Dell EMC storage while offering a choice of network
virtualization.
VxRack System is a rack-scale hyper-converged system with integrated networking. It
is the perfect choice for core data centers that require both enterprise-grade resiliency
and the ability to start small and grow linearly.
VxRail Appliances are specifically for departmental and EDGE applications as well as
small enterprise and mid-market data centers. Like VxRack, they do not contain a
storage array, but instead run a software-defined storage environment on the appliance.

VxRail Appliance Hardware Components


The VxRail appliance is built on a Dell EMC branded 2U 4-node chassis or Dell
PowerEdge servers with a single node in 1U or 2U form factors.
The VxRail architecture includes:

 Top of Rack network switches


 Chassis/servers connected through a customer provided network switch.
 VxRail nodes containing dedicated compute, memory, storage, and network
ports.
VxRail nodes are clustered and can scale from 3 to 64 nodes in a cluster. All nodes
within a cluster must have the same storage configuration, either Hybrid or All Flash.

Minimum Configuration
A 3-node VxRail is the minimum configuration and is ideal for proof of concept (POC) projects.
The configuration is easily expandable, but any additional nodes must have the same physical
attributes as the first three nodes.
VxRail Models
The introduction of the Dell PowerEdge server platform adds flexibility to meet any challenge or
deployment scenario. Dell EMC updated the naming and branding of the VxRail to reflect the
use cases they target.
All models are available with All Flash except the storage-dense S series.

On Demand Scaling
Customers can expand their infrastructure on demand by dynamically adding additional
chassis for a maximum of 16 chassis or up to 64 nodes with an RPQ.

This enables organizations to run from 40-200 VMs on a single appliance and up to
3,200 VMs overall with 64 nodes. All nodes within a cluster must have the same storage
configuration, either Hybrid or All Flash.
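As a quick sanity check of the figures above, the short sketch below multiplies out the chassis, node, and VM counts. The 4-nodes-per-appliance and 40-200 VMs-per-appliance values come from this section, not from a published sizing guide.

```python
# Back-of-the-envelope check of the scaling figures quoted above.
nodes_per_appliance = 4                      # 2U, 4-node chassis
max_appliances = 16
max_nodes = nodes_per_appliance * max_appliances   # 64 nodes

vms_per_appliance = (40, 200)                # range quoted above
max_vms = vms_per_appliance[1] * max_appliances    # 3,200 VMs at full scale

print(max_nodes, max_vms)                    # 64 3200
```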
VMware Software Stack
The VMware solution for hyper-converged infrastructures consists of the following:

 vSphere - an industry-leading hypervisor


 vCenter Server - a complete data center services management solution
 VMware vSAN - a software-defined, enterprise class storage solution
VMware Virtual SAN (vSAN)
VxRail is powered by VMware vSAN.

vSAN is a software-defined storage solution that pools together VxRail nodes across a VMware
vSphere cluster to create a distributed, shared data store that is controlled through Storage
Policy-Based Management.

An administrator creates policies that define storage requirements like performance and
availability for VMs on a VxRail cluster. vSAN ensures that these policies are administered and
maintained.
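The sketch below illustrates, in plain Python, what such a policy conceptually looks like. The rule names mirror common vSAN capabilities (failures to tolerate, stripe width, flash read cache reservation), but the structure is an illustration only, not the vSphere or vSAN API.

```python
# Illustrative only: a storage policy expressed as plain data.
# The rule names mirror common vSAN capabilities; this is not the vSphere API.
gold_policy = {
    "name": "gold",
    "failures_to_tolerate": 1,   # host/disk failures an object must survive
    "stripe_width": 2,           # capacity devices each replica is striped across
    "flash_read_cache_pct": 0,   # read-cache reservation (hybrid clusters only)
}

def required_replicas(policy: dict) -> int:
    """With mirroring, FTT + 1 full copies of the data are needed."""
    return policy["failures_to_tolerate"] + 1

print(f"{gold_policy['name']}: {required_replicas(gold_policy)} replicas per object")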
vSphere 6.5
VMware vSphere is the brand name for the VMware suite of virtualization products.
The VMware vSphere suite includes:

 VMware ESXi abstracts processor, memory, storage, and other resources into multiple virtual machines (VMs).
 VMware vCenter Server is a central control point for data center services such
as access control, performance monitoring, and alarm management.
 VMware vSphere Web Client allows users to remotely connect to vCenter
Server from various Web browsers and operating systems.
 vSphere Virtual Machine File System (VMFS) provides a high-performance
cluster file system for ESXi VMs.
 vSphere vMotion allows live migration for powered-on virtual machines in the
same data center.
 vSphere High Availability (HA) provides uniform, cost-effective failover
protection against hardware and operating system outages.
 vSphere Distributed Resource Scheduler (DRS) divides and balances
computing capacity for VMs dynamically across collections of hardware resources.
Note: This feature is not part of the vSphere Standard Edition.
 vSphere Distributed Switch (VDS) allows VMs to maintain network
configurations as the VMs migrate across multiple hosts.
VxRail Manager
The VxRail Manager software is designed to deploy, update, monitor, and maintain the VxRail
appliance. It provides broader hardware awareness than is available in the node tools.

The VxRail Manager captures events and provides real-time holistic notifications about the state
of virtual applications, virtual machines, and appliance hardware.

vSAN Features: Fault Domains


 A fault domain consists of one or more vSAN hosts grouped according to their
physical location in the data center. Fault domains enable vSAN to tolerate failures of
entire physical racks, a single host, capacity device, network link, or a network switch
dedicated to a fault domain.
 A minimum of three fault domains is required. For best results, configure four or
more fault domains in the cluster. A cluster with three fault domains has the same
restrictions as a three-host cluster.
 For example, a vSAN cluster might consist of eight nodes, esxi-01 through esxi-08,
grouped into four fault domains (FD) of two nodes each, as laid out in the sketch that follows.
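A minimal Python sketch of that eight-node, four-fault-domain layout is shown below; host and domain names are placeholders.

```python
# The eight-node / four-fault-domain example above, written out as plain data.
# Host and fault-domain names are placeholders.
fault_domains = {
    "FD1": ["esxi-01", "esxi-02"],
    "FD2": ["esxi-03", "esxi-04"],
    "FD3": ["esxi-05", "esxi-06"],
    "FD4": ["esxi-07", "esxi-08"],
}

# vSAN places replicas in different fault domains, so losing an entire
# rack (one FD) still leaves at least one intact copy of every object.
assert len(fault_domains) >= 3, "vSAN requires at least three fault domains"
for fd, hosts in fault_domains.items():
    print(fd, hosts)
```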

vSAN Features: Stretched Clusters


Stretched clusters extend the vSAN cluster from a single site to two sites for a higher
level of availability and intersite load balancing.

 Stretched clusters are typically deployed in environments where the distance between data centers is limited, such as metropolitan or campus environments.
 In a stretched cluster configuration, both sites are active sites. If either site fails,
vSAN uses the storage on the other site, and vSphere HA restarts the affected VMs on the
remaining active site.
 One site is designated as the preferred site. The other site becomes a secondary
site.

vSAN Features: Snapshots


With vSAN, administrators can create, roll back, or delete VM snapshots using the Snapshot
Manager in the vSphere Web client.

Enhancements to snapshots include minimal performance degradation and greater snapshot depth.
Each VM supports a chain of up to 32 snapshots.

vSAN Features: Erasure Coding


Erasure coding provides enhanced data protection from a disk or node failure in a
storage-efficient fashion compared to traditional protection solutions. Erasure coding
stripes the data in equal-size chunks.
There are two protection options.

 RAID-5, single parity to protect against one failure


 RAID-6 to protect against two concurrent failures
Erasure coding is available only for all flash VxRail systems.
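To see why erasure coding is more storage-efficient than mirroring, the sketch below compares the raw capacity needed to store 100 TB of usable data, assuming the common 3+1 (RAID-5) and 4+2 (RAID-6) stripe layouts; the exact layouts in a given release may differ.

```python
# Rough capacity-overhead comparison. Stripe layouts (3+1, 4+2) are assumptions
# based on common RAID-5/RAID-6 erasure-coding schemes.
def raw_needed(usable_tb: float, data_chunks: int, parity_chunks: int) -> float:
    """Raw capacity required to hold usable_tb of data under the given layout."""
    return usable_tb * (data_chunks + parity_chunks) / data_chunks

print(raw_needed(100, 1, 1))   # RAID-1 mirror, FTT=1 -> 200 TB raw
print(raw_needed(100, 3, 1))   # RAID-5, FTT=1        -> ~133 TB raw
print(raw_needed(100, 4, 2))   # RAID-6, FTT=2        -> 150 TB raw
```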
VxRail is delivered to customers without a preconfigured network switch. Customers
have the option of including a Dell EMC Connectrix switch with their purchase, or they can
provide, provision, install, and configure top-of-rack network switches for use with their
appliance.

 A typical implementation uses one or more customer-provided 10 GbE Top of Rack (ToR) switches to connect each node in the VxRail cluster.
 For smaller environments, an option to use 1 GbE switches is available, but
these lower-bandwidth switches limit performance and scaling.
 Switch configuration guidelines are provided to streamline the deployment.
 A validation utility is available to test the switch setup prior to running the setup
utility.
 Traffic isolation using separate VLANs is highly recommended.
Hardware Troubleshooting
While diagnosing hardware issues:

 Check for notifications of hardware errors in VxRail Manager, iDRAC, and vCenter.
 Refer to the diagnostic procedures in the VxRail knowledge base to identify the
error and cause of the issue.
 When replacing hardware components, you can use SolVe procedures. Ensure
that you download the latest SolVe, as procedures are periodically updated.
Software Troubleshooting
When investigating a software-related problem:

 Check the knowledge base system.


 Check VMware technical forums.
 If the issue occurred after a recent component replacement or expansion,
validate software and firmware versions. Update, if necessary.
 Current software levels:
 VxRail Manager 4.5.2
 vSphere and vCenter 6.5
 vRealize Log Insight
 Virtual SAN (vSAN) 6.5
Key points covered in this module:

 VxRail Appliances are specifically built for departmental and EDGE applications
as well as small enterprise and mid-market data centers.
 The VxRail Appliance architecture combines hardware and software to create a
system that is simple to deploy and operate.
 Hardware components include nodes and clusters as well as chassis that include
CPU, memory, and drives, connected through customer-provided network switches.
 vSAN delivers flash-optimized, secure shared storage with the simplicity of a
VMware vSphere-native experience.

Dell EMC Software Defined Storage Solutions
Software Defined Storage (SDS) is a policy-based approach where the software manages
resources and functionality independent of the underlying physical storage hardware.

Dell EMC offers Software Defined Storage solutions as an application bundled with hardware or
as software only—allowing customers to independently choose their hardware vendor. The
offerings include the VxFlex OS and Nexenta.

VxFlex OS
The Dell EMC VxFlex OS is a software-only solution that uses LAN and existing local storage—
turned into shared block storage—to create a virtual SAN.

Nexenta
Nexenta is an open-source software-defined storage solution that delivers fully featured
file-based and block-based storage services.

Introduction to VxFlex OS
Traditional SAN storage offers high performance and high availability required to
support business applications, hypervisors, file systems, and databases. But a SAN
does not provide the massive scalability and flexibility required by some modern
enterprise data centers.
Dell EMC VxFlex OS is a hardware-agnostic software solution that uses existing server
storage in application servers to create a server-based Storage Area Network (SAN).
This software-defined storage environment gives all member servers access to all
unused storage in the environment, regardless of which server the storage is on.
VxFlex OS Software Components
VxFlex OS is comprised of three main components:

 Meta Data Manager (MDM)


The MDM allows administrators to configure and monitor VxFlex OS. It can be set up in
Single Mode on a single server, or in a redundant Cluster Mode—three members on
three servers or five members on five servers.
 VxFlex OS Data Client (SDC)
The SDC is a lightweight, block device-driver that presents VxFlex OS shared block
volumes to applications. The SDC runs on the same server as the application. This
enables the SDC to fulfill I/O requests issued by the application regardless of where the
particular blocks physically reside.
 VxFlex OS Data Server (SDS)
The SDS manages the local storage that contributes to the VxFlex OS storage pools.
The SDS runs on each of the servers that contribute storage to the VxFlex OS system.
The SDS performs the back-end operations that the SDCs request.

Meta Data Manager (MDM)


The MDM configures and monitors the VxFlex OS system. It contains all the metadata required
for system operation.

MDM is responsible for data migration, rebuilds, and all system-related functions, but user data
never passes through the MDM.

The number of MDM entities can grow with the number of nodes. To support high availability,
three or more instances of MDM run on different servers.

Note: The MDM can be configured in single mode on a single server or in redundant cluster mode.
VxFlex OS Storage Data Client (SDC)
The SDC exposes VxFlex OS shared block volumes to applications. SDC is a block
device driver that can run alongside other block device drivers.

 SDC runs on the same server as the application. This enables the application to
issue an I/O request, which is fulfilled regardless of where the particular blocks
physically reside.
 SDC communicates with other nodes (beyond its own local server) over a TCP/IP-
based protocol, so it is fully routable.
 There is no interference with I/Os directed at traditional SAN LUNs that are not
part of the VxFlex OS configuration at the site.
 Users may modify the default VxFlex OS configuration parameter to enable two
SDCs to access the same data. The SDC is the only VxFlex OS component that
applications see in the data path.

VxFlex OS Storage Data Server (SDS)


 The SDS owns local storage that contributes to the VxFlex OS storage pools.
 Within the VxFlex OS virtual SAN, an instance of the SDS runs on every server
that contributes local storage space (HDDs, SSDs, or PCIe flash cards) to the
aggregated pool of storage.
 The SDS manages the capacity of a single server and performs the back-end I/O
operations as requested by the Storage Data Client (SDC). It acts as a back-end for
data access.
Storage Media
In VxFlex OS, storage media may be HDD, SSD, or PCIe flash cards in the form of Direct
Attached Storage (DAS) or external storage.

 A Read Hit is a read to the VxFlex OS system (SDS) where it finds the requested
data already in the server's read cache space. Read hits run at memory speeds, not
disk speeds, and there are no disk operations required.
 A Read Miss is a read to the VxFlex OS system when requested data is not in
cache. It must be retrieved from physical disks (HDD or SSD).
 When there is a Read Miss, the data that is fetched from the physical disk leaves
a copy in the read cache to serve other I/O pertaining to the same data.
 If any I/O is served from the host read cache, then those I/Os are counted as
Read Hits. Any other I/O is counted as a Read Miss.
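A minimal sketch of the read-hit/read-miss accounting described above is shown below; the functions and block size are illustrative stand-ins, not VxFlex OS code.

```python
# Minimal sketch of the read-hit / read-miss behavior described above.
# Illustrative only; not VxFlex OS code.
read_cache = {}      # block address -> cached data
hits = misses = 0

def read_from_disk(addr: int) -> bytes:
    return b"\x00" * 4096            # stand-in for a physical HDD/SSD read

def read_block(addr: int) -> bytes:
    global hits, misses
    if addr in read_cache:           # Read Hit: served at memory speed
        hits += 1
        return read_cache[addr]
    misses += 1                      # Read Miss: fetch from the physical disk...
    data = read_from_disk(addr)
    read_cache[addr] = data          # ...and leave a copy in the read cache
    return data

read_block(42)
read_block(42)
print(hits, misses)                  # 1 hit, 1 miss
```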

VxFlex OS Architecture: Write I/O


 I/O from the application is serviced by the SDC that runs on the same server as
the application.

 The SDC fulfills the I/O request regardless of where any particular block
physically resides.
 When the I/O is a Write, the SDC sends the I/O to the SDS where the primary
copy is located.
 The primary SDS sends the I/O to the local drive and in parallel, another I/O is
sent to the secondary mirror.
 After an acknowledgment (ack) is received from the secondary SDS, the primary
SDS acknowledges the I/O to the SDC. Then an acknowledgement is sent to the
application.
Note: Writes are buffered in host memory only when write caching is enabled. Writes in cache
can be used to serve future reads.
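The acknowledgment sequence above can be summarized as a short Python flow; the function names stand in for the SDC and SDS roles and are not actual VxFlex OS components.

```python
# Sketch of the mirrored write path described above. Illustrative only.
def secondary_sds_write(block: int, data: bytes) -> str:
    return "secondary ack"                 # secondary SDS commits its mirror copy

def primary_sds_write(block: int, data: bytes) -> str:
    local_ack = "local drive ack"          # primary writes to its local drive...
    mirror_ack = secondary_sds_write(block, data)  # ...and forwards to the mirror in parallel
    assert local_ack and mirror_ack
    return "primary ack"                   # primary acks the SDC only after both commits

def sdc_write(block: int, data: bytes) -> str:
    ack = primary_sds_write(block, data)   # SDC sends the write to the primary SDS
    return "ack to application" if ack else "error"

print(sdc_write(7, b"payload"))
```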

Command Line Interface (CLI)


The VxFlex OS CLI can be used to perform the entire set of configuration, maintenance,
and monitoring activities in a VxFlex OS system. AMS CLI can be used to query and
perform actions on the hardware components of the Ready Node system.

Graphical User Interface (GUI)


The VxFlex OS GUI enables standard configuration and maintenance tasks and provides
monitoring of the health and performance of the storage system. The GUI presents a view of
the entire system, with access to different elements through the menu system.
VMware Plug-in
For VMware environments, VxFlex OS offers a plug-in for vSphere Web Client. The
plug-in enables VMware administrators to perform basic VxFlex OS management
activities directly from the vSphere Web Client.

VxFlex OS Ready Node


Nodes are the basic hardware units that are used to install and run the VMware ESX
hypervisor and the VxFlex OS system. They can be the same servers used for the
applications (server convergence), or a dedicated cluster.
VxFlex OS Ready Node provides different server node configurations that are designed
to be density or capacity optimized.

There are two different types of VxFlex OS Ready Nodes.

Key points covered in this module are:

 Dell EMC VxFlex OS is a hardware-agnostic software solution that uses existing server storage in application servers to create a server-based Storage Area Network (SAN).
 VxFlex OS is comprised of three main components: Meta Data Manager (MDM),
VxFlex OS Data Client (SDC), VxFlex OS Data Server (SDS).
 In VxFlex OS, storage media may be HDD, SSD, or PCIe flash cards in the form
of Direct Attached Storage (DAS) or external storage.
 Dell EMC sells VxFlex OS ready PowerEdge servers that can easily be added as
nodes to a VxFlex OS cluster framework.

Introduction to Nexenta
Nexenta open-source software solutions, NexentaStor and NexentaEdge, deliver fully
featured file-based and block-based storage services that include snapshots, cloning,
and deduplication.
Key Features:

 It can scale from terabytes to petabytes.


 The software can be deployed on bare metal hardware.
 It can replace the OS of legacy all-flash, hybrid, and all-disk storage appliances.
 It adds an enterprise grade file service to traditional SAN and hyper-converged
infrastructures.

NexentaStor OS Introduction
NexentaStor OS is designed for network-attached storage (NAS) and Storage Area Network (SAN) based deployments. It creates storage virtualization pools consisting of multiple HDDs and SSDs.

Data can be organized in a flexible number of file systems and block storage.
Files can be accessed over the widely used Network File System (NFS) and
CIFS protocols, while block storage uses iSCSI or Fibre Channel protocols.
NexentaStor allows online snapshots to be taken of data and replicated to other
systems.

The main NexentaStor management components are described below.

Nexenta Management View (NMV) is a GUI that enables you to perform most of the
NexentaStor operations. You can access NMV and manage NexentaStor from Windows, UNIX,
or Macintosh operating systems.

Nexenta Management Console (NMC) is a command line interface that enables you to
perform advanced NexentaStor operations. NMC provides more functionality than NMV.
The following activities are only available in the NMC:

 System upgrades
 Restarting NexentaStor services
 Advanced configurations
 Expert mode operations
The Nexenta Management Server (NMS) controls all NexentaStor services and processes. It
receives and processes requests from NMC and NMV and returns the output.

Nexenta provides the following licenses:

 Free Trial: Provides a 45-day trial (ISO image) of the Enterprise Edition.
 Enterprise Edition: Provides a perpetual license with a capacity-based (TB) limit and includes technical support.
 Community Edition: This option has most of the main features but supports
only 18 TB of storage. Technical support is not available for this edition.

NexentaEdge
NexentaEdge is Nexenta's scalable object software platform.

NexentaEdge provides iSCSI block services to store data using an object storage API
that is designed for rapid scalability.

NexentaEdge was designed for OpenStack deployment along with full support for both
the Swift and S3 APIs.
Most data centers need both NexentaEdge and NexentaStor:

 A high performance NAS that supports a variety of protocols and consolidates legacy applications and data to a single storage platform, delivered with NexentaStor.
 NexentaEdge for storing the petabytes of data that the internet of things (IoT)
generates.
NexentaEdge Deployment
NexentaEdge powered nodes can be deployed in three configurations:

 Dedicated nodes enable the Gateway services and Storage services to run on bare metal servers.
 Mixed nodes enable the Gateway and Storage services to coexist on a bare
metal server with both public and private networks.
 Container Converged model enables deployment using Docker container
technologies to run both NexentaEdge services and Application containers directly on
the same bare metal servers.
Key Feature: Quick Erasure Coding
Quick Erasure Coding (QEC) breaks data into pieces that are then put back together
when the data is needed. QEC eliminates the performance penalty that is associated
with accessing objects in traditional erasure coded solutions. It also offers data
protection against multiple hardware failures with 30% overhead.

Some of the other benefits are:

 It is designed for efficient storage of cold data—data that has not been accessed
in a certain amount of time.
 Encoding is completed as post-processing when data goes cold.
 It supports a fixed set of data/parity combinations.

Dell EMC Build-to-Order Configurations


Dell EMC hardware and Nexenta software options are pre-configured, pre-sized, and certified to
offer customers a range of solutions with 44 TB to 1.92 PB raw capacity.

 Built using industry-leading PowerEdge servers.


 Meets large capacity and performance requirements with a choice of CPU, RAM,
read/write cache, and drive options.
 No additional software license costs.
 Expand capacity non-disruptively by adding more Dell MD series storage enclosures.
The key points covered in this module include:

 NexentaStor OS can be organized in a flexible number of block storage and file systems, and creates storage virtualization pools consisting of multiple HDDs and SSDs.
 NexentaEdge provides iSCSI block services that store data using an object
storage API and is designed for rapid scalability.
 Quick Erasure Coding is designed for efficient storage of cold data and avoids
the read penalty that is associated with traditional erasure coding.
 Dell EMC hardware and Nexenta software options are pre-configured, pre-sized,
and certified to offer customers a range of solutions using PowerEdge servers and MD
series storage enclosures.

Storage Support Best Practices

Overview
 SolVe, or “Solutions for Validating your Engagement," is an interactive,
standalone application that generates specific, step-by-step service procedures for
legacy EMC products.
 The SolVe Desktop combines the Product Generators (PG) of Dell EMC products
into one application.
 These procedures come directly from Dell EMC product support engineers and
include videos, Customer Replaceable Unit (CRU) Instructions, related knowledgebase
content, and warnings for known issues.
 Legacy EMC products that are supported by SolVe Desktop include NetWorker,
VPLEX, Connectrix, VNX, Data Domain, Isilon, VMAX, and many others.

SolVe Desktop Download

To download the SolVe Desktop Application, log in to the EMC support page. (Preferred browser: Google Chrome)

On the main page of the Support Site, click 'SolVe' and follow the installation steps.

When it launches, the SolVe Desktop authenticates the user as an Employee, Customer, or Partner. It provides appropriate content based on the user’s authentication rights.

SolVe Desktop Features


SolVe features include Top 10 Service Topics and a Help option.
 Top 10 Service Topics gives quick access to resources that support common
calls.
 The list appears at the bottom of the SolVe Desktop and can also be opened from
the menu at the top of the screen.
 To access documentation for a top ten topic, click the article number to view the
Knowledgebase article.

From the main tool bar, the Help tab includes:

 Lessons - a link to videos


 Frequently Asked Questions
 Release Notes
 Contact EMC SolVe Support option

SolVe Product Download

To access procedures on specific products, you must first download the documentation.
SolVe Procedure Generation

Key points covered in this module include:

 SolVe is an interactive, standalone application that generates specific, step-by-step service procedures for Dell EMC products.
 Products that are supported by SolVe include NetWorker, VPLEX, Connectrix,
VNX, Data Domain, Isilon, and VMAX.
 SolVe features include Top 10 Service Topics and a Help option.
 After product information is downloaded, a procedure guide is generated
containing the Knowledgebase articles.

Replaceable Components Introduction


Replaceable components are individual parts in a storage array that can be replaced without
replacing the entire device.

These units are called Field Replaceable Units (FRU), or Customer Replaceable Units (CRU),
although these terms are often used interchangeably.

Note: Always refer to product documentation for the proper unit replacements steps.

A FRU is a replaceable hardware component that is replaced by a Dell EMC authorized technician.
Some of the commonly replaced storage FRUs are:

 Disk Drives
 Power Supplies
 I/O Cards
 Fans
 Controllers
 System boards
A CRU is designed for quick, safe, and easy part replacement without special skills,
knowledge, or tools. CRU parts may be removed and replaced by the customer.
CRUs include:

 Controller Module (certain models only)


 Bezel
 Rail Kit
 Serial Cable
 HDD blank

Hot-Swappable Components
Replaceable components are divided into two categories:

 Hot-swappable - Can be replaced while the system is up and running without a shutdown or reboot, to avoid a business outage.
 Non Hot-swappable - Requires a system shutdown.
Dell EMC uses orange color-coding to identify hot-swappable components, such as
cooling fan units.

Note: Always refer to Owner's or Service Manual for the proper replacement procedure.

Replacing Hot-Swappable Components


Replacement of hot-swappable components is designed to be quick and easy.


Replacing Non Hot-Swappable Components
Replacing components that require a system shutdown typically requires more planning
and preparation. A typical replacement process follows a defined sequence of steps that
verifies the replaced component is functioning properly and minimizes the amount of time
that the system is down.

Cabling Best Practices Introduction


Cabling is the process of connecting the storage controller to the network. If the storage
array also has enclosures, these are cabled to the controller as well.
The process of connecting the array to the network switch is called front end cabling.
Connecting the array to the enclosures is called back end cabling.

 Front End: Switches on the front end can be connected to the controller array
via Fibre Channel (FC), FCoE, iSCSI, or SAS.
 Back End: The enclosures on the back-end are connected to the controller via
SAS.

Front End Cabling Best Practices


Front end cabling strategies differ depending on several factors, including:

 Number of controllers
 Type of interface: Fibre Channel, iSCSI, or SAS
 Number of switches or fabrics
 Transport type: Virtual Port or Legacy Mode
 Storage Center version and features enabled
Note: Always refer to the product reference guides for specific cabling procedures.

Virtual Port Mode Cabling


In virtual port mode, each port has a physical world-wide name (WWN) and a virtual WWN.
During normal conditions, all ports are accessible for processing I/O requests. This is a
performance improvement over Legacy port mode, where not all ports can process I/O.

If a port or controller failure occurs, the virtual WWN moves to a working physical WWN in the
same fault domain and continues sending I/O.
When the failure is resolved and ports are rebalanced, the virtual port returns to the
preferred physical port.
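The failover behavior can be illustrated with a small sketch; the WWN values and the fault-domain table below are hypothetical, not values from an actual Storage Center.

```python
# Illustrative only: re-homing a virtual WWN onto a surviving physical port
# in the same fault domain after a port failure. All names are hypothetical.
fault_domain = {
    "phys-wwn-A": {"virtual": "virt-wwn-1", "up": True},
    "phys-wwn-B": {"virtual": None,         "up": True},
}

def fail_over(failed_port: str) -> None:
    moved = fault_domain[failed_port]["virtual"]
    fault_domain[failed_port].update(up=False, virtual=None)
    for port, state in fault_domain.items():
        if state["up"] and state["virtual"] is None:
            state["virtual"] = moved          # virtual port keeps serving I/O here
            print(f"{moved} moved to {port}")
            return
    raise RuntimeError("no surviving port in this fault domain")

fail_over("phys-wwn-A")
```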

Common Cabling Configurations: SC9000


The SC9000 can be cabled in virtual port mode in either a single or dual fabric configuration. A
fabric refers to the server-switch-controller path.

In a Single Fabric FC connection, the controllers are connected to a single switch. There is only
one FC path available so there is only one fault domain.

This configuration protects against the failure of a port or controller, but does not protect against
the failure of the switch.
In a Dual Fabric FC configuration, two switches are used, creating two fabrics. This configuration
is recommended because it prevents an FC switch failure or storage controller failure from causing
an outage.
Back End Cabling Best Practices
While back end cabling consists of SAS connections between controller and enclosures,
there are some factors to consider when identifying the cabling strategy, including:

 Number of ports per SAS card


 Number of SAS cards
 Number of chains
There are two types of SAS cable connectors that connect the controllers to the enclosures.

SAS cable connectors are typically used with 6 Gb expansion enclosures, including the SC200
and SC220.
Mini-SAS HD cable connectors are typically used with 12 Gb expansion enclosures, including
the SC400, SC420 and SC460.

Back End SAS Chains


The SAS connection between a storage system and the expansion enclosures is
referred to as a SAS chain.

A SAS chain is made up of two paths—the A side and B side. The two separate paths
create redundancy. Each path can handle the data flow and each controller has full
access to all enclosures.
SAS chain limits are based on the number of disks within a chain. For example, the
SC9000 has a max disk limit of 192 per chain, while the SCv2020 has a max limit of 168
disks per chain.
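A simple capacity check against these per-chain limits might look like the sketch below; the limits dictionary repeats only the two examples quoted above, and the enclosure disk counts are hypothetical.

```python
# Check a planned chain against the per-chain disk limits quoted above.
# The limits repeat the two examples from the text; disk counts are hypothetical.
CHAIN_DISK_LIMITS = {"SC9000": 192, "SCv2020": 168}

def chain_within_limit(model: str, enclosure_disks: list) -> bool:
    """enclosure_disks: number of disks installed in each enclosure on the chain."""
    return sum(enclosure_disks) <= CHAIN_DISK_LIMITS[model]

print(chain_within_limit("SC9000", [60, 60, 60]))    # 180 disks -> True
print(chain_within_limit("SCv2020", [84, 84, 24]))   # 192 disks -> False
```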

Back End Cabling Best Practices: SC9000


This example shows an SC9000 single chain configuration with dual controllers, two
enclosures, and a two port SAS card.

 The connection between a controller and an enclosure should be on the same ports for each chain of enclosures.
 In this example, you can see that port 1 of controller 1 is connected to port
1 of enclosure 1 (orange path).
 The connection between two enclosures should be on different ports.
 In this example, you can see that port 2 of enclosure 1 is connected to
port 1 of enclosure 2 (orange path).

If a controller has multiple SAS cards, each path should start and end in the same card slots
between the controller and the expansion enclosures (1:1, 2:2). See the red path as an
example.
When there are multiple SAS cards connecting to multiple chains, put all the A-side paths on
one card and all the B-side paths on the second card to provide SAS card redundancy.
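The two port-mapping rules above can be expressed as a small validation sketch; the device names, port numbers, and connection list are hypothetical and only illustrate the checks.

```python
# Illustrative check of the two cabling rules described above:
#  - controller-to-enclosure hops use matching port numbers
#  - enclosure-to-enclosure hops use different port numbers
# The connection tuples are hypothetical.
connections = [
    (("controller-1", 1), ("enclosure-1", 1)),   # same port number: OK
    (("enclosure-1", 2), ("enclosure-2", 1)),    # different ports: OK
]

def check_cabling(conns) -> None:
    for (src, src_port), (dst, dst_port) in conns:
        if src.startswith("controller") and src_port != dst_port:
            print(f"warn: {src}:{src_port} -> {dst}:{dst_port} should use matching ports")
        if src.startswith("enclosure") and src_port == dst_port:
            print(f"warn: {src}:{src_port} -> {dst}:{dst_port} should use different ports")
    print("check complete")

check_cabling(connections)
```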
Key points covered in this module include:

 The process of connecting the array to the network switch is called front end
cabling.
 Connecting the array to the enclosures is called back end cabling.
 Virtual port mode provides port and storage controller redundancy by connecting
multiple active ports to each Fibre Channel or Ethernet switch.
 The SAS connection between a storage system and the expansion enclosures is
referred to as a SAS chain.
 A SAS chain is made up of two paths—the A side and B side. The two separate
paths create redundancy.
 Refer to the product reference guides for specific cabling procedures.

LED Overview
LEDs provide a quick way to check overall hardware health. Understanding where to locate and
identify LEDs for each hardware component—and what the colors and patterns indicate—are
key steps to making a fast diagnosis and addressing issues.

Most replaceable units, such as power supplies, have LEDs to indicate the status of the unit.

Because various product models and server generations may have different LED indicators,
always refer to the model product information for specific LED diagnostic and troubleshooting
guidelines.
To provide a general understanding of LED diagnostics, this module identifies the status
indicators on a PowerEdge server and a Storage Center array. Each hardware model may be
different.

PowerEdge R740 Front Panel LEDs


The front panel of this PowerEdge server contains LEDs that indicate power status,
overall system health, and individual drive activity. These LEDs are located in three
areas:

System ID/health:

 Solid blue indicates that the system is turned on, and is healthy.


 Blinking blue indicates that the system ID mode is active.
 Solid amber indicates that the system is in fail-safe mode.
 Blinking amber indicates that the system is experiencing a fault.
Status indicators turn amber to indicate a problem.
Each hard drive has an activity LED and a status LED. The Activity LED indicates whether the hard drive is currently in use. The Status LED indicates the power condition of the hard drive.
PowerEdge R740 Back Panel LEDs
The back panel of the server also contains LEDs that indicate the status of the Network
Interface Card (NIC) ports and the Power Supply Units (PSU).

Each NIC has two indicators that provide information about activity and link status. The
activity LED indicates whether data is flowing through the NIC, and the link LED indicates the
speed of the connected network.
SC7020 Front Panel LEDs
The front panel of SC7020 storage array contains LEDs that indicate power status, overall
system health, and individual drive activity. These LEDs are located in two areas:
Quick Reference: Common Chassis LED Indicators
