Dell 2000 Storage
Thin provisioning is the process of allocating storage capacity on demand rather than
reserving large amounts of space up front in anticipation of future need.
Tiering is the process of placing data on the appropriate storage tier based on its usage
profile. Data that is frequently accessed—hot data—resides on fast drives, and data that
is rarely accessed—cold data—resides on slower drives.
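As a simple illustration of the placement decision (the access-count threshold and tier names below are hypothetical, not Dell's actual tiering engine), hot data can be mapped to a fast tier and cold data to a slow tier:

```python
# Minimal tiering sketch; illustrative only, not the actual Dell tiering engine.
# Assumes a hypothetical access-count threshold separating hot and cold data.

HOT_THRESHOLD = 100  # accesses per day (hypothetical)

def choose_tier(accesses_per_day: int) -> str:
    """Place frequently accessed (hot) data on fast drives, cold data on slower drives."""
    return "ssd_tier" if accesses_per_day >= HOT_THRESHOLD else "nl_sas_tier"

print(choose_tier(500))  # hot data  -> ssd_tier
print(choose_tier(3))    # cold data -> nl_sas_tier
```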
Storage Performance
Latency: Each request takes time to complete. The time between issuing a
request and receiving the response is latency.
It is measured in milliseconds (ms).
The lower the latency, the higher the performance.
IOPS: Input/output operations per second is abbreviated as IOPS.
The higher the IOPS, the higher the performance.
Throughput: Throughput is the data transfer rate per second. It is expressed in
megabytes per second (MB/s) or gigabytes per second (GB/s).
The higher the throughput, the higher the performance.
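These three metrics are related: at a given I/O size, throughput is roughly IOPS multiplied by the I/O size, and at a given queue depth, average latency falls as IOPS rises. The quick calculation below uses hypothetical workload numbers purely to illustrate the relationship; it is not a sizing formula.

```python
# Back-of-the-envelope relationship between IOPS, I/O size, throughput, and latency.
# All workload numbers are hypothetical.

iops = 20_000          # I/O operations per second
io_size_kb = 64        # size of each I/O in KB

throughput_mb_s = iops * io_size_kb / 1024
print(f"Throughput ~ {throughput_mb_s:.0f} MB/s")   # ~ 1250 MB/s

# Little's-law style estimate: average latency for a given outstanding queue depth.
queue_depth = 32
latency_ms = queue_depth / iops * 1000
print(f"Average latency ~ {latency_ms:.2f} ms")      # ~ 1.60 ms
```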
When a storage component fails, users must still be able to access business-critical data. It is
essential to have highly available, resilient storage solutions.
Backups are copies of production data that are created and retained for the sole
purpose of recovering any lost or corrupted data.
Replication is the process of creating an exact copy of the data to ensure
business continuity in the event of local outage or disaster.
Data archiving is the process of moving data that is no longer active to a
separate storage device for long-term retention.
Hard Disk Drive (HDD)
A hard disk drive is a persistent storage device that stores and retrieves data using
rapidly rotating disks that are coated with magnetic material.
A hybrid drive (SSHD) combines both flash storage and spinning disks into a
single drive to offer both capacity and speed.
New data is typically written to the spinning disk first. After it has been accessed
frequently, it is promoted to the solid-state memory.
Hybrid drives are not frequently used in enterprise-class servers and storage
arrays, and should not be confused with 'hybrid arrays', which are storage enclosures
that contain individual HDDs and SSDs.
Tape Drive
A tape drive reads and writes data onto cassettes or cartridges containing spools
of plastic film coated with a magnetic material. Tape drives provide linear sequential
read/write data access.
A tape drive may be standalone or part of a larger tape library.
Tape is a popular medium for long-term storage due to its relatively low cost and
portability. Organizations typically use tape drives to store large amounts of data for
backup, offsite archiving, and disaster recovery.
Some of the key limitations of tape include:
Low access speed due to the sequential access mechanism.
Lack of simultaneous access by multiple applications.
Degradation of the tape surface due to the continuous contact with the
read/write head.
SAS is a serial point-to-point protocol that uses the SCSI command set and the
SCSI advanced queuing mechanism.
SCSI-based drives, such as SAS and FC, are the best choice for high-
performance, mission-critical workloads.
SAS drives are dual ported. Each port on the SAS drive can be attached to
different controllers. This means that if a single port, connection to the port, or even the
controller fails, the drive is still accessible over the surviving port.
Serial ATA (SATA)
Nearline SAS (NL-SAS) drives are a hybrid of SAS and SATA drives. They have
a SAS interface and speak the SAS protocol, but also have the platters and RPM of a
SATA drive.
NL-SAS drives can be connected to SAS backplanes with the added benefits of
the SCSI command set and advanced queuing, while at the same time offering the large
capacities common to SATA.
NL-SAS has replaced SATA in the enterprise storage world, with all the major
array vendors supporting NL-SAS in their arrays.
The disks within the JBOD can be of any size, and the drives can be pooled together
using disk volume applications. Failure of a single drive, however, can result in the
failure of the whole volume.
DAS is connected directly to a computer through a host bus adapter (HBA), with no
network devices between them.
Only the host that the DAS solution is connected to can share the storage resource.
PowerVault MD Storage
An example of DAS storage is the Dell EMC PowerVault MD series, a highly reliable
and expandable storage solution.
PowerVault MD DAS common features:
You can manage NAS servers from anywhere on the network with a standard web
browser or other management tools. NAS appliance interfaces are simple, with no need
to learn a complex operating system.
Dell EMC NAS Storage Solutions
NX Storage Series
The NX storage series is an entry-level NAS solution.
The NX series is built on PowerVault server hardware and powered by Microsoft Windows
Storage Server (WSS). Systems using WSS are typically referred to as Network Attached
Storage (NAS) appliances.
FS Storage Series
Dell Storage FS Series with Fluid File System (FluidFS) software provides NAS for the
SC Series and PS Series platforms. FS offers best-in-class performance with linear
performance scaling and low cost per file IOPS.
The FS8600 appliance can be configured as either Fibre Channel or iSCSI.
As the name suggests, the controllers within the array control the SAN. Every read or
write command goes through the controller.
The controller:
Most enclosures can house SSDs, HDDs, or both in a hybrid configuration depending
on storage and performance requirements. Drives can be added or replaced without
taking the enclosure offline.
An initiator can be a server or a client that starts the process of data transfer on the
storage network. The initiator sends commands to the target. The command can be an
instruction to read, write, or execute.
A target in this architecture is a storage system. The target device responds to the
initiator by retrieving the data from storage or by allowing the initiator to write the data to
storage. After the data is written, the target sends an acknowledgment back to the initiator.
The purpose of all storage systems is to pool together storage resources and make
those resources available to initiators connected to the storage network.
Storage systems are classified into two types:
Block-based storage systems share storage resources with initiators in blocks, raw
chunks of data, exactly the way a locally installed disk drive would. The host
operating system must format the volume with a file system before the user can
access and interact with the data.
SAN storage arrays are block-based and connect using block-based protocols like:
The NAS shares storage data via TCP/IP-based file-sharing protocols such as Network
File System (NFS) and Common Internet File System (CIFS)/Server Message Block
(SMB).
Communication Links
All SAN components must communicate with each other.
SAS is primarily for back end connectivity between controllers and enclosures. Some
newer controller models offer SAS connectivity on the front end.
SAS can also be used for front end connections in the storage system models that
have controllers with internal storage, including the SC4020, SC7020, SC5020,
SCv2000, and SCv3000. SAS front end connections attach directly to servers, bypassing
switches.
FC SAN Overview
Fibre Channel SAN (FC SAN) is a high-speed network technology that runs on high-
speed optical fiber cables and serial copper cables. The FC technology was developed
to meet the demand for the increased speed of data transfer between compute systems
and mass storage systems.
The FC architecture supports three basic interconnectivity options:
Point-to-Point
In this configuration, two nodes are connected directly to each other. This configuration provides
a dedicated connection for data transmission between nodes. However, the point-to-point
configuration offers limited connectivity and scalability and is used in a DAS environment.
FC-AL (Fibre Channel Arbitrated Loop)
Each device contends with other devices to perform I/O operations. The devices
on the loop must arbitrate to gain control of the loop.
At any given time, only one device can perform I/O operations on the loop,
because each device in the loop must wait for its turn to process a request.
Adding or removing a device results in loop re-initialization, which can cause a
momentary pause in loop traffic.
FC-AL can be implemented by directly connecting the devices to one another in
a ring through cables.
FC-AL implementations may also use FC hubs through which the arbitrated loop
is physically connected in a star topology.
The FC environment uses two types of WWNs: World Wide Node Name (WWNN) and
World Wide Port Name (WWPN).
WWN zoning uses World Wide Names to define zones. The zone members are
the unique WWN addresses of the FC HBA and its targets (storage systems).
Port zoning uses the switch port ID to define zones. In port zoning, access to a
node is determined by the physical switch port to which a node is connected. The zone
members are the port identifiers (switch domain ID and port number) to which FC HBA
and its targets (storage systems) are connected.
Mixed zoning combines the qualities of both WWN zoning and port zoning.
Mixed zoning enables a specific node port to be tied to the WWN of another node.
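To illustrate the difference between the zone types, the sketch below models zone membership as simple data structures; the WWPNs and port IDs are made up, and this is not actual switch configuration syntax:

```python
# Illustrative model of FC zoning (made-up WWPNs and port IDs; not real switch CLI syntax).

# WWN zoning: members are identified by their World Wide Port Names,
# so a host keeps its zone membership even if it moves to another switch port.
wwn_zone = {
    "name": "host1_to_array1",
    "members": {
        "10:00:00:00:c9:12:34:56",  # HBA WWPN (initiator)
        "50:00:d3:10:00:ab:cd:01",  # storage port WWPN (target)
    },
}

# Port zoning: members are identified by (switch domain ID, port number),
# so whatever is plugged into those physical ports is in the zone.
port_zone = {
    "name": "host1_to_array1",
    "members": {(1, 4), (1, 12)},   # (domain ID, port number)
}
```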
iSCSI SAN Network
IP SAN uses Internet Protocol (IP) for the transport of storage traffic. It transports block
I/O over an IP-based network via protocols such as iSCSI.
Advantages of using IP for storage networking:
An iSCSI gateway provides connectivity between a compute system with an iSCSI
initiator and a storage system with an FC port. This allows the initiators to exist in an IP
environment while the storage systems remain in an FC SAN environment.
SCSI is the command protocol that works at the application layer of the Open
System Interconnection (OSI) model.
The SCSI commands, data, and status messages are encapsulated into TCP/IP
and transmitted across the network between the initiators and the targets.
iSCSI is the session-layer protocol that initiates a reliable session between
devices that recognize SCSI commands and TCP/IP. The iSCSI session-layer interface
is responsible for handling login, authentication, target discovery, and session
management.
TCP is used with iSCSI at the transport layer to provide reliable transmission.
TCP controls message flow, windowing, error recovery, and re-transmission.
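A highly simplified sketch of this layering is shown below; the field names are illustrative and do not reflect the real iSCSI wire format, but they show a SCSI command wrapped in an iSCSI PDU that is in turn carried inside a TCP segment:

```python
# Highly simplified layering sketch; field names are illustrative, not the real iSCSI wire format.
from dataclasses import dataclass

@dataclass
class ScsiCommand:          # application layer: the SCSI command itself
    opcode: str             # e.g., READ or WRITE
    lba: int                # logical block address
    length: int             # number of blocks

@dataclass
class IscsiPdu:             # session layer: iSCSI wraps the SCSI command in a PDU
    target_iqn: str
    session_id: int
    payload: ScsiCommand

@dataclass
class TcpSegment:           # transport layer: TCP carries the PDU reliably over IP
    dst_port: int           # iSCSI commonly uses TCP port 3260
    data: IscsiPdu

segment = TcpSegment(
    dst_port=3260,
    data=IscsiPdu(
        target_iqn="iqn.2001-05.com.example:array1",
        session_id=1,
        payload=ScsiCommand(opcode="READ", lba=2048, length=16),
    ),
)
print(segment.data.payload.opcode)  # the SCSI command travels innermost
```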
iSCSI Protocol Data Unit (PDU)
A PDU acts as the basic information unit of the iSCSI and is used to communicate
between initiators and targets.
The PDU functions to:
Introduction to PS Series
The Dell EMC PS Series delivers the benefits of consolidated networked storage in a self-
managing, iSCSI SAN. With its unique peer storage architecture, the PS Series delivers high
performance and availability in a flexible environment with low cost of ownership.
The PS Series is an excellent solution for small to medium businesses looking to migrate from
DAS or NAS, streamline data protection or expand capacity.
Benefits of PS Series
The PS Series storage arrays support Tiered Storage, which enables you to define multiple tiers
or pools of storage in a single PS Series group (SAN). Tiered Storage provides administrators
with greater control over how disk resources are allocated. While online, volumes can be
allocated and moved between tiers of storage, providing high levels of service.
Benefits of PS Series
The result is an intelligent storage array that can deliver rapid installation and simple
SAN management. Using patented, page-based data mover technology, members of a
SAN work together to automatically manage data and load balance across resources.
Because of this shared architecture, enterprises can use PS Series arrays as modular
building blocks for simple SAN expansion. This architecture provides the basis for
numerous features and capabilities, including:
Peer deployment
Control
Provisioning
Protection
Integration
PS Architecture: Groups, Pools, Members
The foundation of the PS Series storage solution is the PS Series Group. The group is
the starting point from which storage resources are assigned and allocated.
Volumes may be created in a storage pool with a single group member or with
multiple group members. The administrator gives each volume a name, a size, and a
storage pool assignment.
When an administrator creates a volume, Thin Provisioning sizes it for the long-term
needs of the application without initially allocating the full amount of physical storage.
Instead, as the application needs more storage, capacity is allocated to the volume from
a free pool.
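A minimal sketch of the idea, assuming a hypothetical 16 MB allocation page and made-up sizes (this is illustrative, not the PS Series allocation algorithm): the volume reports its full logical size to the application, but physical capacity is drawn from the free pool only as data is written.

```python
# Illustrative thin-provisioning sketch; not the actual PS Series allocation algorithm.

class ThinVolume:
    PAGE_MB = 16  # hypothetical allocation granularity

    def __init__(self, name: str, logical_size_mb: int, free_pool: dict):
        self.name = name
        self.logical_size_mb = logical_size_mb   # size reported to the application
        self.allocated_pages = set()             # pages backed by physical storage
        self.free_pool = free_pool

    def write(self, offset_mb: int):
        """Allocate a physical page from the free pool only on first write."""
        page = offset_mb // self.PAGE_MB
        if page not in self.allocated_pages:
            if self.free_pool["free_mb"] < self.PAGE_MB:
                raise RuntimeError("free pool exhausted")
            self.free_pool["free_mb"] -= self.PAGE_MB
            self.allocated_pages.add(page)

pool = {"free_mb": 10_000}
vol = ThinVolume("app-data", logical_size_mb=500_000, free_pool=pool)  # 500 GB logical
vol.write(0)
vol.write(100)
print(pool["free_mb"])  # 9968: only 32 MB consumed so far, despite the 500 GB logical size
```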
PS Series group
PS Series members
PS Series storage pools
PS Series single-member pool
PS Series multimember pool
Storage space
Volume
Snapshot collections
Snapshot
Thin-provisioned volume
Dynamic load balancing lets the group quickly find and correct bottlenecks as the
workload changes with no user intervention or application disruption.
A group provides three types of load balancing within the arrays in a storage pool:
Capacity load balancing: The group distributes volume data across disks and
members, based on capacity.
Performance load balancing: The group tries to store volume data on members
with a RAID configuration that is optimal for volume performance, based on internal
group performance metrics.
Network connection load balancing: The group distributes iSCSI I/O across
network interfaces, minimizing I/O contention and maximizing bandwidth.
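As a simple illustration of the capacity aspect only (hypothetical member names and free-space figures, not the actual group algorithm), each new slice of volume data can be directed to the member with the most free capacity:

```python
# Illustrative capacity load-balancing sketch; hypothetical member data,
# not the real PS Series group algorithm.

members = [
    {"name": "member-A", "free_gb": 1200},
    {"name": "member-B", "free_gb": 1600},
    {"name": "member-C", "free_gb": 800},
]

def place_slice(members, slice_gb):
    """Place the next slice of volume data on the member with the most free capacity."""
    target = max(members, key=lambda m: m["free_gb"])
    target["free_gb"] -= slice_gb
    return target["name"]

for _ in range(4):
    print(place_slice(members, 500))
# member-B, member-A, member-B, member-A
```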
The following is a brief description of each entry in the PS Series line.
The PS4210 Series is the first hybrid storage array in the entry-level PS4000 Series.
The PS4210 is aimed at small-to-medium data centers and remote offices that want to upgrade
to a SAN.
The Dell EMC PS6210 arrays offer IT generalists the ability to manage more data with fewer
resources. It has the capability to tightly integrate with common application environments and
the flexibility to be used with various operating systems.
The PS6610 arrays combine next-generation powerful controllers and an updated dense
storage architecture to meet large data repository requirements.
Dell EMC PS-M4110 Blade Storage System:
The PS-M4110 Series delivers all the functionality and enterprise-class features of a traditional
PS array in a blade form factor. It is the industry’s only fully redundant, enterprise-class storage
array designed to fit inside a blade chassis.
The PS4210 has a similar design to the PS6210, using the same dual-controller architecture and
chassis hardware.
The FS7610 is built on Fluid File System (FluidFS) software, a high-performance distributed file
system that optimizes file access performance. It does not have the strict limits on file system
size and share size inherent in other NAS solutions, so the capacity required to store common
enterprise data can be dramatically decreased.
The storage infrastructure can be managed through a single FS7610 Group Manager interface
to improve productivity. It is easy to configure and manage iSCSI, CIFS/SMB, and NFS storage.
PS Series Firmware
Dell Storage PS Series Firmware is integrated across the entire family of PS Series
storage arrays to virtualize SAN resources. System resources are automatically adjusted
to optimize performance and capacity and reduce IT administrative requirements.
PS Series Firmware is installed on each array and provides capabilities such as volume
snapshots, cloning, and replication to protect data in the event of an error or disaster.
Infrastructure Management
PS Series Group Manager
Group Manager is a management tool that is integrated with the PS Series Firmware that
provides detailed information about block and file storage configuration. It is an easy-to-use tool
for storage provisioning, data protection and array management.
The Group Manager is available as a CLI or GUI with search functionality, accessible through
web browsers with a connection to the PS Series SAN.
PS Series SAN Headquarters (SAN HQ) provides in-depth reporting and analytics, consolidated
performance, and robust event monitoring across multiple PS Series groups.
From a single GUI, information about the monitored PS Series groups is available at a glance.
This includes configuration and performance of pools, members, disks, and volumes, as well as
any alerts. If action is required, the PS Series Group Manager can be launched directly from SAN
HQ.
Launch Storage Update Manager directly from the SAN HQ Group list
Datastore Manager
Auto-Snapshot Manager
Virtual Desktop Deployment Manager
A drive failure is one of the most common PS hardware issues encountered. Properly
diagnosing a failed drive is critical to quick issue resolution.
Drive failure can be confirmed by the following:
For 2.5-inch drives (installed vertically), the drives are numbered 0–23, left to
right.
For 3.5-inch drives (installed horizontally), the drives are numbered from left to
right and top to bottom, starting with 0 on the upper left side.
Hardware Troubleshooting Basics
Identifying Faulty Power Supplies
LEDs are also a quick way to diagnose issues with power supply modules. Locate the power
supply module on the back of the array and check the LED activity. If the DC Power LED is off
and the Fault LED is amber, the module most likely needs to be replaced.
Follow standard operating procedures to remove and replace the faulty module.
Troubleshooting PS Series Groups in DSM
There are several logs available to help troubleshoot issues that may arise with the PS Series
Group. By default, the logs are disabled. The user can enable debug logs for PS Groups from
the Data Collector Manager (DCM).
Types of logs available:
Introduction to SC Series
Dell EMC SC Series is a versatile storage solution in the midrange storage portfolio.
The benefits of the SC Series include:
Value Storage: Models including a "v" in the name are geared toward smaller
businesses with entry-level storage needs. The controller and storage disks exist in the
same chassis.
same chassis.
Storage Arrays: These models include the controller and storage disks in the
same chassis. Expansion enclosures can be added as storage needs increase.
Storage Controllers: These models are controllers only. To function as a SAN,
expansion enclosures must be added.
SCv2000 Series
All three versions of the SCv2000 / SCv2020 / SCv2080 are controller/enclosure combinations.
The last two numbers indicate the quantity of built-in disks.
SCv2000 = 12 disks (2U height)
SCv2020 = 24 disks (2U height)
SCv2080 = 84 disks (5U height)
SCv3000 Series
Both versions of the SCv3000 / SCv3020 are controller/enclosure combinations. The
naming convention for these models is different from other models in the SC Series.
SCv3000 = 16 disks (3U height)
SCv3020 = 30 disks (3U height)
The SCv360:
4U in height
12 Gbps speed
SCv3000/SCv3020 supports up to a maximum of three SCv360 expansion
enclosures.
SC4020
The SC4020 is a controller/enclosure combination 2U in height.
Compatible Enclosures
In addition to its 24 internal drive slots, the SC4020 supports up to 192 additional drives in a
combination of expansion enclosures.
SC200/SC220 enclosures:
2U in height
Requires SCOS 6.2 or newer
6 Gbps speed
Allows a total of 168 disks in a chain
SC200/SC220 cannot be in the same chain with other enclosure models
SC280 enclosure:
5U in height
Requires SCOS 6.4 or newer
6 Gbps speed
Three 14-drive rows in each of two drawers
SC7020
The SC7020 is a storage system that combines a controller and disk enclosure in a single
3U chassis.
2.5-inch SAS hard drives are installed horizontally side by side.
Minimum: seven drives (four if they are SSDs) internally
Maximum: 30 drives internally
SC7020F
The SC7020F is an all-flash version of the SC7020, optimized for SSDs and SEDs; it does
not support spinning HDDs.
Unlike the SC7020, the SC7020F supports all premium SC features without
requiring additional licenses.
SC5020
The SC5020 is a 3U storage system that is intended to replace the SC4020. It uses the
same chassis as the SC7020, so the SC5020 looks similar but has fewer features than
the SC7020.
SC5020
In addition to its 24 internal drive slots, the SC5020 supports up to 192 additional drives in a
combination of expansion enclosures.
SC8000
The SC8000 is a controller that is 2U in height.
This model is no longer being sold but still exists in many customer environments.
Compatible Enclosures
The SC200 series of enclosures is compatible with the SC8000, along with older models of
enclosures that are no longer supported.
SC9000
The SC9000 is a controller that is 2U in height.
The SC9000 allows customers to build their SAN as they see fit. Dual SC9000s
combined with compatible enclosures make up a custom system.
The SC9000 can be configured as:
All-Flash
Hybrid
HDD
Compatible Enclosures
Newer hardware models require newer SCOS versions. Refer to the table for
specific requirements per controller.
SCOS Compatibility
Storage Tiers
Fluid Data Architecture allows all of the storage available to be used for all of the data.
Data flows between RAID and disk types. Storage Center manages all disks as a single
pool of resources which are accessed by multiple volumes.
Storage Profiles
Storage Profiles are associated with volumes. Data written to that volume is stored
using the RAID level and storage tier defined in that Storage Profile.
The default Storage Profiles cannot be modified, but custom Storage Profiles can be
created.
Core Features
Key points covered in this module include:
The SC8000 and SC9000 are the only models that are controllers only. All other
models include the controller and storage disks in the same chassis.
The Storage Center OS (SCOS) enables the core and licensed features that
make the SC Series function.
Dell Storage Manager (DSM) is the sole management tool used for systems
running SCOS 7.0 or newer.
Data is stored in sectors, sectors are stored in pages, and pages are written
across disks or mirrored depending on the RAID designation.
Storage Center manages all disks as a single pool of resources which are
accessed by multiple volumes.
Data written to a volume is stored using the RAID level and storage tier defined
in the associated Storage Profile.
Dual controllers enhance overall performance and availability through multi-threaded
read-ahead and mirrored write cache.
VxRail
Dell EMC VxRail is a hyper-converged appliance that is tightly integrated with and managed by
VMware vSAN software.
XC Series Introduction
XC Series appliances can be deployed into any data center in less than 30 minutes and
support multiple virtualized, business-critical workloads and virtualized Big Data
deployments.
Standard features include thin provisioning and cloning, replication, data tiering,
deduplication, and compression.
The 14G PowerEdge server provides higher core counts, greater network throughput, and
improved power efficiency.
The PowerEdge server portfolio supports the latest generation of Intel Skylake processors and
the AMD Epyc SP3 processor.
Nutanix delivers web-scale IT infrastructure to medium and large enterprises. The software-
driven virtual computing platform combines compute and storage into a single solution to drive
simplicity in the datacenter.
Customers may start with a few servers and scale to thousands, with predictable performance
and cost. With a patented elastic data fabric and consumer-grade management, Nutanix is the
go-to solution for optimized performance and policy-driven infrastructure.
The XC6420 has 4 nodes per chassis and is available in an all-flash configuration.
XC Architecture: Cluster
Nodes are interconnected into clusters, which process huge amounts of data in a
distributed infrastructure. Each scale-out cluster is composed of a minimum of three
nodes and has no specified maximum.
Built-in data redundancy in the cluster supports high availability (HA) provided by the
hypervisor. If a node fails, HA-protected VMs can be automatically restarted on other
nodes in the cluster.
XC Architecture: DSF
The Nutanix Acropolis OS powers the Distributed Storage Fabric (DSF) feature, which
aggregates local node storage across the cluster. DSF operates via the interconnected
network of Controller VMs (CVMs) that form the cluster.
The Nutanix Acropolis OS is the foundation of the XC platform. It uses the compute, storage,
and networking resources in the XC Series nodes to increase existing application efficiency and
achieve maximum performance benefits. Leftover compute, storage, and networking
resources are also available to power new applications or expand existing ones.
Nutanix Prism is an end-to-end management solution for the XC Series that streamlines and
automates common workflows, eliminating the need for multiple management solutions. Prism
provides all administrative tasks related to storage deployment, management, and scaling from
a single pane of glass.
Powered by advanced machine learning technology, Prism also analyzes system data to
generate actionable insights for optimizing virtualization and infrastructure management.
Cluster Monitoring
The Nutanix web console provides several mechanisms to monitor events in the cluster.
To monitor the health of a cluster, virtual machines, performance, and alerts and events,
the Nutanix Web GUI provides a range of status checks.
The “C” in XC730xd-12C means the node is for storage capacity ONLY. It does
not run workload VMs or virtual desktops.
It is a best practice to connect the management network port for each 12C to the
same switch and then out to all three node management ports.
XC Hardware: SATADOM
The SATA Disk‐On‐Motherboard (SATADOM) shipped with XC Series appliances is intended as
an appliance boot device.
By default, the SATADOM comes with a power cable installed and is set in a read/write position.
The hypervisor boot device is not intended for application use. Adding more write intensive
software to the SATADOM boot disk results in heavy wear on the device beyond design
specifications, resulting in premature hardware failure.
Key points covered in this module include:
Introduction to VxRail
VxRail is a VMware hyper-converged infrastructure appliance delivering an all-in-one IT
infrastructure solution.
Powered by VMware vSAN and Dell EMC PowerEdge servers, VxRail delivers
continuous support for the most demanding workloads and applications.
VxRail is engineered for efficiency and flexibility, and is fully integrated into the VMware
ecosystem.
Deployment
Configuration
Management of distinct components
VBlock/VxBlock are converged platform offerings. These systems integrate enterprise
class compute and networking with Dell EMC storage while offering a choice of network
virtualization.
VxRack System is a rack-scale hyper-converged system with integrated networking. It
is the perfect choice for core data centers that require both enterprise-grade resiliency
and the ability to start small and grow linearly.
VxRail Appliances are specifically for departmental and EDGE applications as well as
small enterprise and mid-market data centers. Like VxRack, they do not contain a
storage array, but instead run a software-defined storage environment on the appliance.
Minimum Configuration
A 3-node VxRail is the minimum configuration and is ideal for proof of concept (POC) projects.
The configuration is easily expandable, but any additional nodes must have the same physical
attributes as the first three nodes.
VxRail Models
The introduction of the Dell PowerEdge server platform adds flexibility to meet any challenge or
deployment scenario. Dell EMC updated the naming and branding of the VxRail to reflect the
use cases they target.
All models are available with All Flash except the storage-dense S series.
On Demand Scaling
Customers can expand their infrastructure on demand by dynamically adding additional
chassis for a maximum of 16 chassis or up to 64 nodes with an RPQ.
This enables organizations to run from 40-200 VMs on a single appliance and up to
3,200 VMs overall with 64 nodes. All nodes within a cluster must have the same storage
configuration, either Hybrid or All Flash.
VMware Software Stack
The VMware solution for hyper-converged infrastructures consists of the following:
vSAN is a software-defined storage solution that pools together VxRail nodes across a VMware
vSphere cluster to create a distributed, shared data store that is controlled through Storage
Policy-Based Management.
An administrator creates policies that define storage requirements like performance and
availability for VMs on a VxRail cluster. vSAN ensures that these policies are administered and
maintained.
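As a simple illustration of the policy idea (the rule names below are generic placeholders, not the exact vSAN rule identifiers), a policy can be modeled as a set of requirements that the cluster must satisfy when placing a VM's storage objects:

```python
# Illustrative storage-policy sketch; rule names are generic placeholders,
# not the exact vSAN rule identifiers.

gold_policy = {
    "name": "gold",
    "failures_to_tolerate": 1,     # keep data available through one host/disk failure
    "stripe_width": 2,             # spread each object across at least two capacity devices
}

def placement_satisfies(policy, replicas, stripes):
    """Check whether a proposed placement meets the policy's availability and striping rules."""
    return replicas >= policy["failures_to_tolerate"] + 1 and stripes >= policy["stripe_width"]

print(placement_satisfies(gold_policy, replicas=2, stripes=2))  # True
```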
vSphere 6.5
VMware vSphere is the brand name for the VMware suite of virtualization products.
The VMware vSphere suite includes:
The VxRail Manager captures events and provides real-time holistic notifications about the state
of virtual applications, virtual machines, and appliance hardware.
VxRail Appliances are specifically built for departmental and EDGE applications
as well as small enterprise and mid-market data centers.
The VxRail Appliance architecture combines hardware and software to create a
system that is simple to deploy and operate.
Hardware components include customer-provided nodes and clusters as well as
chassis that include CPU, memory, and drives.
vSAN delivers flash-optimized, secure shared storage with the simplicity of a
VMware vSphere-native experience.
Dell EMC offers Software Defined Storage solutions as an application bundled with hardware or
as software only—allowing customers to independently choose their hardware vendor. The
offerings include the VxFlex OS and Nexenta.
VxFlex OS
The Dell EMC VxFlex OS is a software-only solution that uses LAN and existing local storage—
turned into shared block storage—to create a virtual SAN.
Nexenta
Nexenta is an open-source software-defined storage solution that delivers fully featured
file-based and block-based storage services.
Introduction to VxFlex OS
Traditional SAN storage offers high performance and high availability required to
support business applications, hypervisors, file systems, and databases. But a SAN
does not provide the massive scalability and flexibility required by some modern
enterprise data centers.
Dell EMC VxFlex OS is a hardware-agnostic software solution that uses existing server
storage in application servers to create a server-based Storage Area Network (SAN).
This software-defined storage environment gives all member servers access to all
unused storage in the environment, regardless of which server the storage is on.
VxFlex OS Software Components
VxFlex OS comprises three main components:
The Meta Data Manager (MDM) is responsible for data migration, rebuilds, and all system-related
functions, but user data never passes through the MDM.
The number of MDM entities can grow with the number of nodes. To support high availability,
three or more instances of the MDM run on different servers.
The Storage Data Client (SDC) runs on the same server as the application. This enables the
application to issue an I/O request, which is fulfilled regardless of where the particular blocks
physically reside.
The SDC communicates with other nodes (beyond its own local server) over a TCP/IP-based
protocol, so it is fully routable.
There is no interference with I/Os directed at traditional SAN LUNs which are not
part of the VxFlex OS configurations at the site.
Users may modify the default VxFlex OS configuration parameter to enable two
SDCs to access the same data. The SDC is the only VxFlex OS component that
applications see in the data path.
A Read Hit is a read to the VxFlex OS system (the SDS) that finds the requested
data already in the server's read cache. Read Hits run at memory speed rather than
disk speed, and no disk operations are required.
A Read Miss is a read to the VxFlex OS system when requested data is not in
cache. It must be retrieved from physical disks (HDD or SSD).
When there is a Read Miss, the data that is fetched from the physical disk leaves
a copy in the read cache to serve other I/O pertaining to the same data.
If any I/O is served from the host read cache, then those I/Os are counted as
Read Hits. Any other I/O is counted as a Read Miss.
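A minimal read-cache sketch, shown below, illustrates the hit/miss behavior described above; it is illustrative only and not the VxFlex OS caching implementation:

```python
# Minimal read-cache sketch; illustrative only, not the VxFlex OS caching implementation.

cache = {}          # block address -> data held in server RAM
hits = misses = 0

def read_block(address, read_from_disk):
    """Serve a read from cache on a hit; on a miss, fetch from disk and cache a copy."""
    global hits, misses
    if address in cache:
        hits += 1                      # Read Hit: served at memory speed
        return cache[address]
    misses += 1                        # Read Miss: must go to the HDD or SSD
    data = read_from_disk(address)
    cache[address] = data              # leave a copy to serve future reads of the same data
    return data

disk = lambda addr: f"data@{addr}"
read_block(42, disk)   # miss
read_block(42, disk)   # hit
print(hits, misses)    # 1 1
```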
The SDC fulfills the I/O request regardless of where any particular block
physically resides.
When the I/O is a Write, the SDC sends the I/O to the SDS where the primary
copy is located.
The primary SDS sends the I/O to the local drive and in parallel, another I/O is
sent to the secondary mirror.
After an acknowledgment (ack) is received from the secondary SDS, the primary
SDS acknowledges the I/O to the SDC. Then an acknowledgement is sent to the
application.
Note: Writes are buffered in host memory only when write caching is enabled. Writes held
in cache can be used to serve future reads.
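The write path described above can be sketched as follows; the object names are hypothetical and this is not actual VxFlex OS code, but it shows the primary copy being written locally while the secondary mirror is written in parallel, with the application acknowledged only after both copies are safe:

```python
# Illustrative write-path sketch with hypothetical object names; not actual VxFlex OS code.
import concurrent.futures

class SDS:
    def __init__(self, name):
        self.name = name
        self.blocks = {}

    def write_local(self, address, data):
        self.blocks[address] = data          # write to the local drive
        return "ack"

def sdc_write(address, data, primary: SDS, secondary: SDS):
    """SDC sends the write to the primary SDS, which mirrors it to the secondary in parallel."""
    with concurrent.futures.ThreadPoolExecutor() as pool:
        local = pool.submit(primary.write_local, address, data)       # primary's local drive
        mirror = pool.submit(secondary.write_local, address, data)    # secondary mirror copy
        local.result()
        mirror.result()     # primary waits for the secondary's ack...
    return "ack"            # ...then acknowledges the SDC, which acknowledges the application

print(sdc_write(7, b"payload", SDS("sds-1"), SDS("sds-2")))  # ack
```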
Introduction to Nexenta
Nexenta open-source software solutions, NexentaStor and NexentaEdge, deliver fully
featured file-based and block-based storage services that include snapshot, cloning,
and deduplication.
Key Features:
NexentaStor OS Introduction
NexentaStor OS is designed for network-attached storage (NAS) and Storage
Area Network (SAN) based deployments. It creates storage virtualization pools
consisting of multiple HDDs and SSDs.
Data can be organized in a flexible number of file systems and block storage.
Files can be accessed over the widely used Network File System (NFS) and
CIFS protocols, while block storage uses iSCSI or Fibre Channel protocols.
NexentaStor allows online snapshots to be taken of data and replicated to other
systems.
The following sections describe the features and components of NexentaStor.
Nexenta Management View (NMV) is a GUI that enables you to perform most of the
NexentaStor operations. You can access NMV and manage NexentaStor from Windows, UNIX,
or Macintosh operating systems.
Nexenta Management Console (NMC) is a command line interface that enables you to
perform advanced NexentaStor operations. NMC provides more functionality than NMV.
The following activities are only available in the NMC:
System upgrades
Restarting NexentaStor services
Advanced configurations
Expert mode operations
The Nexenta Management Server (NMS) controls all NexentaStor services and processes. It
receives and processes requests from NMC and NMV and returns the output.
NexentaEdge
NexentaEdge is Nexenta's scalable object software platform.
NexentaEdge provides iSCSI block services to store data using an object storage API
that is designed for rapid scalability.
NexentaEdge was designed for OpenStack deployment along with full support for both
the Swift and S3 APIs.
Most data centers need both NexentaEdge and NexentaStor:
It is designed for efficient storage of cold data—data that has not been accessed
in a certain amount of time.
Encoding is completed as post-processing when data goes cold.
It supports a fixed set of data/parity combinations.
Overview
SolVe, or “Solutions for Validating your Engagement," is an interactive,
standalone application that generates specific, step-by-step service procedures for
legacy EMC products.
The SolVe Desktop combines the Product Generators (PG) of Dell EMC products
into one application.
These procedures come directly from Dell EMC product support engineers and
include videos, Customer Replaceable Unit (CRU) Instructions, related knowledgebase
content, and warnings for known issues.
Legacy EMC products that are supported by SolVe Desktop include NetWorker,
VPLEX, Connectrix, VNX, Data Domain, Isilon, VMAX, and many others.
On the main page of the Support Site, click 'SolVe' and follow the installation
steps.
These units are called Field Replaceable Units (FRUs) or Customer Replaceable Units (CRUs);
the two terms are often used interchangeably.
Note: Always refer to product documentation for the proper unit replacements steps.
Disk Drives
Power Supplies
I/O Cards
Fans
Controllers
System boards
A CRU is designed for quick, safe, and easy part replacement without special skills,
knowledge, or tools. CRU parts may be removed and replaced by the customer.
CRUs include:
Hot-Swappable Components
Replaceable components are divided into two categories:
Note: Always refer to Owner's or Service Manual for the proper replacement procedure.
Only a few steps are required to remove and replace these components.
Front End: Switches on the front end can be connected to the controller array
via Fibre Channel (FC), FCoE, iSCSI, or SAS.
Back End: The enclosures on the back-end are connected to the controller via
SAS.
Number of controllers
Type of interface: Fibre Channel, iSCSI, or SAS
Number of switches or fabrics
Transport type: Virtual Port or Legacy Mode
Storage Center version and features enabled
Note: Always refer to the product reference guides for specific cabling procedures.
If a port or controller failure occurs, the virtual WWN moves to a working physical WWN in the
same fault domain and continues sending I/O.
When the failure is resolved and the ports are rebalanced, the virtual port returns to the
preferred physical port.
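A simplified sketch of this failover behavior is shown below; the WWN and port names are hypothetical, and this is not the Storage Center implementation:

```python
# Simplified virtual-port failover sketch; hypothetical WWNs and port names,
# not the Storage Center implementation.

class FaultDomain:
    def __init__(self, physical_ports):
        self.ports = {p: True for p in physical_ports}   # physical port -> is healthy

class VirtualPort:
    def __init__(self, wwn, preferred, domain):
        self.wwn = wwn
        self.preferred = preferred
        self.domain = domain
        self.current = preferred

    def handle_failure(self, failed_port):
        """Move the virtual WWN to a surviving physical port in the same fault domain."""
        self.domain.ports[failed_port] = False
        if self.current == failed_port:
            self.current = next(p for p, ok in self.domain.ports.items() if ok)

    def rebalance(self):
        """After the failure is resolved, return to the preferred physical port."""
        if self.domain.ports[self.preferred]:
            self.current = self.preferred

domain = FaultDomain(["ctrl1-port1", "ctrl2-port1"])
vport = VirtualPort("50:00:d3:10:00:00:00:10", "ctrl1-port1", domain)
vport.handle_failure("ctrl1-port1")
print(vport.current)                     # I/O continues on ctrl2-port1
domain.ports["ctrl1-port1"] = True
vport.rebalance()
print(vport.current)                     # back on the preferred port
```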
In a Single Fabric FC connection, the controllers are connected to a single switch. There is only
one FC path available so there is only one fault domain.
This configuration protects against the failure of a port or controller, but does not protect against
the failure of the switch.
In Dual Fabric FC configuration, two switches are used creating two fabrics. This configuration
is recommended since it prevents an FC switch failure or storage controller failure from causing
an outage.
Back End Cabling Best Practices
While back end cabling consists of SAS connections between controller and enclosures,
there are some factors to consider when identifying the cabling strategy, including:
SAS cable connectors are typically used with 6 Gb expansion enclosures, including the SC200
and SC220.
Mini-SAS HD cable connectors are typically used with 12 Gb expansion enclosures, including
the SC400, SC420 and SC460.
A SAS chain is made up of two paths—the A side and B side. The two separate paths
create redundancy. Each path can handle the data flow and each controller has full
access to all enclosures.
SAS chain limits are based on the number of disks within a chain. For example, the
SC9000 has a max disk limit of 192 per chain, while the SCv2020 has a max limit of 168
disks per chain.
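A quick arithmetic check, assuming a 24-drive expansion enclosure (an assumption for illustration only), shows how these per-chain disk limits translate into enclosure counts:

```python
# Quick arithmetic on chain limits, using the per-chain disk limits quoted above
# and an assumed 24-drive expansion enclosure.

drives_per_enclosure = 24          # assumption for this example

for model, chain_limit in {"SC9000": 192, "SCv2020": 168}.items():
    enclosures = chain_limit // drives_per_enclosure
    print(f"{model}: up to {enclosures} x 24-drive enclosures per chain")
# SC9000: up to 8, SCv2020: up to 7
```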
If a controller has multiple SAS cards, each path should start and end in the same card slots
between the controller and the expansion enclosures (1:1, 2:2). See the red path as an
example.
When there are multiple SAS cards connecting to multiple chains, put all the A side paths to
one card.
Put all the B side paths on the second card to provide SAS card redundancy.
Key points covered in this module include:
The process of connecting the array to the network switch is called front end
cabling.
Connecting the array to the enclosures is called back end cabling.
Virtual port mode provides port and storage controller redundancy by connecting
multiple active ports to each Fibre Channel or Ethernet switch.
The SAS connection between a storage system and the expansion enclosures is
referred to as a SAS chain.
A SAS chain is made up of two paths—the A side and B side. The two separate
paths create redundancy.
Refer to the product reference guides for specific cabling procedures.
LED Overview
LEDs provide a quick way to check overall hardware health. Understanding where to locate and
identify LEDs for each hardware component—and what the colors and patterns indicate—are
key steps to making a fast diagnosis and addressing issues.
Most replaceable units, such as power supplies, have LEDs to indicate the status of unit.
Because various product models and server generations may have different LED indicators,
always refer to the model product information for specific LED diagnostic and troubleshooting
guidelines.
To provide a general understanding of LED diagnostics, this module identifies the status
indicators on a PowerEdge server and a Storage Center array. Each hardware model may be
different.
System ID/health:
Each NIC has two indicators that provide information about activity and link status. The
activity LED indicates whether data is flowing through the NIC, and the link LED indicates the
speed of the connected network.
SC7020 Front Panel LEDs
The front panel of SC7020 storage array contains LEDs that indicate power status, overall
system health, and individual drive activity. These LEDs are located in two areas:
Quick Reference: Common Chassis LED Indicators