PERFORMANCE PLANNING
PARTICIPANT GUIDE
Midrange Storage Performance Planning
Back-end Connectivity
Dell Unity XT Onboard SAS Back-End Cabling
Dell Unity XT SAS I/O Module Back-End Cabling
Drive Configuration
Dell Unity XT Disk Processor Enclosure (DPE)
Dell Unity XT Drive Slot Layout
Dell Unity XT System Drives
Dell Unity XT Rotating Drive Support
Dell Unity XT SAS Flash Drive Support
Dell Unity XT SSD Wear Leveling
Mixing Drive Types in Dell Unity XT
Dell Unity XT Supported DAEs
Dell Unity XT Maximum Recommended Drive Types and IOPS
Dell Unity XT Hot Spares
PowerStore Base Enclosure
PowerStore Base Enclosure Drive Slot Layouts
PowerStore Drive Offerings
PowerStore Supported Expansion Enclosures
PowerStore Appliances and Drive Configurations
Appendix
Introduction
A solution architect must follow the best practices for each product when
planning and sizing a midrange storage solution with performance in mind.
At the highest level, design for optimal performance follows these few
simple rules:
The main principles for designing a midrange storage system for performance.
Overview
Dell Unity XT arrays are based on the Intel family of multicore processors.
The processors provide up to 16 cores capable of high levels of
storage performance.
The architecture is designed to support the latest flash technologies
such as Triple Level Cell (TLC).
The systems come in two variants: All-Flash Array (AFA) and Hybrid
Flash Array (HFA) models.
Capacity Considerations
The 12 Gb SAS SSD drives use a 2.5-inch form factor with 520
bytes/sector and are supported on:
25-drive Disk Processor Enclosure (DPE)
25-drive Disk Array Enclosure (DAE)
Distribute Workloads
Front-end Connectivity
Back-end Connectivity
When configuring drives, spread all the Flash drives across the available
back-end buses and DAEs.
All Dell Unity XT arrays provide two integrated back-end bus
connections.
Unity XT 480/F and higher models also support a four-port SAS
expansion I/O module.
Workloads
A single NAS server uses compute resources from only one node of the
PowerStore appliance.
If one PowerStore node is busier than the other, manually move NAS
servers to the peer node to balance the workload.
Simplify Configuration
Verify that the software update file is valid and not corrupt using the
SHA256 checksum.
Upgrade to the latest drive firmware available following the software
upgrade.
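As an illustration of this integrity check, the minimal Python sketch below computes the SHA256 checksum of a downloaded update bundle and compares it to the published value. The file name and the expected checksum are placeholders, not actual Dell artifacts.

```python
# Minimal sketch: verify a downloaded software update file against a published
# SHA256 checksum before upgrading. File name and checksum are hypothetical.
import hashlib

def sha256_of(path: str, chunk_size: int = 1024 * 1024) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "0123abcd..."  # checksum published with the update package (placeholder)
actual = sha256_of("Unity-upgrade-bundle.bin")
print("Checksum OK" if actual == expected.lower() else "Checksum mismatch: do not install")
```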
The CMI channels are used for communication between the Dell Unity XT
storage processors.
The bus effectively controls how much write bandwidth can be ingested by
the system.
Consider the table with an analysis of the expected read and write
bandwidth (sequential I/O).
The maximum capability for the Dell Unity XT 380/F is up to ~5.5 GB/s.
In Unity XT 480/F and higher models, the maximum write bandwidth
can be up to ~9.3 GB/s, accounting for parity writes.
Typically the bottleneck becomes the onboard SAS chip (if SAS I/O
module is not used).
Configuration | Maximum read bandwidth | Maximum write bandwidth
Unity XT 480/F (or higher model) with only DPE drives | 9.3 GB/s | Up to 8.4 GB/s
Unity XT 380/F with DPE drives and drives on SAS bus 1 | 10.5 GB/s | Up to 5.5 GB/s *
Unity XT 480/F (or higher model) with SAS I/O Module | Depends on front-end ports | Depends on front-end ports
Table showing the maximum read and write bandwidth for Dell Unity XT 380/F and Dell
Unity XT 480/F (or higher models).
*On Dell Unity XT 380/F systems the performance is affected by the single
CMI per channel limit.
These tabs compare the average sustained levels of CPU utilization over
long periods of time.
For each operating range, an analysis of the system workload handling
is provided.
The expected system behavior is explained in the case that a single
SP must service the entire workload.
Low Utilization
Normal Utilization
High Utilization
Dell UnityVSA
Dell Unity virtual storage appliance or UnityVSA is a software-defined
storage solution which provides most of the same features as Dell Unity
XT systems.
PowerStore Deployment
PowerStore T Unified: ✓ ✓ X
PowerStore T Block Optimized: ✓ X X
PowerStore X: ✓ X ✓
PowerStore T Unified
In this type of deployment, CPU and memory resources are used for both
block and file IOPS.
A PowerStore T block optimized system can deliver more block IOPS than
the same model deployed as a unified system.
This deployment can increase the amount of block workload that the
system can service, because the mode devotes the CPU and memory that
would otherwise be used for file capabilities to block I/O.
PowerStore X
PowerStore Cluster
Recommendations
When deploying multiple appliances for file access, plan to have multiple
clusters.
Migration of storage resources between cluster appliances is
applicable to block storage resources only.
File resources cannot migrate to a different appliance in a cluster.
In a PowerStore T Unified mode deployment, file services are
restricted to the cluster’s primary appliance.
Back-end Connectivity
SAS Ports
Dell Unity XT Storage Processors use the SAS ports to move data to and
from the back-end drives.
Dell Unity XT systems have two onboard 12 Gb SAS ports in each of the
SPs within the Disk Processor Enclosure (DPE).
The onboard SAS ports provide sufficient IOPS and bandwidth
capabilities to support most workloads.
Maximum of 250,000 IOPS per port.
Maximum of 2,500 MB/s per port.
Two buses are connected to mini-SAS HD ports for Disk Array
Enclosure (DAE) expansions.
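For rough planning purposes, the per-port figures above can be aggregated. The short sketch below assumes linear scaling across the two onboard ports of one SP, which is an upper bound rather than a guaranteed system capability.

```python
# Back-of-the-envelope ceiling for the two onboard 12 Gb SAS ports on one SP.
# Assumes linear scaling across ports, which is an upper bound, not a guarantee.
PORT_MAX_IOPS = 250_000      # per onboard SAS port
PORT_MAX_MBPS = 2_500        # per onboard SAS port

ports = 2
print(f"Aggregate ceiling: {ports * PORT_MAX_IOPS:,} IOPS, {ports * PORT_MAX_MBPS:,} MB/s")
```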
SAS Buses
On Unity XT 480/F, 680/F, and 880/F systems, one SAS internal bus is
dedicated to the DPE.
DPE drives are directly accessed by the underlying SAS controllers
over internal connections.
The DPE operates on bus 99 which is separate from the SAS
expansion ports.
The DPE read and write bandwidth is higher than a Unity XT 380/F
with no expansion.
When only the two onboard SAS ports on the DPE are available, Dell
Technologies recommends connecting DAEs in the following order:
1. DAE 1 connects to SAS Bus 1 (onboard SAS port 1).
2. DAE 2 connects to SAS Bus 0 (onboard SAS port 0).
3. DAE 3 connects to SAS Bus 1 (onboard SAS port 1).
DAEs can be added to the system while the operating system is active up
to the DAE and drive slot limit for the storage system.
DAEs or drive slots over the system limit are not allowed to come
online.
Consider the maximum number of drives supported for each storage
system model.
SAS Buses
The Unity XT DPE onboard SAS ports are set on the default buses 0 and
1.
The four ports on the SAS I/O Module are on designated buses 2, 3, 4,
and 5.
Backend Cabling
The first expansion DAE is cabled to DPE SAS port 1 to begin back-end
bus 1 as enclosure 0 (BE1 EA0).
The remaining DAEs on the bus are daisy-chained to one another.
For example, the first DAE is daisy-chained to the seventh DAE, which is
designated BE1 EA1, and so on.
The embedded module does not support the 2-port card (100 GbE QSFP),
so there is no NVMe Expansion Enclosure support.
To support the addition of NVMe Expansion shelves, the module must
be converted to Embedded I/O Module v2.
Recommendations:
Use 2-meter cables to connect the base enclosure to the expansion
enclosure easily.
Use 1-meter cables to connect expansion enclosures to other
expansion enclosures.
1. Cable both I/O modules on the base enclosure to the Link Control Card
(LCC) on the first expansion enclosure.
a. Connect node A, SAS port B to LCC A, port A on the expansion
enclosure.
b. Connect node B, SAS port B to LCC B, port A on the expansion
enclosure.
2. Cable both I/O modules on the base enclosure to the LCCs on the last
expansion enclosure in the stack:
a. Connect node A, SAS port A to LCC B, port B on the last expansion
enclosure.
b. Connect node B, SAS port A to LCC A, port B on the last expansion
enclosure.
3. Cable expansion enclosure to expansion enclosure:
PowerStore 500
In PowerStore 500 models, the 4-port Mezz card (MEZZ 0) in each node
provides two NVMe ports.
Ports 2 and 3 are used for back-end connectivity to an ENS24
Expansion enclosure.
Each base enclosure supports two redundant connections to an
expansion enclosure.
The maximum number of expansion enclosures that are supported is
3.
Recommendations:
To avoid performance issues, cables cannot be longer than 3 meters.
PowerStore 1200-9200
PowerStore 1200, 3200, 5200, and 9200 models come with an embedded
I/O module v2 in each of its nodes.
The I/O Personality Module (IOPM) has one 2-port card slot that is
primarily used for 100 GbE (QSFP) back-end NVMe Expansion
Enclosure connectivity.
The base enclosure supports two redundant connections to the
expansion enclosure.
The maximum number of expansion enclosures that are supported is
3.
Recommendations:
To avoid performance issues, cables cannot be longer than 3 meters.
Drive Configuration
There are LEDs on the front of the DPE for both the enclosure and drives
to indicate status and faults.
The Dell Unity XT 380/380F model uses a different physical chassis than
the Dell Unity XT 480/480F and higher models.
The three high-end Unity XT system models are the 480/480F, 680/680F,
and 880/880F.
The DPE on these models houses 25 drive slots supporting 2.5-in SAS
and SAS Flash drives.
Important: The first four drive slots are reserved for system
drives which contain data that is used by the operating
environment (OE) software. Space is reserved for the system
on these drives, and the remaining space is available for
storage pools. These drives should not be moved within the
DPE or relocated to another enclosure.
The first four drives of Dell Unity XT systems are system drives (DPE disk
0 through DPE disk 3). Capacities from these drives store configuration
and other critical system data.
The available capacity from each of these drives is about 107 GB less
than from other drives.
System drives can be added to storage pools like any other drive, but
offer less usable capacity due to the system partitions.
To reduce the capacity difference when adding the system drives to a
pool, use a smaller RAID width for pools which contain the system
drives.
For example, choose RAID 5 (4+1) for a pool containing the system
drives, instead of RAID 5 (12+1).
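The sketch below works through the arithmetic behind this recommendation. It assumes a hypothetical 1.92 TB drive, the ~107 GB system reservation noted above, and the simplification that every drive in a parity group contributes only as much usable space as the smallest drive in that group.

```python
# Worked example of the capacity effect of RAID width when the four system drives
# (about 107 GB less usable each) are included in a parity RAID group.
# Assumptions: a hypothetical 1.92 TB drive, and the simplification that every
# drive in a group contributes only as much space as the smallest drive in it.
DRIVE_TB = 1.92
SYSTEM_OVERHEAD_TB = 0.107

def group_usable_tb(data_drives: int, includes_system_drives: bool) -> float:
    per_drive = DRIVE_TB - (SYSTEM_OVERHEAD_TB if includes_system_drives else 0.0)
    return per_drive * data_drives

for data, parity in [(4, 1), (12, 1)]:                  # RAID 5 (4+1) vs RAID 5 (12+1)
    with_sys = group_usable_tb(data, True)
    without_sys = group_usable_tb(data, False)
    print(f"RAID 5 ({data}+{parity}) group holding the system drives: "
          f"{with_sys:.2f} TB usable ({without_sys - with_sys:.2f} TB less than a normal group)")
```

Under these assumptions, the 4+1 group gives up about 0.43 TB while the 12+1 group gives up about 1.28 TB, which is why the smaller width reduces the capacity difference.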
Considerations
Heavy client workloads to these drives may interfere with the ability
of the system to capture configuration changes and result in slow
management operations.
Conclusion
Consider not using the system drives in storage pools for large
configurations with high drive counts and many storage objects.
System drives should not be used for systems which do not allow
remote access by support.
SAS 10K drives
NL-SAS drives
SAS Flash 2 drives: 400 GB
SAS Flash 3 drives
SAS Flash 4 drives: 1.92 TB, 3.84 TB, 7.68 TB, and 15.36 TB
SSDs have a limit for the amount of program/erase (PE) cycles that can
be done before the device becomes unreliable.
Dell Unity XT systems use a wear leveling technique to ensure that each
drive is operational until the end of the warranty period. Wear leveling
optimization ensures that the Flash drives do not prematurely wear out.
The SSD optimizations limit the number of writes to the drive. The drive is
given a certain quota of writes that is calculated based on a wear
consumption model.
Slice allocation requests take into account the wear level. In traditional
pools, wear information is propagated at the RAID Group [RG] level per
storage resource. The Dell Unity XT system determines which RAID
Group that the slice is allocated from in an attempt to balance wear across
other RAID Groups. This information is not visible to the user.
Unisphere users can view wear alerts, which are issued at 180, 90, 60,
and 30 days before the predicted end of life. A Proactive Copy (PACO)
operation to a spare drive is automatically initiated at 30 days.
The drive health status is also updated to show a faulted state, and the
drive is no longer usable by the system.
The mixing of different Flash drive types in the same pool is supported.
The same drive sparing rules still apply.
For example, a SAS Flash 4 drive still requires a SAS Flash 4 drive
to be available for sparing.
The drive types under the Mixed Pools column can be used in mixed
pools.
The drive types under the All Flash Pools column can be used in all-
flash pools.
Model | Maximum drives
380/380F | 500
480/480F | 750
680/680F | 1000
880/880F | 1500
Depending on the workload attributes that are applied (I/O size, access
patterns, queue depth), individual drives provide varying levels of
performance.
Drive type | IOPS per drive
SAS 15K | 350
NL-SAS | 150
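As a rough illustration of how these rule-of-thumb figures feed into sizing, the sketch below estimates the number of drives needed to service a given host IOPS load. It ignores RAID write penalties, caching, and I/O size, so it is a starting point only.

```python
# Rough drive-count estimate from the rule-of-thumb IOPS figures above.
# Ignores RAID write penalty, caching, and I/O size effects; illustration only.
import math

DRIVE_IOPS = {"SAS 15K": 350, "NL-SAS": 150}

def drives_needed(host_iops: int, drive_type: str) -> int:
    return math.ceil(host_iops / DRIVE_IOPS[drive_type])

print(drives_needed(10_000, "SAS 15K"))  # 29 drives
print(drives_needed(10_000, "NL-SAS"))   # 67 drives
```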
Nodes use the latest NVMe interface to connect to the internal drives.*
For systems that use two NVMe NVRAM drives, slots 21 and 22 must
remain unpopulated.
4. Drive Ready/Activity LED (Blue) on each drive.
5. Drive Fault LED (Amber) on each drive.
6. World Wide Name (WWN) Seed Tag: A blue pull-out tag that is located
between slots 7 and 8
7. Dell Serial Number Tag: A black pull-out tag that is located between
slots 16 and 17. The left side of the tag shows the Serial Number,
Service Tag, and Part Number. The right side shows the QRL code.
The QRL can be used to quickly find product info.
The PowerStore Base Enclosure has 25 slots that are labeled 0 to 24, and
support only NVMe devices.
Data drive slots in the base enclosure can be populated with NVMe
SSD or NVMe SCM drives (for data) in any combination.
All models except the PowerStore 500 have drive slots for NVMe
NVRAM devices (for caching and vaulting).
A minimum of six data drives must be used.
PS 500 Model
On the PowerStore 500 model, slots 0 through 24 are used for data
storage. There are no NVRAM drives.
On the PowerStore 1000, 1200, 3000, and 3200 models, the last two slots
(23 and 24) are populated with two NVMe NVRAM devices.
On these models, the two NVMe NVRAM devices are used for write
caching and vaulting.
PowerStore 5000, 5200, 7000, 9000, and 9200 have higher performance
requirements.
These models are populated with four NVMe NVRAM devices in all the
NVRAM slots, 21 through 24.
On these models, the four NVMe NVRAM devices are used for write
caching and vaulting.
Processors | Two Intel Xeon CPUs, 24 cores, 2.2 GHz | Four Intel Xeon CPUs, 40 cores, 2.4 GHz | Four Intel Xeon CPUs, 64 cores, 2.1 GHz | Four Intel Xeon CPUs, 96 cores, 2.2 GHz | Four Intel Xeon CPUs, 112 cores, 2.2 GHz
Max Drives | 97 | 93 | 93 | 93 | 93
NVRAM Drives | N/A | 2 | 2 | 4 | 4
Fibre Channel and Ethernet networks play a large role in determining the
performance potential of Dell midrange storage solutions.
High Availability
The NAS server should also be associated with port 0 of the first I/O
Module on SPB.
For this configuration, access is available to the same networks.
Dell Technologies recommends using redundant switch hardware
between the midrange storage system and external clients.
Load Balancing
In this case, all eight ports across the two SPs are used. Do not
zone all hosts to the first port of each I/O Module.
A failure domain encompasses a section of a network that is negatively
affected when a critical device or network service experiences problems.
Ethernet Networks
A Unity XT 380/380F system DPE has three options of ports for front-end
connectivity.
Rear view of a Unity XT 380/380F 12 Gb/s 25 drives DPE showing front-end connectivity
port options.
The DPE on Unity XT 480/480F and higher models has two options of
ports for front-end connectivity.
Rear view of a Unity XT 680 DPE showing front-end connectivity port options.
Fibre Channel
iSCSI
If possible, configure Jumbo frames (MTU 9000) on all ports in the end-to-
end network, to provide the best performance.
Dell Unity supports 10 GbE and 1GBase-T ports that provide iSCSI
offload.
The CNA ports (when configured as 10 GbE or 1GBase-T) and the 2-
port 10 GbE I/O Module ports provide iSCSI offload.
Using these modules with iSCSI can reduce the protocol load on SP
CPUs by 10-20%, so that those cycles can be used for other services.
Port Performance
The table provides maximum expected IOPS and bandwidth values from
different Unity XT ports used for Block front-end connectivity.
The capability of a port does not guarantee that the system can reach
that level, nor does it guarantee that performance scales with
additional ports.
System capabilities are highly dependent on other configuration
parameters.
File Connectivity
Dell Unity supports NAS (NFS, FTP, and SMB) connections on multiple
1 Gb/s and 10 Gb/s port options.
10GBase-T ports can auto-negotiate to 1 Gb/s speeds.
10 Gb/s is recommended for the best performance.
If possible, configure Jumbo frames (MTU 9000) on all ports in the end-to-
end network, to provide the best performance.
LACP
LACP can be configured across ports on-board the SP, or across ports
on the same I/O Module.
LACP can be configured across any Ethernet ports that have the same
speed, duplex, and MTU.
LACP cannot be enabled on ports that are also used for iSCSI
connections.
FSN
Combine FSN and LACP with redundant switches to provide the highest
network availability.
Load Balancing
Port Performance
The table provides maximum expected IOPS and bandwidth values from
different Unity XT ports used for File front-end connectivity.
The capability of a port does not guarantee that the system can reach
that level, nor does it guarantee that performance scales with
additional ports.
System capabilities are highly dependent on other configuration
parameters.
Overview
Theory of Operations
A tenant is created with a name, one or more VLAN IDs, and a Universally
Unique Identifier (UUID).
A tenant created with three isolated NAS servers and access defined by associated host
configuration profiles.
Once a tenant is created, NAS servers must be created for each tenant
VLAN.
Host configurations that provide access to hosts, subnets, and
netgroups can then be created and associated with the tenants.
These host configurations are used to control the access of NFS and
SMB clients to shared file systems.
Access to SMB file systems is controlled through file and directory
access permissions set using Windows directory controls.
The associated VLANs separate the tenant traffic, providing tenant data
separation and increasing security. The tenant traffic is separated at the
Linux Kernel layer.
Each tenant has one or multiple NAS servers, however, each NAS server
can be associated with only one tenant.
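The relationships described above, where a tenant owns one or more VLANs and NAS servers while each NAS server belongs to exactly one tenant and one tenant VLAN, can be pictured with the small data-model sketch below. The class names and fields are illustrative only and are not the product's API.

```python
# Illustrative data model of the multi-tenancy relationships described above.
import uuid
from dataclasses import dataclass, field

@dataclass
class Tenant:
    name: str
    vlan_ids: list[int]
    uid: str = field(default_factory=lambda: str(uuid.uuid4()))  # the tenant's UUID
    nas_servers: list["NasServer"] = field(default_factory=list)

@dataclass
class NasServer:
    name: str
    vlan_id: int
    tenant: Tenant  # each NAS server belongs to exactly one tenant

    def __post_init__(self):
        if self.vlan_id not in self.tenant.vlan_ids:
            raise ValueError("NAS server VLAN must be one of the tenant's VLANs")
        self.tenant.nas_servers.append(self)  # a tenant can own many NAS servers

t = Tenant("TenantA", vlan_ids=[100, 101])
NasServer("nas01", vlan_id=100, tenant=t)
NasServer("nas02", vlan_id=101, tenant=t)
print([n.name for n in t.nas_servers])
```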
VLAN Tagging
Three NAS servers with the same IP address isolated by VLAN tagging
The association of the NAS servers with each tenant provides the desired
network isolation, giving each tenant its own IP namespace.
PowerStore 500
PowerStore 1000-9000
The 4-port card on MEZZ 0 is used for connection of the Base Enclosure
to an intercluster switch.
PowerStore 1200-9200
Embedded Module v2 with 4-port card and optional 2-port 100 GbE QSFP card
The limitation occurs only when all four ports on the Fibre Channel
I/O module are operating at 32 Gb/s.
Both I/O module slots on PowerStore 500 are 8-lane PCIe, and therefore
there is no slot preference.
NVMe/FC
Note that all parts of the network, including switches and HBAs, must
support NVMe over Fibre Channel.
Ethernet Ports
iSCSI
Enable Jumbo frames for iSCSI by setting the Cluster MTU to 9000, and
setting the storage network MTU to 9000.
The embedded module 4-port card and the optional network I/O modules
are 8-lane PCIe Gen3.
When more than two 25 GbE ports are used, the cards are
oversubscribed for MB/s. To maximize MB/s scaling in the system, it is
recommended to:
Cable and map the first two ports of all cards in the system first.
Then cable and map other ports as needed.
For PowerStore T Unified deployments configured for both iSCSI and file
access, the recommendations are:
Use different physical ports for NAS and iSCSI.
Log in host iSCSI initiators to iSCSI targets on the ports specifically
planned for iSCSI.
NVMe/TCP
NAS
The ports must be on the same node and operate at the same
speed.
A mirror link aggregation will automatically be created on the peer
node.
Enable Jumbo Frames for NAS by setting the cluster MTU to 9000.
Environment
The infrastructure of an ISP data center was upgraded with new Ethernet
and Fibre Channel switches to support 25 Gb/s and 32 Gb/s speeds. A
Dell Unity XT 680F system is configured to provide isolated NAS services
for multiple tenants and consistency groups for the invoicing application
running on a cluster of servers. An analysis of the connected hosts and
NAS clients indicates that none requires replacement of HBAs or NICs.
Dell Unity XT supports two different types of storage pools: dynamic pools
and traditional pools.
Pools contain groups of drives in one or more RAID configurations.
For each RAID type, there are multiple drive count options.
Dell recommends that a storage pool always has at least 10% free
capacity to maintain proper operation.
By default, Dell Unity XT systems raise an alert if a storage pool has less
than 30% free capacity.
Dell Unity XT applies RAID protection to the storage pool to protect user
data against drive failures.
Storage pools are built using one or more individual drive groups that are
based on the RAID type and stripe width for each selected tier.
The RAID type determines the performance characteristics of each
drive group.
For example, a RAID 5 drive group can still operate with the loss of
one drive in a Traditional Pool, or its equivalent in a Dynamic Pool.
The stripe width determines the fault characteristics of each drive
group.
RAID Characteristics

Protection Level | Characteristics
RAID 1/0 | RAID 1/0 provides the highest level of performance from a given set of drive resources. RAID 1/0 also has the lowest CPU requirements; however, only 50% of the total drive capacity is usable. Each drive in a mirrored pair has identical data, and data is striped across the drive pairs.
Overview
Dynamic pools are storage pools whose tiers are composed of Dynamic
Pool private RAID Groups.
Dynamic Pools apply RAID to groups of drive extents from drives
within the pool and allow for greater flexibility in managing and
expanding the pool.
The feature enables improved pool planning and provisioning, and
delivers a better cost per GB.
Homogeneous pool: A pool consisting of a single drive type such as SAS-Flash, SAS, or NL-SAS drives.
Heterogeneous pool: A pool consisting of a combination of SAS-Flash, SAS, and/or NL-SAS drives.
Provisioning
The storage administrator must select the RAID type (RAID 1/0, RAID 5 or
RAID 6) for the tiers that will build the dynamic pool.
The system automatically populates the RAID width which is based on the
number of drives in the system.
The example shows the configuration process for an All-Flash pool with
RAID 5 (4+1) protection.
1. The process combines the drives of the same type into a drive
partnership group.
2. At the physical disk level, the system splits the whole disk region into
identical portions of the drive called drive extents.
a. Drive extents hold a position of a RAID extent or are held in reserve
as a spare space.
b. The drive extents are grouped into a drive extent pool.
3. The drive extent pool is used to create a series of RAID extents. RAID
extents are then grouped into one or more RAID Groups.
4. The process creates a single private LUN for each created RAID
Group by concatenating pieces of all the RAID extents and striping
them across the LUN.
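The conceptual sketch below mirrors these steps for a single drive partnership group: drives are cut into drive extents, RAID extents take one extent from a RAID-width's worth of distinct drives, and the RAID extents are concatenated into a private LUN. Extent counts and sizes are illustrative, and the spare-space reservation is omitted for brevity.

```python
# Conceptual sketch of the dynamic pool layout steps above: drives are split into
# drive extents, RAID extents take one extent from N distinct drives, and the RAID
# extents are concatenated into a private LUN. All sizes and counts are illustrative.
def build_pool(drive_count: int, raid_width: int = 5, extents_per_drive: int = 4):
    drives = [f"drive{d}" for d in range(drive_count)]          # drive partnership group
    raid_extents = []
    for e in range(extents_per_drive):                          # walk extent "rows"
        row = [(d, e) for d in drives]
        # each RAID extent uses one drive extent from raid_width different drives
        for i in range(0, len(row) - raid_width + 1, raid_width):
            raid_extents.append(row[i:i + raid_width])
    private_lun = [extent for re in raid_extents for extent in re]   # concatenated/striped
    return raid_extents, private_lun

raid_extents, private_lun = build_pool(drive_count=10)
print(f"{len(raid_extents)} RAID extents, {len(private_lun)} drive extents in the private LUN")
```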
Performance
At the time of creation, dynamic pools use the largest RAID width possible
with the maximum number of drives that are specified for the stripe width.
When creating a dynamic pool, RAID type and spare space may be
selected.
With Unisphere, the system automatically defines the RAID width to
use based on the number of drives selected.
The selected drive count must be in compliance with the RAID
stripe width plus spare space reservation requirement for each of
the selected tiers.
A storage administrator may set the RAID width only when creating
the pool with the UEMCLI or REST API interfaces.
Up to two drives of spare space per 32 drives may be selected.
If there is a drive failure, the data that was on the failed drive is
rebuilt into the spare capacity on the other drives in the pool.
Also, unbound drives of the appropriate type can be used to
replenish a pool's spare capacity after the pool rebuild has
occurred.
With dynamic pools, there is no performance or availability advantage to
using smaller RAID widths. To maximize usable capacity with parity RAID,
it is recommended to initially create the pool with enough drives to
guarantee the largest possible RAID width.
For RAID 5, initially create the pool with at least 14 drives.
For RAID 6, initially create the pool with at least 17 drives.
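A sketch of this width-selection rule is shown below, using the drive-count ranges from the RAID width tables in the appendix; it simply returns the largest width whose minimum drive count (including the spare reservation) is met.

```python
# Sketch of the automatic RAID width selection described above, based on the
# drive-count ranges in the appendix tables (RAID 5 and RAID 6 shown).
def raid_width(raid_type: str, drive_count: int) -> str:
    if raid_type == "RAID 5":
        table = [(6, "4+1"), (10, "8+1"), (14, "12+1")]
    elif raid_type == "RAID 6":
        table = [(7, "4+2"), (9, "6+2"), (11, "8+2"), (13, "10+2"), (15, "12+2"), (17, "14+2")]
    else:
        raise ValueError("unsupported RAID type in this sketch")
    width = None
    for min_drives, w in table:
        if drive_count >= min_drives:
            width = w                      # largest width whose minimum drive count is met
    if width is None:
        raise ValueError("not enough drives for this RAID type")
    return width

print(raid_width("RAID 5", 14))   # 12+1 (largest possible width)
print(raid_width("RAID 6", 9))    # 6+2
```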
Spare Space
Dynamic pools use spare space to rebuild failed drives within the pool.
Spare space consists of drive extents that are not associated with a RAID
Group, used to rebuild a failed drive in the drive extent pool.
Each drive extent pool reserves a specific percentage of extents on
each disk as the spare space.
The percentage of reserved capacity varies based on drive type and
the RAID type that is applied to this drive type.
If a drive within a dynamic pool fails, spare space within the pool is
used.
Drive Rebuild
When a pool drive fails, the spare space within the same Drive
Partnership Group as the failed drive is used to rebuild the failed drive.
Example of a rebuild process on a seven-drive pool (D1-D7) with a faulted drive (D4).
A spare extent must be from a drive that is not already in the RAID extent
that is being rebuilt.
Considerations
A dynamic pool can expand up to the system limits by one or more drives
under most circumstances.
Adding capacity within a Drive Partnership Group causes the drive extents
to rebalance across the new space. This process includes rebalancing
new, used, and spare space extents across all drives.
The process runs in parallel with other processes and in the background.
Balancing extents across multiple drives distributes workloads and
wear across multiple resources.
Optimize resource use, maximize throughput, and minimize response
time.
Creates free space within the pool.
When adding a single drive, or fewer drives than the RAID width, the space
becomes available in about the same time that a PACO operation to the drive takes.
The system identifies the extents that must be moved off drives to the
new drives as part of the rebalance process.
As the extents are moved, their original space is freed up.
If adding a single drive and the spare space boundary is crossed, none of
that drive capacity is added to the pool usable capacity.
If expanding with a drive count that is equal to the Stripe width or less, the
process is divided into two phases:
1. The dynamic pool is expanded by a single drive, and the free space
made available to the user.
This process enables some of the additional capacity to be added
to the pool.
o Only if the single drive expansion does not increase the amount
of spare space required.
If the pool is running out of space, the new free space helps delay
the pool from becoming full.
The new free space is made available to the user if the expansion
does not cause an increase in the spare space that the pool requires.
o When the extra drives increase the spare space requirement, a
portion of the space being added, equal to the size of one drive,
is reserved.
o This space reservation can occur when the spare space
requirement for the drive type (one spare drive's worth per 32 drives) is crossed.
2. The dynamic pool is expanded by the remaining drive count for the
original expansion request.
Expanding a dynamic pool by the same number (stripe width plus the hot
spare reservation) and type of drives concludes relatively fast.
The expansion process creates extra drive extents.
From the drive extents, the system creates RAID extents and RAID
Groups and makes the space available to the pool as user space.
The added drives exceed the maximum number for the RAID stripe
width.
For example, consider a RAID 5 (4+1) dynamic pool configured
with six drives (one drive worth of hot spare capacity per 32 drives).
When adding six more drives, the pool drive count (12) exceeds the
maximum allowed for the RAID stripe width (nine).
The time for the space to be available matches the time that it takes to
expand a traditional pool.
The user and spare extents are all contained on the original disks.
There is no rebalancing.
If the number of drives in the pool has not reached the 32 drive
boundary there is no requirement to increase the spare space.
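The spare-space boundary arithmetic referenced in these expansion cases can be sketched as below, using the one-drive-of-spare-space-per-32-drives policy from the examples in this section (other reservation policies can be selected at pool creation).

```python
# Sketch of the "one spare drive's worth per 32 drives" arithmetic referenced above.
# The 1-per-32 value follows the examples in this section; up to 2 per 32 can be chosen.
import math

def spare_drives_required(drive_count: int, per_32: int = 1) -> int:
    return per_32 * math.ceil(drive_count / 32)

def expansion_crosses_boundary(current: int, added: int, per_32: int = 1) -> bool:
    return spare_drives_required(current + added, per_32) > spare_drives_required(current, per_32)

print(spare_drives_required(6))            # 1 drive's worth of spare space
print(expansion_crosses_boundary(31, 2))   # True: 33 drives now need 2 drives of spare space
```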
Mixing Drives
These rules apply for storage pool creation and expansion, and the use of
spare space.
When a new drive type is added, the pool must be expanded with a
minimum number of drives.
The number of drives must satisfy the RAID width plus the set hot
spare capacity.
If the number of larger-capacity drives is not greater than the RAID width,
the larger drives' entire capacity is not reflected in the usable capacity.
The example displays a RAID 5 (4+1) configuration with 1/32 drives of hot
spare reservation, using mixed drive sizes.
A storage administrator selects an 800 GB drive to add to the pool
using the UI.
In this configuration, only 400 GB of space is available on the 800 GB
drive.
The remaining space is unavailable until the drive partnership group
contains at least the same number of 800 GB drives as the RAID
width+1.
Adding drives of the same size as the largest drive on the dynamic pool
All the space within the drives is available only when the number of drives
within a drive partnership group meet the RAID width+1 requirement.
The example shows the expansion of the original RAID 5 (4+1) mixed
drive configuration by five drives.
The operation reclaims the unused space within the 800 GB drive.
After adding the correct number of drives to satisfy the RAID width of
(4+1) + 1, all the space becomes available.
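The sketch below captures the rule illustrated by this example: a larger drive contributes its full capacity only once the drive partnership group holds at least RAID width + 1 drives of that size, and until then it is treated like the smaller drives.

```python
# Sketch of the mixed-drive-size rule above: a larger drive only contributes
# capacity beyond the smaller drive size once the drive partnership group holds
# at least (RAID width + 1) drives of that larger size.
def usable_gb(drive_gb: int, count_of_this_size: int, smaller_gb: int, raid_width_drives: int) -> int:
    if count_of_this_size >= raid_width_drives + 1:
        return drive_gb                     # full capacity of the larger drive is usable
    return min(drive_gb, smaller_gb)        # otherwise it is treated like the smaller drives

# RAID 5 (4+1) pool of 400 GB drives, then 800 GB drives are added.
print(usable_gb(800, count_of_this_size=1, smaller_gb=400, raid_width_drives=5))  # 400
print(usable_gb(800, count_of_this_size=6, smaller_gb=400, raid_width_drives=5))  # 800
```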
Overview
The Dell Unity XT platform uses dynamic pools by default but supports the
configuration of traditional pools using UEMCLI or REST API.
In physical systems running Unity OE 5.2 and later releases, traditional
pools can still co-exist with dynamic pools.
Dell UnityVSA supports the deployment of ONLY traditional storage
pools.
In a homogeneous pool, only one disk type (SAS-Flash, SAS, or the NL-
SAS drives) is selected during pool creation.
Heterogeneous pools consist of multiple disk types. A hybrid system
supports SAS-Flash, SAS, and NL-SAS drives in the same pool.
Provisioning
The storage administrator defines the settings for building the traditional
pool:
Select the drive type from the available tiers. Each tier supports drives
of a certain type and a single RAID level.
Select the RAID type (RAID 1/0, RAID 5 or RAID 6) and the stripe
width.
Identify if the pool must use the FAST Cache feature.
Optionally associate a Capability Profile for provisioning vVol
datastores.
3. These Private LUNs are split into contiguous array slices of 256
MB. Slices hold user data and metadata. (FAST VP moves slices to
the various tiers in the pool using this granularity level.)
4. After the Private LUNs are partitioned out in 256 MB slices, they are
consolidated into a single pool that is known as a slice pool.
Performance
Dell recommends smaller RAID widths when configuring the same number
of drives in a traditional pool.
Smaller widths provide the best performance and availability.
For example, when configuring a Traditional Pool tier with RAID 6, use
4+2 or 6+2 as opposed to 10+2 or 14+2.
Hot Spare
Homogeneous and Heterogeneous pools with hard drives arranged in RAID Groups (RG)
All-Flash Pools
For an all-Flash pool, use only 1.6 TB SAS Flash 3 drives, and
configure them all with RAID 5 (8+1).
Hybrid Pools
Hybrid pools can contain HDDs (SAS and NL-SAS drives) and Flash and
can contain more than one type of drive technology in different tiers.
Use hybrid pools for applications that do not require consistently low
response times, or that have large amounts of mostly inactive data.
Hybrid pools typically provide greater capacity at a lower cost than all-
Flash pools.
The pools have lower overall performance and higher response times.
Dell recommends using only a single drive speed, size, and RAID width
within each tier of a hybrid pool.
Block storage resources that are supported by the Dell Unity XT platform
include LUNs, consistency groups, VMFS datastores and vVol (Block).
Dell Unity XT supports the provisioning of block storage capacity for ESXi
hosts.
VMFS datastores are built from Dell Unity XT LUN (Block).
Dell Unity XT vVol (Block) datastores are storage containers for
VMware virtual volumes (vVols).
Dell recommends using thin storage objects, as they provide the best
capacity utilization, and are required for most features.
Thin storage objects are virtually provisioned and space efficient.
Thin storage objects are recommended when any of the following features
are used:
Data Reduction
Snapshots
Asynchronous replication
Thick storage objects reserve capacity from the storage pool and dedicate
it to that particular storage object.
Thick storage objects are not space efficient, and do not support the
use of space-efficient features.
Protocol Endpoints or PEs establish a data path between the ESXi hosts
and the respective vVol datastores.
Volumes
When the capacity and demand change over time, moving the
storage resource to another appliance within the cluster is
supported.
The operation is performed with a Manual Migration, Assisted
Migration, or Appliance Space Evacuation in a PowerStore cluster.
The volumes are thin provisioned to optimize the use of available
storage.
An application tag from a predefined list of categories can be
associated with each volume.
The volume is accessible through either the iSCSI targets that the host
is connected to or the FC ports the host is connected to.
It depends on which storage network is configured between the
host and the appliance.
Each volume is associated with a name and logical unit number
identifier (LUN).
Volume Groups
NAS Servers
A NAS Server is a virtual file server that provides the file resources on the
IP network, and to which NAS clients connect. The NAS server is
configured with IP interfaces and other settings that are used to export
shared directories on various file systems.
Pooling individual hosts together into a host group enables you to
perform volume-related operations across all the hosts in the group.
NAS Servers are configured to enable clients to access data over Server
Message Block (SMB), and Network File System (NFS) protocols.
Windows clients have access to file-based storage shared using the
SMB protocol.
Linux and UNIX clients can access file systems using the NFS
protocol.
NAS servers also enable clients to access data over File Transfer Protocol
(FTP) and Secure FTP (SFTP).
File Systems
SMB Shares and NFS Exports are exportable access points to file system
storage that NAS clients use.
PowerStore supports file system snapshots and thin clones of file systems.
It is not recommended for the same host to access the same block
storage resource using more than one protocol.
PowerStore provides access to block storage resources through Fibre
Channel, NVMe/FC, iSCSI or NVMe/TCP protocols.
Hosts must access the block resource using only one of these
protocols.
Appliance Balance
There are two paths between the host and the two nodes within the
PowerStore appliance for block storage resources access.
Resources are accessed using ALUA/ANA active/optimized or
active/non-optimized paths.
I/O is normally sent on an active/optimized path.
Dynamic node affinity is only available for block storage resources whose
node affinity has not been manually set by means of PSTCLI or REST API.
The system does not need to trespass any volume between nodes.
Performance Policy
The performance policy does not have any impact on system behavior
unless some volumes have been set to Low Performance Policy, and
other volumes are set to Medium or High.
During times of system resource contention, PowerStore devotes
fewer compute resources to volumes with Low Performance Policy.
Reserve the Low policy for volumes that have less-critical performance
needs.
File storage resources are accessed through NAS protocols, such as NFS
and SMB.
A NAS server can provide access to a file system using all NAS
protocols simultaneously, if configured for multiprotocol access.
A single NAS server uses compute resources from only one node of the
PowerStore appliance.
It is recommended to create at least two NAS servers (one on each
node) so that resources from both nodes contribute for the file
performance.
If one PowerStore node is busier than the other, manually move NAS
servers to the peer node to balance the workload.
All the file systems that are served by a given NAS server move
with the NAS server to the other node.
Environment
Instructions
Activity
Overview
The feature is supported only on the Unity XT Hybrid Flash Array (HFA)
models and UnityVSA.
Creating mixed pools reduces the cost of a configuration by reducing
drive counts and using larger capacity drives.
Data requiring the highest level of performance is tiered to Flash, while
data with less activity resides on SAS or NL-SAS drives.
Tiering Policy
FAST VP Tiering policies determine how the data relocation takes place
within the storage pool. Access patterns for all data within a pool are
compared against each other.
Start High, then Auto-Tier is the recommended policy for each
newly created pool. The policy takes advantage of both the Highest Available
Tier and Auto-Tier policies.
Use the Lowest Available Tier policy when cost effectiveness is the
highest priority. With this policy, data is initially placed on the lowest
available tier with capacity.
The default FAST VP policy for all storage objects is “Start High then
Auto-Tier.” This policy places initial allocations for the object in the highest
tier available. FAST VP monitors the activity of the object to determine the
correct placement of data as it ages.
Tiering Process
Performance
FAST Cache is a feature that extends the storage system's existing DRAM
caching capacity.
Overview
FAST Cache can scale up to a larger capacity than the maximum DRAM
Cache capacity.
FAST Cache consists of one or more RAID 1 pairs (1+1) of SAS Flash
2 drives.
Provides both read and write caching.
For reads, the FAST Cache driver copies data off the disks
being accessed into FAST Cache.
For writes, FAST Cache effectively buffers the data waiting to
be written to disk.
Review the supported Drives for FAST Cache.
FAST Cache improves the access to data that is resident in the SAS and
NL-SAS tiers of the pool.
It identifies a 64 KB chunk of data that is accessed frequently. The
system then copies this data temporarily to FAST Cache.
The storage system services any subsequent requests for this data
faster from the FAST Cache.
The process reduces the load on the underlying disks of the LUNs
which will ultimately contain the data.
The data is flushed out of the cache when it is no longer accessed
as frequently as other data.
Subsets of the storage capacity are copied to FAST Cache in 64
KB chunks of granularity.
Components
Policy Engine
FAST Cache components

The FAST Cache Policy Engine is the software which monitors and
manages the I/O flow through FAST Cache.
The Policy Engine keeps statistical information about blocks on the
system and determines what data is a candidate for promotion.
A chunk is marked for promotion when an eligible block is accessed
from spinning drives three times within a short amount of time.
The block is then copied to FAST Cache, and the Memory Map is
updated.
The policies that are defined in the Policy Engine are system-defined
and cannot be modified by the user.
Memory Map
The FAST Cache Memory Map contains information of all 64 KB
blocks of data currently residing in FAST Cache.
Each time a promotion occurs, or a block is replaced in FAST Cache,
the Memory Map is updated.
The Memory Map resides in DRAM memory and on the system drives
to maintain high availability.
When FAST Cache is enabled, SP memory is dynamically allocated to
the FAST Cache Memory Map.
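A simplified model of this promotion flow is sketched below. The three-access threshold and 64 KB chunk size follow the description above, while the time window and data structures are illustrative rather than the product's internal implementation.

```python
# Simplified model of the promotion behavior described above: a 64 KB chunk read
# from spinning drives three times within a short window is promoted into FAST
# Cache and recorded in the memory map. Window and structures are illustrative.
import time
from collections import defaultdict

CHUNK_SIZE = 64 * 1024
PROMOTION_HITS = 3
WINDOW_SECONDS = 60.0            # "short amount of time" (illustrative value)

hit_history = defaultdict(list)  # chunk id -> timestamps of HDD accesses
memory_map = set()               # chunks currently resident in FAST Cache

def access(lba: int, now: float) -> str:
    chunk = (lba * 512) // CHUNK_SIZE
    if chunk in memory_map:
        return "served from FAST Cache"
    hits = [t for t in hit_history[chunk] if now - t <= WINDOW_SECONDS] + [now]
    hit_history[chunk] = hits
    if len(hits) >= PROMOTION_HITS:
        memory_map.add(chunk)    # policy engine promotes the chunk; memory map updated
        return "promoted to FAST Cache"
    return "served from spinning drives"

t0 = time.time()
for i in range(4):
    print(access(lba=2048, now=t0 + i))
```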
Operations
A shrink operation allows the removal of all but two drives from
FAST Cache.
Performance
FAST Cache can improve the performance of one or more hybrid pools
within Dell Unity XT HFA systems.
At a system level, FAST Cache reduces the load on back-end hard drives
by identifying when a chunk of data on a LUN is accessed frequently.
FAST Cache can increase the IOPS achievable from the Dell Unity XT
HFA systems.
As a result, the system has higher CPU utilization since the additional
I/O must be serviced.
Before enabling FAST Cache on additional pools or expanding the size
of an existing FAST Cache, monitor the average system CPU
utilization to determine if the system can accommodate the additional
load.
Dell recommends placing a Flash tier in the hybrid pool before configuring
FAST Cache on the pool.
Enable FAST Cache on the hybrid pool if the workload in that pool is
highly transactional and has a high degree of locality that changes rapidly.
For applications that use larger I/O sizes, have low skew, or do not
change locality quickly, it is more beneficial to increase the size of the
Flash tier rather than enable FAST Cache.
Overview
All Unity XT physical models provide Inline Data Reduction, which lowers
the cost of the storage that is consumed.
Data reduction provides capacity savings by reducing the space
required to store a dataset.
The feature also improves the cost per IOPS through better utilization
of system resources.
The data reduction logic occurs in buffer cache before destaging writes to
disk.
The logic discards zero blocks (zero detection) and recognizes
common patterns (deduplication) based on some of the most popular
workloads such as virtual environments.
Processor cycles are only used for the deduplication and compression
logic.
Support
Operations
Data Reduction helps reduce the Total Cost of Ownership (TCO) of a Dell
Unity XT storage system.
Performance
Data reduction increases the overall CPU load on the system when
storage objects service reads or writes of reducible data and may increase
latency.
Overview
Unity XT snapshots
Use Cases
The feature provides local data protection for the Unity XT platform.
Restore source data to a known point-in-time.
Restoration is instant and does not use extra storage space.
Test and backup operations
Support
Considerations
Performance
Overview
Base LUN family for LUN1 includes all the snapshots and Thin Clones
A Base LUN family is the combination of the Base LUN, and all its
derivative Thin Clones and snapshots.
The Base LUN family includes snapshots and Thin Clones based on
child snapshots of the storage resource or its Thin Clones.
The original or production LUN for a set of derivative snapshots, and
Thin Clones is called a Base LUN.
A snapshot of the LUN, Consistency Group, or VMFS datastore that is
used for the Thin Clone create and refresh operations is called a
source snapshot.
The original parent resource is the original parent datastore or Thin Clone
for the snapshot on which the Thin Clone is based.
Thin Clones are supported on all Dell Unity XT systems and Dell
UnityVSA.
Capabilities
Thin Clone operations: Users can create, refresh, view, modify, expand,
and delete a thin clone.
Performance
With thin clones, users can make space-efficient copies of the production
environment.
Dell recommends including a Flash tier in a hybrid pool where Thin Clones
are active.
Snapshots
A snapshot saves the state of the storage resource, and all the files and
data within it, at a particular point in time.
Supported storage resources are volume, volume group, virtual
machine and file system.
Snapshots can be created manually, or by applying a protection policy.
Snapshots
Thin Clones
PowerStore systems support thin clones of NAS Server, file system, file
system snapshot, volume, volume group, or volume/volume group
snapshot.
Use Cases:
Development and test environments
Parallel processing
Online backup
System deployment
Capabilities
Snapshots
Snapshots are NOT full copies of the original data and should not be
relied on for mirrors or disaster recovery.
Volume snapshots are read-only. You cannot add to, delete from, or
change the contents of a Volume snapshot.
File system snapshots can be refreshed.
A snapshot of a snapshot cannot be taken.
Thin Clones
Performance
Comparison
Any-Any Refresh | From base LUN only | Yes, any Thin Clone can be refreshed from any snapshot.
Remote Replication
Recovery Point Objective (RPO) is the acceptable amount of data,
measured in units of time, that may be lost due to a failure.
Local Replication
The storage resources are replicated from one storage pool to another
within the same storage system.
Performance
Remote Replication
Performance
Synchronous replication transfers data to the remote system over the first
Fibre Channel port on each SP.
When planning to use synchronous replication, it may be appropriate
to reduce the number of host connections on this port.
Overview
D@RE provides protection against data being read from a lost, stolen, or
failed disk drive.
Data is encrypted using 256-bit Advanced Encryption Standard (AES)
encryption algorithms.
Encryption standard is based on Federal Information Processing
Standard (FIPS) 140-2 Level 1 validation.
The encryption helps with compliance with industry or government data
security regulations that require or suggest encryption:
HIPAA (healthcare)
PCI DSS (credit cards)
GLBA (finance)
Considerations
Overview
Data Encryption protects against data tampering and data theft in the
following use cases:
Stolen drive: A drive is stolen from a system, and an attempt is made to
access the data on the drive.
During transit: Attempts to read data during transit of any drive to
another location.
Discarded drive: Attempts to read data even if drive is broken or
discarded.
SEDs
All PowerStore drives ship with D@RE enabled and are FIPS-140-2
Level 2 certified.
Encryption is automatically activated during the initial configuration of
a cluster.
For countries where encryption is prohibited, non-encrypted
systems are available.
KMS
Overview
Unity XT series platform Host I/O Limits is a feature that limits initiator I/O
operations to the Block storage resources: LUNs, snapshots, VMFS
datastores, thin clones, and vVol datastores.
Host I/O Limits can be set on physical or virtual deployments of the Unity
platform.
Host I/O Limit is either enabled or disabled in a Unity XT or UnityVSA
system. All Host I/O Limits are active if the feature is active.
Host I/O Limits are active when policies are created and assigned to
the storage resources. The feature provides system-wide Pause and
Resume controls.
Host I/O limit policies are customizable for absolute I/Os, density-based
I/Os, and burst I/Os.
Only one I/O limit policy can be applied to an individual LUN or a LUN that
is a member of a consistency group. When an I/O limit policy is associated
with multiple LUNs, it can be either shared or nonshared.
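The sketch below illustrates how the policy types described here could be evaluated for a single LUN. The density-based formula (IOPS per GB multiplied by the resource size) and the even split shown for a shared policy are simplifications for illustration, not the exact product behavior.

```python
# Illustrative evaluation of Host I/O Limit policy types for one LUN.
# "Shared" is simplified to an even split of one budget across the attached LUNs.
def effective_limit_iops(policy: dict, lun_size_gb: float, luns_sharing: int = 1) -> float:
    if policy["type"] == "absolute":
        limit = policy["max_iops"]
    elif policy["type"] == "density":
        limit = policy["iops_per_gb"] * lun_size_gb   # scales with the resource size
    else:
        raise ValueError("unsupported policy type in this sketch")
    if policy.get("shared", False):
        return limit / luns_sharing                   # shared: one budget across the LUNs
    return limit                                      # non-shared: each LUN gets the full limit

print(effective_limit_iops({"type": "absolute", "max_iops": 5000}, lun_size_gb=500))
print(effective_limit_iops({"type": "density", "iops_per_gb": 10}, lun_size_gb=500))
print(effective_limit_iops({"type": "absolute", "max_iops": 5000, "shared": True},
                           lun_size_gb=500, luns_sharing=4))
```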
Performance
Application Considerations
VMware Datastores
Only choose a different host I/O size if all applications that are
hosted in the NFS Datastore primarily use the selected I/O size.
When configuring vVol (File) datastores, it is recommended to create at
least two vVol-enabled NAS Servers, one on SPA and one on SPB.
AppsON
Transactional
Sequential
For workloads which require high bandwidth for sequential streaming data,
it may be beneficial to use thick storage objects in Dell Unity XT systems.
Thick storage objects fully allocate the capacity before application use,
and in a consistent manner, which can improve subsequent sequential
access.
Note that thick storage objects are not compatible with most features
(Data Reduction, Snapshots, Asynchronous Replication), so only use
thick storage objects if these features will not be utilized.
Host Configurations
Host operating systems may not apply the appropriate settings when
mounting PowerStore volumes or configuring access to Unity XT block
LUNs.
Host alignment for Unity XT block LUNs only needs to be performed for host
operating systems that still use a 63-block disk header. If alignment is required,
perform the operation using a host-based method, and align with a 1 MB offset.
When a host is attached to a PowerStore block volume, the host can use
this volume as a raw device, or it can create a local file system on the
volume first.
VMware Integration
Multipathing
iSCSI
When configuring LUNs on ESXi that are accessed via iSCSI, disable
“DelayedACK” on ESXi.
Performance Planning
Different Unity XT models have different CPU speeds and core counts,
which help to achieve different I/O performance potentials.
In general, the IOPS capability of the Unity models scales linearly from
Unity XT 380 upwards.
PowerStore models have different CPU core counts and speeds to help to
achieve different I/O performance potentials.
1. What are some factors that may prevent a storage system solution
from delivering its full potential?
a. Network not fast enough.
b. Competing workloads.
c. Number of clients and their power.
d. Load not balanced across all the front-end ports on the array.
e. All of these.
The type of data and collection method vary according to the different
solutions.
Collector
The Live Optics collector also allows array-based performance data to be
uploaded to the Live Optics portal to create projects for review. The
Windows collector, called Optical Prime, runs on a Windows operating
system (desktop or server version). It collects data from local and remote
Windows computers, from remote UNIX or Linux computers, including
XenServer and KVM, and from VMware vCenter.
Project
Web Portal
Live Optics Dashboard is the landing page that shows all tools and
projects.
Select any of the icons from the left navigation panel or right window to
access individual processes.
For example, Download Collectors prompts you to select the
operating system of the system where you want to run the Live
Optics collector.
Go to the Live Optics login page to access Live Optics.
Another option, Request Capture, sends an email to a customer
requesting that they download the collector.
Optical Prime view of collection options with Server & Virtualization selected
Once finished, projects are available from the Live Optics Dashboard.
Each project is assigned a Project ID that can be used as input to the Dell
Midrange or PowerStore Sizer tools. Project details can be viewed,
deleted, shared, or exported by selecting the project name. You can view
environmental and performance details from the interface.
Live Optics supports data collection for Dell Unity, PowerStore, and other
storage platforms.
Once a storage array is selected, supply a DNS or IP address,
username, and password credentials for authentication.
By default, collections are done no more than 1 week before the
current date but are configurable.
The collection downloads performance archive files from the array.
Once downloaded, the files are uploaded to the web service under the
project name.
Click the project name to share the project, download the PowerPoint
presentation, or delete the project.
Characterization of Workloads
Workload Attributes:
I/O Size
Read vs Write
Random vs Sequential
Working Set Size
Skew
Concurrency
Workload characteristics dramatically affect performance.
Understand the application and its workload before attempting to size a
storage system and make performance estimates.
I/O Size
The I/O Size (also called I/O Request or Transfer Size) is the amount of
data in each I/O transaction request by a host.
Some typical examples of I/O sizes are:
Read/Write
Sequential reads that find their data in the array cache consume
the least amount of resources and have the highest throughput.
Reads not found in cache, which are normal with random access,
have much lower throughput and higher response times. This is
because the data must be retrieved from disk.
Writes use more resources and are slower than reads because protection
is usually added to new data.
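A worked example of this write overhead is shown below, using the commonly cited write-penalty rules of thumb for each RAID type to translate a host workload into back-end drive operations.

```python
# Worked example of why writes cost more than reads at the drive level: parity or
# mirror protection turns one host write into several back-end drive operations.
# Penalty values are the commonly used rules of thumb for these RAID types.
WRITE_PENALTY = {"RAID 1/0": 2, "RAID 5": 4, "RAID 6": 6}

def backend_iops(host_iops: int, read_pct: float, raid_type: str) -> float:
    reads = host_iops * read_pct
    writes = host_iops * (1 - read_pct)
    return reads + writes * WRITE_PENALTY[raid_type]

# 10,000 host IOPS at 70% read / 30% write:
for raid in WRITE_PENALTY:
    print(raid, backend_iops(10_000, 0.70, raid))
# RAID 1/0 -> 13,000; RAID 5 -> 19,000; RAID 6 -> 25,000 back-end IOPS
```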
Random vs Sequential
Host applications fall under one of two access patterns: Random and
Sequential.
Random access is exemplified by Online Transaction Processing
(OLTP) such as a database, where data reads and modifications are
made in a scattered manner across the entire dataset.
A random workload is a workload where reads or writes are
distributed throughout the relevant address space.
Random I/O at the drive level requires the drive to seek data across
the rotating platters (in HDDs), which involves a relatively slow,
mechanical head movement.
Sequential access refers to successive reads or writes that are
logically contiguous within the relevant address space.
Working Set Size is the portion of the total data space an application
uses at a certain time, also known as active data.
The working set of an application is defined as the total address space
that is traversed, either written or read, in some finite, short time during
its operation.
Skew
Concurrency
Number of threads compared with the number of I/Os per second for a multi-threaded
random write workload
Environment
A company must provide the infrastructure for a Microsoft SQL Server with
a use case of AI in their data center. The applications are used for
research and require high bandwidth. A Dell PowerStore is being
considered to address the storage requirement.
Notes
Without adequate tools, sizing a storage system can be difficult. You must:
Characterize the expected client workload.
Calculate any additional operations that are needed for metadata that
is based on capacity requirements.
Consider RAID overhead for drives.
Calculate the aggregate performance of all system hardware
components.
The System Designer path from the Midrange Sizer home page:
Enables custom configurations for All-Flash or Hybrid arrays.
Provides an interface from which configurations are created by
selecting the model and custom drives.
Displays a cabinet image on the right that updates to match the pool
information entered for a configuration.
Uses workload types, block size, IOPS requirements, and storage
capacity requirements [number of drives and data reduction ratio] to
build a solution.
First select the System Type (All-Flash or Hybrid). Note that changing
the system type after workloads have been added requires the
configuration to be reset. Also, when All-Flash is selected, a Compression
Ratio combo box is shown.
Application Details
Advanced Options
Every workload has unique inputs and rules for implementation that
are based on the best practices and recommendations for the
system being sized.
The System Engineer selects the application workload: Exchange, Oracle,
SQL, File share, VDI.
The sizer tool creates tiered pools to meet the requirements. The system
designer engine estimates the performance, and a report shows the
system details, IOPS, and saturation.
PowerSizer dashboard showing the PowerStore selection for Quick Size option.
Single-SP
Dual-SP
Drive Types
The CNA controller supports both Ethernet (iSCSI or File) or the Fibre
Channel protocols depending on which SFP is inserted.
Each SP has two CNA ports supporting hot swappable Small Form-Factor
Pluggable SFP+ optical connectors.
Ethernet iSCSI/File
For block and file protocols, users can configure a 1 GbE SFP or a 10
Gb/s SFP. When an Ethernet SFP is inserted in the CNA ports, the ports
are persisted as the Ethernet protocol at first system boot, and cannot be
changed.
The SFPs are hot swappable between 1 GbE and 10 Gb/s SFPs.
If the CNA is initially persisted with 10 Gb/s SFPs, the customer can
downgrade to 1 Gb/s SFP, if necessary.
CNA ports can only be configured as a single protocol across both SPs on
the system. For example, if there are two CNA ports per SP, both must be
configured to use either a NIC or Fibre Channel connection.
Fibre Channel
For Fibre Channel connectivity, CNA ports can be configured with either
multimode or single mode SFPs. Single-mode SFPs support 16 Gb/s only.
When a Fibre Channel SFP is inserted in the CNA ports, the ports are
persisted as the FC protocol at first system boot. The setting cannot be
changed.
4-Port BaseT
Only Dell Technologies certified technicians can add I/O modules to empty
slots after the system is set up. Previously installed I/O modules are
Customer Replaceable Units (CRUs).
PowerStore back panel showing 2-port 100 GbE I/O module installed in Slot 0.
There may be one or more drive partnership groups per dynamic pool.
Every dynamic pool contains at least one drive partnership group.
Each drive is member of only one drive partnership group.
Drive partnership groups are built when a dynamic pool is created or
expanded.
When a drive partnership group for a particular drive type is full, a new
group is started.
The new group must have the minimum number of drives for the stripe
width plus hot spare capacity.
The drive count must fulfill the RAID stripe width plus spare space
reservation set for a drive type.
RAID 1/0

RAID width | Total drive count
2+2 | 5 or 6
3+3 | 7 or 8
4+4 | 9 or more

RAID 5

RAID width | Total drive count
4+1 | 6 to 9
8+1 | 10 to 13
12+1 | 14 or more

RAID 6

RAID width | Total drive count
4+2 | 7 to 8
6+2 | 9 to 10
8+2 | 11 to 12
10+2 | 13 to 14
12+2 | 15 to 16
14+2 | 17 or more
The table shows each Unity XT hybrid model, the SAS Flash 2 drives
supported for that model, the maximum FAST Cache capacities and the
total Cache.
In either case, the Data Reduction algorithm occurs before the data is
written to the drives within the Pool. During the Data Reduction process,
multiple blocks are aggregated together and sent through the algorithm.
After determining if savings can be achieved or data must be written to
disk, space within the Pool is allocated if needed, and the data is written to
the drives.
Process:
1. System write cache sends data to the Data Reduction algorithm during
proactive cleaning or flushing.
2. Data Reduction logic determines any savings.
3. Space is allocated in the storage resource for the dataset if needed,
and the data is sent to the disk.
The example displays the behavior of the Data Reduction algorithm when
Advanced Deduplication is disabled.
One-Directional
Bi-directional
One-to-Many
Many-to-One
Use cases:
Expands file replication protection domain
Increases resilience for file datasets
Expands data access
If the option is enabled, and deduplication does not detect a pattern, the
data is passed through the Advanced Deduplication algorithm.
CEPA
A mechanism in which applications can register to receive event
notification and context from PowerStore systems. CEPA runs on
Windows or Linux. CEPA delivers to the application both event notification
and associated context in one message.
Drive Extent
A drive extent is a portion of a drive in a dynamic pool. Drive extents are
either used as a single position of a RAID extent or can be used as spare
space. The size of a drive extent is consistent across drive technologies –
drive types.
KMIP
KMIP is a communication protocol that defines message formats for the
manipulation of cryptographic keys on a key management server.
Midrange Sizer
Midrange Sizer is an SSO (Single Sign-On), HTML5-based interface that
provides Dell Unity systems design capabilities with integrated best
practices and ordering integration.
PACO
Proactive Copy. An operation that proactively copies data from a drive that
is predicted to fail (for example, nearing its end of life) to spare capacity
before the drive is faulted.
RAID Extent
A collection of drive extents. The selected RAID type and the set RAID
width determine the number of drive extents within a RAID extent.
Each RAID extent contains a single drive extent from each of a number of
drives equal to the RAID width.
RAID extents can only be part of a single RAID Group and can never span
across drive partnership groups.
Spare Space
Spare space refers to drive extents in a drive extent pool that are not
associated with a RAID Group. Spare space is used to rebuild a failed drive
in the drive extent pool.
Thin Provisioning
Thin provisioning allows multiple storage resources to subscribe to a
common storage capacity. The storage system allocates an initial quantity
of storage to the storage resource. This provisioned size represents the
maximum capacity to which the storage resource can grow without being
increased. Volumes can be between 1 MB and 256 TB in size.