
MIDRANGE STORAGE PERFORMANCE PLANNING

PARTICIPANT GUIDE

© Copyright 2023 Dell Inc.


Table of Contents

Midrange Storage Performance Planning 8


Introduction 8
Best Practices Guidelines for Dell Midrange Storage Systems 9

Performance Essential Guidelines 10


System Configuration Essential Guidelines 10
Maximize Flash Drive Capacity 11
Distribute Workloads 14
Simplify Configuration 16
Design for Resilience 18
Maintain Storage System OE Up to Date 19

Knowledge Check: Performance Essential Guidelines 21


Knowledge Check: Maximize Flash Drive Capacity 21
Knowledge Check: Simplify Configuration 21

System Configuration Considerations 23


Dell Unity XT Series Platform Hardware Capabilities 23
Dell Unity XT Read and Write Bandwidth 24
Dell Unity XT Hardware Capability Guidelines 25
Dell Unity XT CPU Utilization 26
Dell UnityVSA 29
PowerStore Deployment 30
PowerStore Relative Performance 31
PowerStore Cluster 33

Knowledge Check: System Configuration Considerations 34


Knowledge Check: System Configuration 34

Back-end Connectivity 35
Dell Unity XT Onboard SAS Back-End Cabling 35
Dell Unity XT SAS I/O Module Back-End Cabling 38



PowerStore Back-End SAS Cabling 42
PowerStore Back-End NVMe Cabling 44

Knowledge Check: Back-end Connectivity 48


Knowledge Check: Back-End Connectivity 48

Drive Configuration 49
Dell Unity XT Disk Processor Enclosure (DPE) 49
Dell Unity XT Drive Slot Layout 49
Dell Unity XT System Drives 51
Dell Unity XT Rotating Drive Support 53
Dell Unity XT SAS Flash Drive Support 55
Dell Unity XT SSD Wear Leveling 57
Mixing Drive Types in Dell Unity XT 58
Dell Unity XT Supported DAEs 59
Dell Unity XT Maximum Recommended Drive Types and IOPS 60
Dell Unity XT Hot Spares 61
PowerStore Base Enclosure 62
PowerStore Base Enclosure Drive Slot Layouts 63
PowerStore Drive Offerings 65
PowerStore Supported Expansion Enclosures 66
PowerStore Appliances and Drive Configurations 67

Knowledge Check: Drive Configuration 69


Knowledge Check: Drive Configuration - Unity XT 69
Knowledge Check: Drive Configuration - PowerStore 69

Network Connectivity Considerations 70


General Network Performance and High Availability 70
Dell Unity XT Front-End Connectivity 73
Dell Unity XT Front-End Block Connectivity Performance Guidelines 74
Dell Unity XT Front-End File Connectivity Performance Guidelines 77
Dell Unity XT NAS Server Multi-tenancy 79
PowerStore Front-End Connectivity 83



PowerStore Front-End Connectivity Performance Guidelines - Fibre Channel 86
PowerStore Front-End Connectivity Performance Guidelines - Ethernet 87

Knowledge Check: Network Connectivity Considerations 91


Knowledge Check: Network Connectivity Considerations 91

Storage Configuration Recommendations 92


General Recommendations for Dell Unity XT Storage Pools 92
Dell Unity XT Storage Pool Capacity 92
Dell Unity XT RAID Configurations 94
Dell Unity XT Dynamic Pools 95
Dell Unity XT Dynamic Pools Expansion 100
Mixing Drive Sizes within Dell Unity XT Dynamic Pools 104
Dell Unity XT Traditional Pools 107
Dell Unity XT All-Flash and Hybrid Pools Considerations 110
Dell Unity XT Block Storage Resources 112
Dell Unity XT Storage Objects Recommendations 113
PowerStore Block and File Storage Resources 114
PowerStore Block Storage Resources Recommendations 118
PowerStore File Storage Resources Recommendations 119

Knowledge Check: Storage Configuration Recommendations 121


Knowledge Check: Storage Configuration Recommendations 121
Knowledge Check: Multi-Tier Dynamic Pool 121

Data Services and Array Features 123


Dell Unity XT Features: FAST VP 123
Dell Unity XT Features: FAST Cache 127
Dell Unity XT Features: Data Reduction 131
PowerStore Features: Data Efficiency 134
Dell Unity XT Features: Snapshots 135
Dell Unity XT Features: Thin Clones 138
PowerStore Features: Snapshots and Thin Clones 141
Dell Unity XT Features: Asynchronous Replication 145



Dell Unity XT Features: Synchronous Replication 149
Dell Unity XT Data at Rest Encryption (D@RE) 150
PowerStore Data Encryption 153
Dell Unity XT Features: Host I/O Limits 156
Application Considerations 158

Knowledge Check: Data Services and Array Features 161


Knowledge Check: Data Services and Array Features 161

External Host Considerations 162


Host Configurations 162
Host File Systems 163
VMware Integration 164

Knowledge Check: External Host Considerations 165


Knowledge Check: External Host Considerations 165
Plan for Customer Performance Needs 166

Performance Planning 167


Planning for Performance Considerations 167
Designing a Unity XT Solution for Performance 167
Designing a PowerStore Solution for Performance 169
Understanding Environmental Limits 170

Knowledge Check: Planning for Performance 172


Knowledge Check: Environmental Limits 172

Identifying the Environment 173


Identifying the Environment Considerations 173
Gathering Workload I/O Characteristics 174
Live Optics Dashboard 176
Live Optics Collection Options for Servers 177
Live Optics Collection for Storage 178
Sample Live Optics Hypervisor Profile 179
Sample Live Optics Storage Profile 181



Knowledge Check: Identifying the Environment 183
Knowledge Check: Identifying the Environment Using Live Optics 183

Characterization of Workloads 184


Analysis of Workload Key Performance Metrics 184
Understanding Workload Attributes 185

Knowledge Check: Characterization of Workloads 192


Knowledge Check: Characterization of Workloads 192

Supported Sizing Tools 194


Creating Dell Unity XT System Design with Midrange Sizer 194
Midrange Sizer Data Requirements 195
Midrange Sizer – System Designer 196
Midrange Sizer – Live Optics/NAR Path 198
Midrange Sizer – Simple Performance 199
Midrange Sizer – Application Oriented 201
Midrange Sizer – Advanced Performance 202
Midrange Sizer Deliverables 203
Sizing a PowerStore Solution: PowerSizer 205

Knowledge Check: Supported Sizing Tools 207


Knowledge Check: Supported Sizing Tools 207

Appendix 209



Midrange Storage Performance Planning

Introduction

This course introduces the planning considerations for sizing a midrange
storage product solution.

 Essential guidelines for improving system performance, efficiency, and
security.
 PowerStore and Dell Unity XT system configuration considerations.
 PowerStore and Dell Unity XT back-end connectivity and drive
configuration considerations.
 PowerStore and Dell Unity XT front-end connectivity, external hosts,
and storage configuration recommendations.
 PowerStore and Dell Unity XT data services and array features
recommendations.
 Considerations for planning and designing a midrange solution based
on customer performance needs.


Best Practices Guidelines for Dell Midrange Storage Systems


Performance Essential Guidelines

System Configuration Essential Guidelines

A solution architect must follow the best practices for each product when
planning and sizing a midrange storage solution with performance in mind.

 Essential guidelines provide the knowledge to enable good


performance on midrange storage products.
 These guidelines include specific recommendations to plan and
configure a robust, high performing storage system.

At the highest level, design for optimal performance follows these few
simple rules:

The main principles for designing a midrange storage system for performance.

Deep Dive: Review the latest best practices guide available


at the Dell Technologies Info Hub or each product support
page.
 Dell EMC Unity: Best Practices Guide
 Dell PowerStore: Best Practices Guide


Maximize Flash Drive Capacity

Overview

Dell Unity XT arrays are based on the Intel family of multicore processors.
 The processors provide up to 16 cores capable of high levels of
storage performance.
 The architecture is designed to support the latest flash technologies
such as Triple Level Cell (TLC).
 The systems come in two variants: All-Flash Array (AFA) and Hybrid-
Flash Array (HFA) models.

Dell Unity XT systems are offered as AFA or HFA models.

To realize the performance potential, Dell Technologies recommends the
use of flash drives in all Unity XT systems.
 The best way to harness the power of flash is to ensure that as much of
the dataset as possible resides on SSD drives.
 In Unity XT HFA systems, provision a flash drive tier in multi-tiered
storage pools.


 The minimum recommended flash drive capacity is about 10% of the
pool capacity.
 To enable data reduction, the flash capacity must equal or exceed
10% of the total capacity of the pool.
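The 10% guideline above can be expressed as a one-line calculation. This is an illustrative sketch of our own — the function name and units are not part of any Dell tool:

```python
def min_flash_capacity_tb(total_pool_tb):
    """Minimum flash capacity (TB) for a hybrid pool, per the 10% guideline.

    To enable data reduction, flash capacity must equal or exceed
    10% of the total pool capacity.
    """
    return total_pool_tb * 0.10

# Example: a 200 TB hybrid pool calls for at least 20 TB of flash capacity.
needed = min_flash_capacity_tb(200)
print(f"{needed:.1f} TB of flash required")
```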

Capacity

Dell also recommends provisioning as much SAS-Flash drive capacity as
possible.
 Always use the largest flash drives appropriate for your solution.
 Using larger drives reduces the total number of drives needed.
 SSD drives are available with capacities up to 15.36 TB.

Unity XT supports SAS-Flash 2, SAS-Flash 3, and SAS-Flash 4 drives.

 Fewer drives typically mean lower performance.
 In terms of writes per day (WPD), the higher the capacity of a flash
drive, the lower its endurance rating.
 Try to use the drive size that provides a balance between physical
capacity and performance (IOPS).
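The WPD trade-off can be made concrete using the common industry convention that rated endurance is capacity × WPD × warranty days. This is a generic illustration — the five-year period and the drive examples are assumptions, not figures from this guide:

```python
def lifetime_writes_tb(capacity_tb, wpd, years=5.0):
    """Approximate total data (TB) a drive is rated to absorb over its life.

    Uses the common convention: endurance = capacity x writes-per-day x days.
    """
    return capacity_tb * wpd * 365 * years

# A smaller 3 WPD drive vs. a larger 1 WPD drive over an assumed five years:
small = lifetime_writes_tb(1.92, 3)   # higher daily endurance, less capacity
large = lifetime_writes_tb(15.36, 1)  # lower WPD rating, more capacity
print(f"1.92 TB @ 3 WPD: {small:,.0f} TB written")
print(f"15.36 TB @ 1 WPD: {large:,.0f} TB written")
```

Note that the larger, lower-WPD drive can still absorb more total data over its life; the WPD rating matters most for sustained write-heavy workloads.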


Considerations

The 12 Gb SAS SSD drives are in a 2.5-inch form factor with 520
bytes/sector and are supported on:
 25-drive Disk Processor Enclosure (DPE)
 25-drive Disk Array Enclosure (DAE)

Some restrictions apply to the SAS-Flash drive types:


 SAS-Flash 2 drives are supported for the configuration of FAST
Cache, and in multi-tiered or All-Flash pools.
 These drives are rated at 10 WPD.
 SAS-Flash 3 drives are supported in multi-tiered and All-Flash pools.
The drives cannot be used in FAST Cache implementations.
 These drives are rated at 3 WPD.
 SAS-Flash 4 drives can only be used in All-Flash pools. The drives
cannot be used for FAST Cache implementations.
 In the hybrid models, only the 7.68 TB SAS-Flash 4 drive is
supported.
 SAS-Flash 4 drives can be mixed with other SAS-Flash drive types
in a pool.
 These drives are rated at 1 WPD.
 SAS-Flash 4 drives can be used in the system drive slots.

AFA models do not support FAST Cache or FAST VP.

Important: For compatibility information, locate the latest


Dell Unity XT Storage Systems - Drive and OE Compatibility
Matrix at the Dell Support website.


Distribute Workloads

To maximize performance and availability, keep a few basic concepts in
mind when planning a midrange solution configuration.
 Eliminate individual bottlenecks by involving all hardware resources.
 Always try to distribute application workloads across multiple
resources.

Front-end Connectivity

For host connectivity, use all available front-end ports symmetrically


across storage processors.
 Spreading connectivity across ports optimizes host-to-storage access.
 Dell Unity storage systems provide multiple options for connecting
hosts on the front end.
 Unity XT 380/F DPE includes two onboard Ethernet ports, and two
CNA ports for FC or iSCSI connectivity on each SP.
 Unity XT 480/F DPE and higher models include a 4-port mezzcard
on each SP.
 All the models have two slots per SP for optional I/O modules.
 PowerStore systems also provide multiple front-end connectivity
options.

 PS 500 includes a 4-port card (MEZZ 0) and a 2-port card (MEZZ


1) per node.
 PS 1000-9000 and PS 1200-9200 models include a 4-port card
(MEZZ 0) in each node.
 All the models have two slots per node for optional I/O modules.


Back-end Connectivity

When configuring drives, spread all the flash drives across the available
back-end buses and DAEs.
 All Dell Unity XT arrays provide two integrated back-end bus
connections.
 Unity XT 480/F and higher models also support a four-port SAS
expansion I/O module.

Expand PowerStore SSD-based systems with NVMe SSD-based


expansion shelves.
 PS 500 models include a 4-port 25 GbE mezzcard with support for
NVMe connectivity.
 PS 1200-9200 models use an embedded module v2, 100 GbE 2-port
card on MEZZ 1.
 Both cards connect to the ENS24 expansion enclosure that supports
the NVMe SSD drives.

Workloads

For Unity XT systems:


 Build storage pools with many drives to service the I/O workload.
 This action results in more spindles working concurrently, which can
boost performance in HFA models.
 Distribute LUNs and NAS servers symmetrically across storage
processors.

PowerStore systems use the Dynamic Resiliency Engine (DRE) to


manage drives in the system.
 A storage administrator does not need to perform any manual drive
configuration.
 All drives are automatically used to provide storage capacity with
improved resource utilization.


PowerStore systems set with dynamic node affinity automatically


rebalance block storage resources between nodes.
 The feature defines the appliance node to be used as the
active/optimized path for host I/O.
 Node affinity maintains relatively consistent utilization, latency, and
performance between both nodes.

A single NAS server uses compute resources from only one node of the
PowerStore appliance.
 If one PowerStore node is busier than the other, manually move NAS
servers to the peer node to balance the workload.

Deep Dive: For more information about the front-end and


back-end connectivity options, review the Dell Unity Simple
Support Matrix or the PowerStore Simple Support Matrix.
Download the latest revisions of the documents from E-Lab
Navigator.

Simplify Configuration

Designing for simplicity increases system flexibility, and leads to higher,


more consistent performance.
 Always implement a well-planned, simple, clean design for a midrange
system configuration.


System storage

When designing a Unity XT system solution, consider the following:
 Fewer storage pools mean that more drives can be devoted to any
single workload in the pool, possibly improving performance.
 Standardize on a single drive capacity per tier, rather than
provisioning the system with multiple drive sizes for different
purposes. This approach:
 Reduces the spare space reservation and makes it possible to
build larger pools.
 Enables greater flexibility for future reconfiguration, if needed.

Dell Technologies recommends that all drives in a PowerStore system
are the same size, which can maximize the usable capacity from each
drive. To provide the greatest usable capacity from the same number of
drives, a PowerStore must be initially installed with a minimum of:
 Ten drives for single-drive failure tolerance.
 Nineteen drives for double-drive failure tolerance.
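The minimum counts above can be captured in a small lookup. This is a sketch of our own, not a PowerStore API:

```python
def min_initial_drives(failure_tolerance):
    """Minimum recommended initial drive count for a PowerStore system.

    DRE is configured at initial installation for single (1) or
    double (2) drive failure tolerance.
    """
    minimums = {1: 10, 2: 19}
    if failure_tolerance not in minimums:
        raise ValueError("failure tolerance must be 1 or 2")
    return minimums[failure_tolerance]

print(min_initial_drives(1))  # 10
print(min_initial_drives(2))  # 19
```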


System defaults

When in doubt about the impact of certain configuration options,
selecting the recommended default is typically the best choice.
 Dell midrange storage solutions are built to deliver optimal
performance "out of the box".
 Dell Unity XT uses Multicore RAID and Multicore Cache, designed to
scale effectively across multiple CPU cores.
 Multicore Cache is a separate component within the stack that ties
all the other components together.
 There is no need to "tune" the cache as was the case in legacy
Dell EMC storage arrays.

Design for Resilience

Design a midrange storage solution for resilience, considering that at


some point, hardware can fail.
 Midrange storage solutions architecture is built to continue providing
storage services under failure conditions.
 Consider the hardware capabilities and limitations of different
components of the midrange storage system.
 Understanding the limitations is key to designing a solution that
continues to provide good performance under such conditions.

Both midrange storage architectures provide high availability with
redundant power supplies, front-end and back-end ports, and SPs or
nodes.

In Unity XT systems, storage pools are configured with RAID protection.


 RAID levels determine the performance characteristics and protection.


 The stripe width determines the pool fault characteristics.


 Spare space reservation (dynamic pools) or hot spares (traditional
pools) are used to replace a faulted drive.

In PowerStore systems, DRE groups the drives into resiliency sets to


protect against drive failure.
 Spare space for rebuilds is automatically distributed across all drives
within each resiliency set.
 At initial installation of the PowerStore system, DRE can be configured
with either single or double-drive failure tolerance.
 The configuration enables faster rebuilds after a drive failure.

Go to: The Configure Storage Pools manual for more details


on RAID configuration. For more information about the DRE,
go to the Dell PowerStore: Clustering and High Availability
whitepaper.

Maintain Storage System OE Up to Date

Keep the midrange storage solution updated to the latest released
Operating Environment (OE) version.

Dell Technologies recommends running the latest OE version on the Dell


Unity XT systems.
 Dell regularly updates the Unity XT Operating Environment to improve
performance, enhance functionality, and provide new features.
 When performing an OE software upgrade:

 Verify that the software update file is valid and not corrupt using the
SHA256 checksum.
 Upgrade to the latest drive firmware available following the software
upgrade.
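A checksum verification might look like the following on a Linux host. The file here is a stand-in for illustration — in practice, use the actual OE upgrade bundle and compare against the SHA256 value published on the Dell Support download page:

```shell
# Create a stand-in for the downloaded upgrade bundle (illustration only).
printf 'example upgrade payload' > upgrade.bin

# Compute the SHA256 digest of the downloaded file.
sha256sum upgrade.bin

# Automated comparison: sha256sum -c reads "<digest>  <filename>" lines
# and reports OK only when the digests match. Normally the .sha256 file
# holds the digest published by the vendor.
sha256sum upgrade.bin > upgrade.bin.sha256
sha256sum -c upgrade.bin.sha256
```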


New PowerStoreOS releases are applied to the system using a


nondisruptive upgrade process.
 During parts of the upgrade process, half of the system hardware
resources are unavailable.
 Dell Technologies recommends that the upgrades are performed
during planned maintenance windows.

 Alternatively, perform upgrades when the system is less busy to
minimize the impact to clients.

For both storage systems, it is recommended to run a preupgrade health
check before the software upgrade to mitigate and resolve any issues.

Deep Dive: Review the Upgrading Unity Software and


Software Upgrade Guide available at the Dell Technologies
Info Hub or each product support page.

Knowledge Check: Performance Essential Guidelines

Knowledge Check: Maximize Flash Drive Capacity

Environment

A company needs to provide infrastructure for an Oracle database
application deployment in their data center. A Dell Unity XT 680 system
must be scaled up with a 25-slot DAE to address the storage requirements
and the expected data growth. To take advantage of the Flash power
within the project budget, additional SAS-Flash 3 drives are considered to
build the storage pool and accommodate Oracle datafiles.

Instructions

1. Go to the Dell Support web page for the specified model.


2. Locate the Dell Unity XT Storage Systems - Drive and OE
Compatibility Matrix document.
3. Review the supported SAS Flash 3 drives for the specified model.

Activity

Based on the document information, answer the question.

Knowledge Check: Simplify Configuration

1. A solutions architect is sizing a PowerStore system with drives of the


same size. The solution must be designed to have the greatest usable
capacity from the same number of drives and resilient up to two drives
failing. What is the minimum recommended number of drives the
system must be initially installed with?
a. 10
b. 19


c. 32


System Configuration Considerations

Dell Unity XT Series Platform Hardware Capabilities

Dell Unity XT dual-active architecture provides efficient use of all available


hardware resources and optimal performance.

 Dell Unity XT systems support dual Storage Processors with mirrored


write cache.
 Each Storage Processor (SP) contains both the primary cached data for
its own LUNs and a secondary copy of the cache for its peer SP.
 Each SP contains a single CMI channel (380/380F) or two CMI channels
(480/480F and higher models) for inter-SP communications.

Dell Unity XT storage system

The SP enclosure also provides component redundancy for high
availability. Each SP enclosure includes:
 Single-socket Intel CPU with six cores (380/380F), or dual-socket Intel
CPUs (480/480F and higher models) with core counts between 16 and 32
 Dual inline memory modules (DIMMs)
 Internal battery backup module
 Single M.2 device (380/380F) or two M.2 devices (480/480F and
higher models)
 Fan modules
 Power supplies


Deep Dive: For more information about the hardware


capabilities, review the Simple Support Matrix document.
The document is available at the Dell Unity Info Hub:
Product documents and Information KBA.

Dell Unity XT Read and Write Bandwidth

The CMI channels are used for communication between the Dell Unity XT
storage processors.

The bus effectively controls how much write bandwidth can be ingested by
the system.

CMI is mainly used for mirroring writes between the SPs.
 The channels are also used to pass data from nonpreferred to
preferred paths.
 Host I/O to a LUN arriving on a nonpreferred path crosses over the
CMI and is processed by the peer SP.
Consider the table with an analysis of the expected read and write
bandwidth (sequential I/O).

 The maximum capability for the Dell Unity XT 380/F is up to ~5.5 GB/s.
 In Unity XT 480/F and higher models, the maximum write bandwidth
can be up to ~9.3 GB/s, accounting for parity writes.

 Typically the bottleneck becomes the onboard SAS chip (if SAS I/O
module is not used).

Configuration                                       Max. Read    Max. Write
                                                    Bandwidth    Bandwidth
Unity XT 380/F with only DPE drives                 7.4 GB/s     Up to 5.5 GB/s *
Unity XT 480/F (or higher model) with only
DPE drives                                          9.3 GB/s     Up to 8.4 GB/s
Unity XT 380/F with DPE drives and drives
on SAS Bus 1                                        10.5 GB/s    Up to 5.5 GB/s *
Unity XT 480/F (or higher model) with DPE
drives and drives on SAS Bus 0 and/or Bus 1         10.5 GB/s    Up to 9.3 GB/s
Unity XT 480/F (or higher model) with SAS
I/O Module                                          Depends on front-end ports

Table showing the maximum read and write bandwidth for Dell Unity XT 380/F and Dell
Unity XT 480/F (or higher models).

*On Dell Unity XT 380/F systems, performance is limited by the single
CMI channel per SP.

Dell Unity XT Hardware Capability Guidelines

Dell Unity XT systems are designed to achieve the maximum performance


possible from the onboard multicore processors.
 The architecture however results in nonlinear CPU utilization as the
workload scales.
 Models with more CPU power can deliver higher performance reducing
the IOPS bottleneck.
 Models with larger buffer caches per SP may improve performance
with Unity features such as Data Reduction.

 Buffer cache helps cache metadata I/Os and reduces back-end
reads and writes.
 CPU cycles and sufficient memory are essential for running the data
reduction algorithms as writes come into the array.


Dell does not recommend continuously operating a hardware resource


near its maximum potential.
 When utilization is below the maximum level, average response times
are better.
 For example, a Dell Unity XT system reporting 50% CPU utilization is
capable of providing more than three times the current workload.
 The system can handle activity bursts without becoming overloaded,
and performance may be maintained during hardware failure
scenarios.

 Brief spikes of high utilization are normal and expected on any


system.

Deep Dive: For more information about the hardware


specifications for each model, review the Unity XT Series
Specification document. The document is available at the
Dell Unity Info Hub: Product documents and Information
KBA.

Dell Unity XT CPU Utilization

Dell Technologies recommends that workloads are balanced across the


two SPs, such that CPU utilization is roughly equivalent on each SP.

These tabs compare the average sustained levels of CPU utilization over
long periods of time.
 For each operating range, an analysis of the system workload handling
is provided.
 The expected system behavior is explained in the case that a single
SP must service the entire workload.

 For example, coordinated reboots during upgrades, or a single SP


failure.


Low Utilization

 Reported CPU Utilization: Below 50%


 Analysis: System is capable of accepting additional features and
workloads.
 Expected single-SP behavior: A single SP should be able to service
the entire workload while maintaining existing IOPS and response
time.
 Data Reduction: Data Reduction can be enabled on the storage
objects in this system.
 Snapshots and/or Replication: Snapshots and Replication can be
enabled on the storage objects in this system.

Normal Utilization

 Reported CPU Utilization: Between 50% and 70%


 Analysis: System is operating normally, and may be capable of
accepting additional features and workloads.
 Expected single-SP behavior: A single SP should be able to service
the entire workload while maintaining existing IOPS and response
time.
 Data Reduction: Choose the best candidate storage objects and
enable Data Reduction on only a few at a time.
 Snapshots and/or Replication: Enable Snapshots or Replication on
only a few storage objects at a time.

High Utilization

 Reported CPU Utilization: Between 70% and 90%


 Analysis: System is nearing saturation; carefully consider whether
additional features and workload should be added to this system.
 Expected single-SP behavior: A single SP should be able to maintain
the existing IOPS load; however, increases in response time may be
experienced.


 Data Reduction: Data Reduction should not be enabled on any


additional storage objects in this system.
 Snapshots and/or Replication: Choose the best candidate storage
objects and enable Snapshots or Replication on only a few at a time.

Extremely High Utilization

 Reported CPU Utilization: Above 90%


 Analysis: System is saturated; additional workload should not be
applied to the system; consider moving some work to other systems.
 Expected single-SP behavior: A single SP is not able to maintain the
existing IOPS load.
 Data Reduction: Do not enable Data Reduction on any additional
storage objects in this system.
 Snapshots and/or Replication: Do not enable Snapshots or
Replication on any additional storage objects in this system.
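The four operating ranges above can be summarized as a simple classifier. The thresholds follow the ranges in the guide; the function itself (name and boundary handling) is our own illustration:

```python
def cpu_band(utilization_pct):
    """Classify sustained SP CPU utilization into the four ranges above."""
    if utilization_pct < 50:
        return "low"             # room for additional features and workloads
    if utilization_pct <= 70:
        return "normal"          # enable new features a few objects at a time
    if utilization_pct <= 90:
        return "high"            # nearing saturation; add workload carefully
    return "extremely high"      # saturated; move work to other systems

for pct in (35, 60, 80, 95):
    print(pct, "->", cpu_band(pct))
```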

Important: When planning resources for a system operating


at a CPU utilization in the high to extremely high range, be
sure to follow Dell Technologies recommended guidelines.


Dell UnityVSA

Dell UnityVSA¹ is deployed in the VMware infrastructure as a Single-SP
(one node) or Dual-SP (two nodes) system.

Because it does not have dedicated hardware, there are
recommendations specific to its installation and configuration.
 Use Dell UnityVSA HA (Dual-SP) for better system availability.
 The product offering provides redundancy by clustering two virtual
SPs.
 Use the OVA package file from the Dell Unity OE 5.1 or later release
when installing UnityVSA in a vSphere 7.0 environment.
 Use the “Thick Provision Eager Zeroed” disk format when provisioning
storage for the operating system.
 The option guarantees the disk capacity and provides the best
performance.
 Use storage for the operating system that is physically separate from
the storage that is used for storage pools.

Tip: Review the requirements for the UnityVSA


deployments. For updated details on the system
requirements, see the Dell UnityVSA Installation Guide
white paper available at the Dell Unity Info Hub: Product
documents and Information KBA.

¹ Dell Unity virtual storage appliance (UnityVSA) is a software-defined
storage solution that provides most of the same features as Dell Unity
XT systems.


PowerStore Deployment

The PowerStore platform of storage systems consists of ten different


models:
 Models 1000, 1200, 3000, 3200, 5000, 5200, 7000, 9000, and 9200
are available as PowerStore T and X.
 Model 500 (entry-level model) is only offered as PowerStore T.

All models use a common base enclosure and I/O modules.


 The models differ by CPU core count and speed, memory size, and
number of NVMe NVRAM drives.
 The hardware differences provide each model with unique
performance characteristics.

 The IOPS capability of the PowerStore models scales linearly from


PowerStore 500 up to 9200 models.
PowerStore can be scaled up (up to three expansion enclosures) and
scaled out (up to four appliances in a cluster).

Except for PowerStore 500, PowerStore systems use NVMe NVRAM


drives to provide persistent storage for cached write data.
 PowerStore 1000 up to 3200 model arrays have two NVRAM drives
per system.
 PowerStore 5000 up to 9200 model arrays have four NVRAM drives
per system.
 The extra drives mean that these systems can provide higher MBPS
for large-block write workloads.

Performance scales based on the specific hardware complement of the


model and is also impacted by the configuration type.


Deep Dive: For more details on the PowerStore models


capabilities and deployment modes, review the Hardware
Information Guide (for PowerStore 1000, 1200, 3000, 3200,
5000, 5200, 7000, 9000, and 9200) or the Hardware
Information Guide (for PowerStore 500).
These documents are available from the PowerStore: Info
Hub - Product Documentation & Videos KBA.

PowerStore Relative Performance

PowerStore can be installed in one of three different deployment modes.


Each mode has different capabilities.

Deployment Mode                 External Block   External File   AppsON
                                Access           Access          Functionality

PowerStore T Unified            ✓                ✓               X

PowerStore T Block Optimized    ✓                X               X

PowerStore X                    ✓                X               ✓

PowerStore has different performance characteristics depending on the


deployment mode.

PowerStore T Unified

PowerStore T Unified deployments can provide access to block and file


storage resources simultaneously. This deployment mode is the default.

In this type of deployment, CPU and memory resources are used for both
block and file IOPS.


PowerStore T Block Optimized

If file access is not required, the PowerStore system can be installed in


block optimized mode, which disables the file capabilities.

A PowerStore T block optimized system can deliver more block IOPS than
the same model deployed as a unified system.
 The deployment can increase the amount of block workload that the
system can provide.
 The mode devotes the CPU and memory that are not used for file
capabilities to block workloads.

PowerStore X

PowerStore X deployments run the PowerStoreOS as a virtual machine


on an ESXi hypervisor.
 This configuration allows the storage system to service external host
I/O.
 The system reserves a portion of the CPU and memory for hosting
user VMs.

 Guest VMs run directly on the PowerStore hardware.


 Fewer resources are available for serving external storage.
 Less block IOPS capability is available because some of the
compute resources are reserved for user VMs.
The relative performance for storage from a PowerStore X model is
expected to be less than the performance for the same PowerStore T
model.

Deep Dive: For more details on the considerations about


the storage system installation and deployment, review the
Dell PowerStore: Introduction to the Platform white paper,
available from the PowerStore: Info Hub - Product
Documentation & Videos KBA.


PowerStore Cluster

A PowerStore cluster delivers aggregate performance from all appliances


in the cluster.

A PowerStore cluster combines up to four PowerStore appliances into a


single manageable storage system.
 PowerStore T and X appliances cannot be mixed within a single
cluster.
 A cluster must contain only PowerStore T appliances, or only
PowerStore X appliances.
 All appliances in a cluster should be physically located in the same
data center and must be connected to the same LAN.

Recommendations

Dell Technologies recommends that all appliances in a cluster are of the


same model with similar physical capacities.
 The configuration provides consistent performance across the cluster.

Any host that is connected to a PowerStore cluster must have equivalent


connectivity to all appliances in the cluster.
 Volumes can be migrated between appliances in a cluster.
 A single volume is serviced by only one appliance at any given time.

When deploying multiple appliances for file access, plan to have multiple
clusters.
 Migration of storage resources between cluster appliances is
applicable to block storage resources only.
 File resources cannot migrate to a different appliance in a cluster.
 In a PowerStore T Unified mode deployment, file services are
restricted to the cluster’s primary appliance.

Knowledge Check: System Configuration Considerations

Environment

A company is interested in a PowerStore X solution to service external


hosts I/O and run production VMs on premises. The virtual machines host
write-intensive database applications for the accounting and finance
department. The IT manager is concerned with system capability to
handle the VM workloads and the provisioning of volumes using iSCSI or
NVMe/FC to Windows and Linux hosts.

Instructions

1. Go to the PowerStore: Info Hub - Product Documentation & Videos


KBA.
2. Locate the PowerStore Virtualization Infrastructure Guide
document.
3. Review the Performance Best Practices for PowerStore X model
clusters section.

a. Verify the PowerStore and vSphere limitations.

Activity

Based on the document information, answer the question.


Back-end Connectivity

Dell Unity XT Onboard SAS Back-End Cabling

SAS Ports

Dell Unity XT Storage Processors use the SAS ports to move data to and
from the back-end drives.

Dell Unity XT systems have two onboard 12 Gb SAS ports in each of the
SPs within the Disk Processor Enclosure (DPE).
 The onboard SAS ports provide sufficient IOPS and bandwidth
capabilities to support most workloads.
 Maximum of 250,000 IOPS per port.
 Maximum of 2,500 MB/s per port.
 Two buses are connected to mini-SAS HD ports for Disk Array
Enclosure (DAE) expansions.

DPE Storage Processor of a Unity XT 380/F model.
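The per-port maximums above lend themselves to a quick headroom check. The sketch below is an illustrative helper, not a Dell sizing tool; the four-port aggregate assumes the two onboard ports on each of the two SPs.

```python
# Illustrative headroom check against the onboard SAS port maximums
# quoted above (250,000 IOPS and 2,500 MB/s per port); not a Dell tool.
PORT_MAX_IOPS = 250_000  # maximum IOPS per onboard SAS port
PORT_MAX_MBPS = 2_500    # maximum MB/s per onboard SAS port

def fits_onboard_sas(planned_iops, planned_mbps, ports=4):
    """True if a workload fits the aggregate limits of the onboard
    12 Gb SAS ports (two ports per SP, two SPs = 4 ports by default)."""
    return (planned_iops <= ports * PORT_MAX_IOPS
            and planned_mbps <= ports * PORT_MAX_MBPS)

print(fits_onboard_sas(400_000, 6_000))   # within 1,000,000 IOPS / 10,000 MB/s
print(fits_onboard_sas(400_000, 12_000))  # bandwidth exceeds the four ports
```

Workloads that fail the bandwidth check are candidates for the optional 12 Gb SAS I/O module discussed later in this section.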


SAS Buses

SAS Buses on Dell Unity XT 480/F, 680/F, and 880/F models

DPE SAS port 0 is connected internally to the SAS expander.


 The expander connects to the front facing drives in the DPE.
 The connection is labeled back-end bus 0, enclosure 0 (BE0 EA0).

On Unity XT 480/F, 680/F, and 880/F systems, one SAS internal bus is
dedicated to the DPE.
 DPE drives are directly accessed by the underlying SAS controllers
over internal connections.
 The DPE operates on bus 99 which is separate from the SAS
expansion ports.
 The DPE read and write bandwidth is higher than a Unity XT 380/F
with no expansion.


When only the two onboard SAS ports on the DPE are available, Dell
Technologies recommends connecting DAEs in the following order:
1. DAE 1 connects to SAS Bus 1 (onboard SAS port 1).
2. DAE 2 connects to SAS Bus 0 (onboard SAS port 0).
3. DAE 3 connects to SAS Bus 1 (onboard SAS port 1).

Connect any additional DAE using the same rotation.

Disk Array Enclosures

Follow these simple guidelines when cabling SAS buses to back-end


DAEs:
 Maximum number of enclosures per bus is 10.
 Maximum number of drive slots per bus is 250.
 Up to specific system limitations for drive slots.
 Extra 12 Gb SAS backend I/O modules might be required to reach
maximum drive counts.
 For best performance, evenly distribute DAEs across the available
back-end buses.

DAEs can be added to the system while the operating system is active up
to the DAE and drive slot limit for the storage system.
 DAEs or drive slots over the system limit are not allowed to come
online.
 Consider the maximum number of drives supported for each storage
system model.

 SAS-Flash drives have the highest performance potential of the


three drive tiers.
 Dell Technologies recommends spreading Flash drives across all
available buses, if possible, to ensure the best IOPS and service
times.
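The per-bus guidelines above can be expressed as a simple validation. The function and example layouts below are illustrative only:

```python
# Illustrative check of one SAS bus against the cabling guidelines above:
# a maximum of 10 enclosures and 250 drive slots per bus.
MAX_ENCLOSURES_PER_BUS = 10
MAX_SLOTS_PER_BUS = 250

def validate_bus(enclosure_slot_counts):
    """enclosure_slot_counts: drive-slot count of each DAE on one bus."""
    if len(enclosure_slot_counts) > MAX_ENCLOSURES_PER_BUS:
        return False
    return sum(enclosure_slot_counts) <= MAX_SLOTS_PER_BUS

print(validate_bus([25] * 10))  # 10 enclosures, 250 slots: allowed
print(validate_bus([80] * 4))   # 320 slots exceed the per-bus slot limit
```

A layout that fails this check needs its DAEs redistributed across additional back-end buses.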


Tip: DAEs do not add performance by themselves. Adding a


DAE typically means more drives, which may result in greater
performance.

Dell Unity XT SAS I/O Module Back-End Cabling

SAS I/O Module

A 4-port 12 Gb SAS I/O module can be provisioned on Dell Unity XT


480/F or higher models, to provide extra back-end buses.

The mirrored I/O modules provide 8 x 4 lane or 4 x 8 lane 12 Gb/s SAS


ports per array for BE connection.

The additional I/O module is required in these cases:


 System requires high-bandwidth performance (greater than 10 GB/s).
 System is attaching more than 19 DAEs.
 System contains more than 500 drives.

Important: The SAS I/O Module consumes an expansion slot


that could otherwise be used for front-end connectivity.

SAS Buses

The Unity XT DPE onboard SAS ports are set on the default buses 0 and
1.


DPE Storage Processor of Unity XT 480/F and higher models.

The four ports on the SAS I/O Module are on designated buses 2, 3, 4,
and 5.

4-port 12 Gb SAS I/O Module showing the assigned buses.


Backend Cabling

The first expansion DAE is cabled to DPE SAS port 1 to begin back-end
bus 1 as enclosure 0 (BE1 EA0).
 The rest of the DAEs on each bus are daisy-chained from the previous
enclosure on that bus.
 With all buses in use, the seventh DAE is daisy-chained from the first
DAE and is designated BE1 EA1, and so on.


SAS backend bus designations.

When provisioning the 4-port SAS I/O Module, Dell Technologies


recommends connecting DAEs in the following order:
 DAE 1 connects to SAS Bus 1 (onboard port 1).
 DAE 2 connects to SAS Bus 2 (I/O Module port 0).
 DAE 3 connects to SAS Bus 3 (I/O Module port 1).
 DAE 4 connects to SAS Bus 4 (I/O Module port 2).
 DAE 5 connects to SAS Bus 5 (I/O Module port 3).
 DAE 6 connects to SAS Bus 0 (onboard port 0).
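The rotation above reduces to a simple lookup. `bus_for_dae` below is a hypothetical helper for illustration, not part of any Dell tooling:

```python
# Recommended DAE-to-bus rotation when the 4-port SAS I/O module is
# installed: buses 1, 2, 3, 4, 5, then 0, repeating for later DAEs.
ROTATION = [1, 2, 3, 4, 5, 0]

def bus_for_dae(dae_number):
    """Return the recommended SAS bus for the nth expansion DAE (1-based)."""
    return ROTATION[(dae_number - 1) % len(ROTATION)]

print([bus_for_dae(n) for n in range(1, 8)])  # [1, 2, 3, 4, 5, 0, 1]
```

Note that the seventh DAE returns to bus 1, consistent with the earlier description of BE1 EA1.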


PowerStore Back-End SAS Cabling

Embedded I/O Module

PowerStore 1000 through 9000 models come with an embedded I/O


module v1 in each of its nodes.
 The I/O Personality Module (IOPM) has two onboard 12 Gb SAS ports
for attaching SAS Expansion Enclosures.
 The base enclosure supports two redundant connections to the
expansion enclosure, Link Control Card (LCC) A and LCC B.
 The maximum number of expansion enclosures that are supported is
3.

Embedded Module v1 with the two SAS Ports

The embedded module v1 does not support the 2-port card (100 GbE
QSFP), so NVMe Expansion Enclosures are not supported.
 To support the addition of NVMe Expansion shelves, the module must
be converted to Embedded I/O Module v2.

Recommendations:
 Use 2-meter cables to connect the base enclosure to the expansion
enclosure easily.
 Use 1-meter cables to connect expansion enclosures to other
expansion enclosures.


Single Expansion Shelf

Back-End SAS cabling is the same, regardless of PowerStore model.

Cable both I/O modules on the base enclosure to the Link Control Card
(LCC) on the first expansion enclosure.
 Connect node A, SAS port B to LCC A, port A on the expansion
enclosure.
 Connect node B, SAS port B to LCC B, port A on the expansion
enclosure.
 Connect node A, SAS port A to LCC B, port B on the expansion
enclosure.
 Connect node B, SAS port A to LCC A, port B on the expansion
enclosure.

Two Expansion Shelves

1. Cable both I/O modules on the base enclosure to the Link Control Card
(LCC) on the first expansion enclosure.
a. Connect node A, SAS port B to LCC A, port A on the expansion
enclosure.
b. Connect node B, SAS port B to LCC B, port A on the expansion
enclosure.
2. Cable both I/O modules on the base enclosure to the LCCs on the last
expansion enclosure in the stack:
a. Connect node A, SAS port A to LCC B, port B on the last expansion
enclosure.
b. Connect node B, SAS port A to LCC A, port B on the last expansion
enclosure.
3. Cable expansion enclosure to expansion enclosure:

a. Connect LCC A, port B on the first expansion enclosure to LCC A,


port A on the next expansion enclosure.
b. Connect LCC B, port B on the first expansion enclosure to LCC B,
port A on the next expansion enclosure.
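The cabling steps above can be expressed as data and checked for redundancy. The port names below are illustrative shorthand, not official Dell identifiers:

```python
# The two-shelf cabling steps above as (source port, destination port)
# pairs, plus a check that every enclosure keeps one path per LCC side.
connections = [
    ("nodeA.SAS-B", "enc1.LCCA.A"),  # step 1a
    ("nodeB.SAS-B", "enc1.LCCB.A"),  # step 1b
    ("nodeA.SAS-A", "enc2.LCCB.B"),  # step 2a (last enclosure)
    ("nodeB.SAS-A", "enc2.LCCA.B"),  # step 2b (last enclosure)
    ("enc1.LCCA.B", "enc2.LCCA.A"),  # step 3a
    ("enc1.LCCB.B", "enc2.LCCB.A"),  # step 3b
]

def redundant(enclosure):
    """True if the enclosure has at least one connection on each LCC side."""
    sides = {dst.split(".")[1] for _, dst in connections
             if dst.startswith(enclosure + ".")}
    return {"LCCA", "LCCB"} <= sides

print(redundant("enc1"), redundant("enc2"))  # True True
```

Each enclosure remains reachable through both LCC A and LCC B, which is the point of the criss-crossed cabling pattern.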


Deep Dive: Follow the cabling guidelines in the Installation


and Service Guide on the PowerStore Hub.

PowerStore Back-End NVMe Cabling

PowerStore 500

PowerStore 500 with three NVMe Expansion Enclosures


In PowerStore 500 models, the 4-port Mezz card (MEZZ 0) in each node
provides two NVMe ports.
 Ports 2 and 3 are used for back-end connectivity to an ENS24
Expansion enclosure.
 Each base enclosure supports two redundant connections to an
expansion enclosure.
 The maximum number of expansion enclosures that are supported is
3.

Recommendations:
 To avoid performance issues, cables cannot be longer than 3 meters.

 Appliances that are purchased with NVMe expansion enclosures


include 2-meter length cables.
Cable options:
 QSFP28 to SFP28 – 25 GbE, 2-lane at 25 G per lane, passive, 2
meters.
 QSFP28 to SFP28 – 25 GbE, 2-lane at 25 G per lane, passive, 3
meters.


PowerStore 1200-9200

PowerStore 1200 with 3 NVMe Expansion Enclosures

PowerStore 1200, 3200, 5200, and 9200 models come with an embedded
I/O module v2 in each of its nodes.
 The I/O Personality Module (IOPM) has one 2-port card slot that is
primarily used for 100 GbE (QSFP) backend NVMe Expansion
Enclosure connectivity.
 The base enclosure supports two redundant connections to the
expansion enclosure.
 The maximum number of expansion enclosures that are supported is
3.

Recommendations:
 To avoid performance issues, cables cannot be longer than 3 meters.


 Appliances that are purchased with NVMe expansion enclosures


include 2-meter length cables.
Cable options:
 QSFP28 to QSFP28 – 100 GbE, 4-lane at 25 G per lane, passive, 2
meters.
 QSFP28 to QSFP28 – 100 GbE, 4-lane at 25 G per lane, passive, 3
meters.

Deep Dive: Follow the cabling guidelines in the Installation


and Service guide for the specific models on the
PowerStore Hub.


Knowledge Check: Back-End Connectivity

1. A solution architect is designing a Dell Unity XT 880 solution for a


company that might require a system scale up in the short term to
accommodate the production data growth. How many expansion
enclosures must the system exceed to require the use of a 12 Gb/s
I/O module?
a. 10
b. 19
c. 32


Drive Configuration

Dell Unity XT Disk Processor Enclosure (DPE)

The Dell Unity XT Disk Processor Enclosure (DPE) is a 2U chassis that


houses two Storage Processors (SPs), drives and I/O connections and
modules.

Unity XT Disk Processor Enclosure

A different version of the DPE is compatible with:


 Unity XT 380/380F models
 Unity XT 480/480F, 680/680F, 880/880F models.

The Unity XT DPE houses 25 drive slots.


 The drives are populated in the system from left to right.

The system recognizes the DPE as Bus 99 Enclosure 0.


 The twenty-five drives are internally recognized as Bus 99 Enclosure 0
Drive 0 through Bus 99 Enclosure 0 Drive 24.

There are LEDs on the front of the DPE for both the enclosure and drives
to indicate status and faults.

Dell Unity XT Drive Slot Layout

The Dell Unity XT 380/380F model uses a different physical chassis than
the Dell Unity XT 480/480F and higher models.


Unity XT 380/380F models

Unity XT 380/380F, houses twenty-five drive slots supporting 2.5-inch


SAS, and SAS Flash drives.

Front view of a Unity XT 380 DPE with twenty-five 2.5-inch drives

Unity XT 480/480F and higher models

The three high-end Unity XT system models are the 480/480F, 680/680F,
and 880/880F.

The DPE on these models houses 25x drive slots supporting 2.5-in. SAS,
and SAS Flash drives.

Front view of a Unity XT 680F DPE with twenty-five 2.5-inch drives

Important: The first four drive slots are reserved for system
drives which contain data that is used by the operating
environment (OE) software. Space is reserved for the system
on these drives, and the remaining space is available for
storage pools. These drives should not be moved within the
DPE or relocated to another enclosure.


Dell Unity XT System Drives

The first four drives of Dell Unity XT systems are system drives (DPE disk
0 through DPE disk 3). Part of the capacity of these drives stores
configuration and other critical system data.

The available capacity from each of these drives is about 107 GB less
than from other drives.

Front view of a Unity XT 380 DPE with twenty-five 2.5-inch drives

Using System Drives for Data

 System drives can be added to storage pools like any other drive, but
offer less usable capacity due to the system partitions.
 To reduce the capacity difference when adding the system drives to a
pool, use a smaller RAID width for pools which contain the system
drives.

 For example, choose RAID 5 (4+1) for a pool containing the system
drives, instead of RAID 5 (12+1).
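To see why the smaller RAID width helps, assume that a RAID group truncates every member to its smallest drive, so non-system drives grouped with the system drives also give up the reserved capacity. The numbers below are a worked sketch using the approximate 107 GB system reservation:

```python
# Worked sketch (assumed numbers): extra capacity lost on the
# non-system drives of a RAID group that contains all 4 system drives,
# assuming the group truncates each member to its smallest drive.
SYSTEM_DRIVES = 4
RESERVE_GB = 107  # approximate per-drive system reservation

def truncation_loss_gb(raid_width):
    """Capacity given up by the non-system members of the group."""
    return (raid_width - SYSTEM_DRIVES) * RESERVE_GB

print(truncation_loss_gb(5))   # RAID 5 (4+1): about 107 GB lost
print(truncation_loss_gb(13))  # RAID 5 (12+1): about 963 GB lost
```

The narrower RAID 5 (4+1) group confines the truncation to a single extra drive, which is the rationale for the recommendation above.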

Considerations

 When used in storage pools, the system drives are counted in


determining how many hot spare drives are reserved.
 Unbound system drives cannot be used as hot spare drives.

 Due to the capacity difference of the system drives, the system does
not automatically add them to storage pools.
 The system drives are added only after all other nonspare drives of the
same type have been used.


In Small and Large Configurations

 For smaller configurations (low drive counts), using the system


drives in storage pools is encouraged to maximize system capacity
and performance.
 For larger configurations (high drive counts and many storage
objects), do not use the system drives in storage pools.

 Heavy client workloads to these drives may interfere with the ability
of the system to capture configuration changes and result in slow
management operations.

Conclusion

 Consider not using the system drives in storage pools for large
configurations with high drive counts and many storage objects.
 System drives should not be used for systems which do not allow
remote access by support.

Important: There are no restrictions on using the system


drives in storage pools on Unity All-Flash arrays.


Dell Unity XT Rotating Drive Support

SAS 10 K

SAS Drives

Dell Unity XT supports SAS 10 K RPM drives.

The SAS 10 K RPM drives:


 Have a 2.5-inch (6.35 cm) form factor and use a 4 K block size.
 Are available in 600 GB, 1.2 TB, and 1.8 TB capacities.
 Can be used in the 25-drive DAE or the 80-drive DAE.


NL-SAS

NL-SAS Drives

Dell Unity XT supports NL-SAS drives.

The NL-SAS 7.2 K RPM drives:


 Have a 3.5” form factor and use a 4 K block size.
 Are available in 4 TB, 6 TB, or 12 TB capacities.
 Have a 12 Gb/s backend interface.

Important: For compatibility information, locate the latest


Dell Unity XT Storage Systems - Drive and OE Compatibility
Matrix document at Dell Unity Info Hub.


Dell Unity XT SAS Flash Drive Support

SAS Flash 2

Dell Unity XT supports SAS Flash 2 drives.

SAS FLASH 2 drives:


 Use the eMLC technology and are rated at 10 writes per
day [WPD].
 Have a 2.5” form factor. Unity XT supports only the 400
GB size.
 Have a 520-byte block size and show up in Unisphere as
SAS Flash 2 drives.
 Are supported in FAST Cache, Mixed, or All-Flash-pools.

400 GB SAS Flash 2 drive

SAS Flash 3

800 GB, 1.6 TB, and 3.2 TB SAS Flash 3 drives

Dell Unity XT supports SAS Flash 3 drives.


SAS Flash 3 drives:


 Use the eMLC technology and are rated to 3 WPD.
 Have a 2.5” form factor and Unity XT support capacities of 800 GB, 1.6
TB, and 3.2 TB.
 Have a 520-byte block size and are seen as SAS FLASH 3 in
Unisphere.

Some restrictions as to where these drives can be used include:


 The 800 GB, 1.6 TB, and 3.2 TB Flash 3 drives are supported in Mixed
and All-Flash pools.
 SAS Flash 3 drives cannot be used in FAST Cache.

SAS Flash 4

1.92 TB, 3.84 TB, 7.68 TB, and 15.36 TB SAS Flash 4 drives

Dell Unity XT supports SAS Flash 4 drives.

SAS Flash 4 drives:


 Use Triple Level Cell [TLC] technology and are rated at 1 WPD.
 Are in a 2.5” form factor with Unity XT supporting 1.92 TB, 3.84 TB,
7.68 TB, and 15.36 TB capacities.
 Have a 520-byte block size and are seen as SAS FLASH 4 in
Unisphere.


Some restrictions as to where these drives can be used include:


 SAS Flash 4 drives can be mixed with other SAS Flash drive types in a
pool.
 SAS Flash 4 drives can be used in the system drive slots.
 SAS Flash 4 drives can only be used in All-Flash pools.
 SAS Flash 4 drives cannot be used in Mixed pools or FAST Cache
implementations.

Tip: In the hybrid models, only the 7.68 TB SAS Flash 4


drive is supported.

Important: For compatibility information, locate the latest


Dell Unity XT Storage Systems - Drive and OE Compatibility
Matrix document at Dell Unity Info Hub.

Dell Unity XT SSD Wear Leveling

SSD (Flash) cells wear out as data is written to the drive.

SSDs have a limit for the amount of program/erase (PE) cycles that can
be done before the device becomes unreliable.

Based on the WPD, SSD drives are classified into:


 High Endurance (HE)
 Medium Endurance (ME)
 Low Endurance (LE)

For example, an HE drive would be limited to around 25 WPD, whereas ME


is limited to 10 WPD and LE to 1 to 3 WPD.
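A WPD rating translates directly into total write endurance over the drive's service life. The sketch below is illustrative arithmetic; the 5-year term is an assumption for the example:

```python
# Illustrative arithmetic: total data a drive is rated to absorb,
# given capacity, writes per day (WPD), and an assumed 5-year term.
def lifetime_writes_tb(capacity_gb, wpd, years=5):
    """Total rated write volume in TB over the service life."""
    return capacity_gb * wpd * 365 * years / 1000

print(lifetime_writes_tb(400, 10))  # e.g., a 400 GB 10 WPD drive: 7300.0 TB
print(lifetime_writes_tb(1920, 1))  # e.g., a 1.92 TB 1 WPD drive: 3504.0 TB
```

The comparison shows why lower-endurance, higher-capacity drives can still absorb substantial total writes.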


Dell Unity XT systems use a wear leveling technique to ensure that each
drive is operational until the end of the warranty period. Wear leveling
optimization ensures that the Flash drives do not prematurely wear out.
The SSD optimizations limit the number of writes to the drive. The drive is
given a certain quota of writes, calculated based on a wear-consumption
model.

Wear information of Flash drives is propagated up the software stack to


other software components like FAST VP.

Slice allocation requests take into account the wear level. In traditional
pools, wear information is propagated at the RAID Group [RG] level per
storage resource. The Dell Unity XT system determines which RAID
Group that the slice is allocated from in an attempt to balance wear across
other RAID Groups. This information is not visible to the user.

Unisphere users can view wear alerts, which are issued at 180, 90, 60,
and 30-day periods before the predicted end of life. A Proactive Copy
operation (PACO) to a spare drive is automatically initiated at 30 days.
Also, the drive health status is updated to show a faulted state, and the
drive is no longer usable by the system.

Mixing Drive Types in Dell Unity XT

Mixed Pools All Flash Pools

NL-SAS SAS Flash 3

SAS SAS Flash 4

SAS Flash 2

SAS Flash 3

The mixing of different Flash drive types in the same pool is supported.
 The same drive sparing rules still apply.


 For example, a SAS Flash 4 drive still requires a SAS Flash 4 drive
to be available for sparing.
 The drive types under the Mixed Pools column can be used in mixed
pools.
 The drive types under the All Flash Pools column can be used in all-
flash pools.

Dell Unity XT Supported DAEs

Dell Unity XT storage systems support two types of DAEs:


 2U, 25 Drive DAE with 2.5-inch drives
 3U, 15 Drive DAE with 3.5-inch drives

Each model supports a different number of drive slots.

System Model Drive Slots

380/380F 500


480/480F 750

680/680F 1000

880/880F 1500

Important: The maximum number of DAEs depends on the


drive type in the DPE and DAEs. For compatibility
information, locate the latest Dell Unity XT Storage
Systems - Drive and OE Compatibility Matrix document at
Dell Support.

Dell Unity XT Maximum Recommended Drive Types and IOPS

Depending on the workload attributes that are applied (I/O size, access
patterns, queue depth), individual drives provide varying levels of
performance.

To prevent drives from becoming a bottleneck, do not exceed the


maximum recommended IOPS per drive type per drive.

Drive Type Maximum Recommended IOPS per Drive

SAS Flash (all types) 20,000

SAS 15 K 350

SAS 10K 250

NL-SAS 150
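The ceilings above support a quick minimum-drive-count estimate. The sketch below is illustrative only and counts host IOPS, ignoring RAID write penalties:

```python
# Illustrative estimate of the minimum drive count so that no drive
# exceeds its recommended IOPS ceiling (host IOPS only; RAID write
# penalties are ignored for simplicity).
import math

MAX_IOPS = {"sas_flash": 20_000, "sas_15k": 350, "sas_10k": 250,
            "nl_sas": 150}

def min_drives(workload_iops, drive_type):
    return math.ceil(workload_iops / MAX_IOPS[drive_type])

print(min_drives(100_000, "sas_flash"))  # 5 drives
print(min_drives(100_000, "sas_10k"))    # 400 drives
```

The gap between Flash and rotating drives explains why small Flash pools can service workloads that would need hundreds of spindles.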


Important: If the drives in a storage pool are observed to


sustain near the maximum recommended IOPS for extended time
periods, add more drives to the pool. Adding drives spreads the
load across more resources.

Dell Unity XT Hot Spares

This information applies to traditional pools; dynamic pools do not require
dedicated hot spares.

 Traditional storage pools automatically reserve one unbound drive


out of every 31 drives of a given type as a hot spare.
 Hot spare drives have no special configuration, they remain unbound.
 Unity XT does not allow a traditional storage pool to be created or
expanded unless:
 The available number of unbound drives is sufficient to create the
requested pool and continue to satisfy the hot spare policy.
 Consider spare drive count requirements when designing traditional
storage pool layouts.
 Reduce the required hot spare count by decreasing the number of
individual drive types within a system.

                  Drives for Traditional      Total Required Spare
                  Storage Pools               Drives

System 1          9x 600 GB SAS 15 K          3 total:
25 TB usable      9x 1200 GB SAS 10 K         1x 600 GB SAS 15 K
                  9x 1800 GB SAS 10 K         1x 1200 GB SAS 10 K
                                              1x 1800 GB SAS 10 K

System 2          27x 1200 GB SAS 10 K        1 total:
25 TB usable                                  1x 1200 GB SAS 10 K
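The spare counts in this example follow from the one-spare-per-31-drives policy applied per drive type; a minimal sketch with a hypothetical helper:

```python
# Illustrative application of the hot spare policy: one unbound drive
# reserved out of every 31 drives of a given type.
import math

def required_spares(drive_counts):
    """drive_counts: mapping of drive type -> number of drives."""
    return sum(math.ceil(n / 31) for n in drive_counts.values())

system1 = {"600GB SAS 15K": 9, "1200GB SAS 10K": 9, "1800GB SAS 10K": 9}
system2 = {"1200GB SAS 10K": 27}
print(required_spares(system1), required_spares(system2))  # 3 1
```

Both systems provide about 25 TB usable, but System 2 reserves two fewer drives as spares simply because it uses a single drive type.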


Important: Unity systems throttle hot spare rebuild operations


to reduce the impact to host I/O. Rebuilds occur more quickly
during periods of low system utilization.

PowerStore Base Enclosure

A Base Enclosure is the fundamental chassis of the PowerStore system. It


houses two nodes, a midplane, and NVMe devices (drives).
 The enclosure has the height of 2U, 3.44 inches (8.74 cm), and a
depth of 31.2 in. (79.2 cm).
 Each node consists of CPUs, DRAM, I/O modules with ports, a power
supply, and fans.
 Nodes perform the tasks of saving and retrieving of block, file, and VM
data.

A base enclosure includes slots for 25 2.5-inch (6.35 cm) drives.

Nodes use the latest NVMe interface to connect to the internal drives.*

PowerStore base enclosure with bezel removed

1. Base Enclosure Power Status LED


2. Drives 0–20: NVMe SSD or SCM Drives
3. Drives 21–24: NVMe NVRAM SSDs (except PowerStore 500)


For systems that use two NVMe NVRAM drives, slots 21 and 22 must
remain unpopulated.
4. Drive Ready/Activity LED (Blue) on each drive.
5. Drive Fault LED (Amber) on each drive.
6. World Wide Name (WWN) Seed Tag: A blue pull-out tag that is located
between slots 7 and 8
7. Dell Serial Number Tag: A black pull-out tag that is located between
slots 16 and 17. The left side of the tag shows the Serial Number,
Service Tag, and Part Number. The right side shows the QRL code.
The QRL can be used to quickly find product info.

* In PowerStore, the storage devices are called drives for


historical reference, but they are NVMe solid-state devices.

PowerStore Base Enclosure Drive Slot Layouts

The PowerStore Base Enclosure has 25 slots that are labeled 0 to 24, and
support only NVMe devices.
 Data drive slots in the base enclosure can be populated with NVMe
SSD or NVMe SCM drives (for data) in any combination.
 All models except the PowerStore 500 have drive slots for NVMe
NVRAM devices (for caching and vaulting).
 A minimum of six data drives must be used.

PS 500 Model

On the PowerStore 500 model, slots 0 through 24 are used for data
storage. There are no NVRAM drives.

Write caching and vaulting are performed by the internal M.2


module.


PowerStore 500 disk view

PS 1000 - 3200 Models

On the PowerStore 1000, 1200, 3000, and 3200 models, the last two slots
(23 and 24) are populated with two NVMe NVRAM devices.

On these models, the two NVMe NVRAM devices are used for write
caching and vaulting.

NVRAM slots 21 and 22 must remain unpopulated on these models:

PowerStore 1000, 1200, 3000 and 3200 disk view

PS 5000 - 9200 Models

PowerStore 5000, 5200, 7000, 9000, and 9200 have higher performance
requirements.

These models are populated with four NVMe NVRAM devices in all the
NVRAM slots, 21 through 24.

On these models, the four NVMe NVRAM devices are used for write
caching and vaulting.


PowerStore 5000, 5200, 7000, 9000, and 9200 disk view

PowerStore Drive Offerings

PowerStore supports four types of drives: NVMe SSD (Flash), NVMe


Storage Class Memory (SCM) SSD, NVMe NVRAM, and SAS SSD
(Flash).

The Base Enclosure supports three device types:
 NVMe SSD (data drive slots)
 NVMe SCM (data drive slots)
 NVMe NVRAM (NVRAM slots)

 The SAS Expansion Enclosure supports SAS SSD (all slots).
 The NVMe Expansion Enclosure supports NVMe SSD (all slots).

 Supported device capacities:

 Four capacities of NVMe Flash for use in Base Enclosure and


NVMe expansion Enclosures
 Three capacities of SAS Flash for use in SAS Expansion
Enclosures
 One capacity of NVMe NVRAM for use in Base Enclosure
 One capacity of NVMe SCM Flash
o Systems containing NVMe SSDs and NVMe SCM drives
support expansion enclosures.
o Systems having only NVMe SCM drives do not support
expansion enclosures.




o In mixed SSD/SCM combinations, the SCM drives are used


for metadata tiering (done automatically by the system).
o In mixed SSD/SCM combinations, SCM drives do not add
to capacity for user data.
 All current devices are 100% FIPS certified and labeled.

Go to: See the Dell Encryption/Dell Data Protection Encryption
FIPS Compliance article and the complete list of supported
drive types.
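The expansion-enclosure rules above can be captured in a small decision helper. The following is an illustrative Python sketch; the function name and flags are hypothetical, not part of any Dell API.

```python
# Illustrative sketch of the PowerStore expansion-enclosure rules above.
# The function and its flags are hypothetical, not a Dell API.
def supports_expansion(has_nvme_ssd: bool, has_nvme_scm: bool) -> bool:
    """Return True if the base-enclosure drive mix supports expansion enclosures."""
    if has_nvme_scm and not has_nvme_ssd:
        # SCM-only base enclosures do not support expansion enclosures
        return False
    # SSD-only and mixed SSD/SCM systems do support them
    return has_nvme_ssd
```

For example, a mixed SSD/SCM system qualifies, while an SCM-only system does not.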

PowerStore Supported Expansion Enclosures

ENS24 Expansion Enclosure is an NVMe expansion enclosure.


 The ENS24 uses NVMe over Fabrics (NVMe/oF) standards, delivering
an end-to-end NVMe solution.
 Slots for 24 2.5-inch NVMe drives.




 Performance equal to the local PCIe NVMe drives with a 10-


microsecond delay.
 Requires that the base enclosure includes the v2 embedded module
and a 100 GbE 2-port card.
 Not supported on systems that include SAS expansion enclosures.
 Does not support NVMe SCM drives and is not supported with SCM-
only base enclosures.

ESS25 Expansion Enclosure is a SAS expansion enclosure.


 Slots for 25 2.5-inch, 12-Gb/s SAS drives.
 12-Gb/s SAS interface using two link control cards (LCCs), A and B, for
communication between the nodes and expansion enclosures.
 The SAS expansion enclosure is not supported on systems that
include NVMe expansion enclosures.

PowerStore Appliances and Drive Configurations

PowerStore can be configured with NVMe solid-state devices (SSDs) or


NVMe storage class memory (SCM) drives for user data.
 SSD-based systems can be expanded with additional drives to
increase the amount of available storage capacity.
 PowerStore 1000, 3000, 5000, 7000, and 9000 models can be
expanded with SAS SSD-based expansion shelves.
 All PowerStore models starting in PowerStoreOS 3.0 can be expanded
with NVMe SSD-based expansion shelves.
 The PowerStore models must meet the necessary hardware
prerequisites that are defined in the Dell PowerStore: Introduction
to the Platform white paper.
 To maximize the usable capacity from each drive, select drives that are
the same size within a system.




Per Appliance | 500 | 1200 | 3200 | 5200 | 9200

Processors | Two Intel Xeon CPUs, 24 cores, 2.2 GHz | Four Intel Xeon
CPUs, 40 cores, 2.4 GHz | Four Intel Xeon CPUs, 64 cores, 2.1 GHz |
Four Intel Xeon CPUs, 96 cores, 2.2 GHz | Four Intel Xeon CPUs, 112
cores, 2.2 GHz

Memory | 192 GB | 384 GB | 768 GB | 1152 GB | 2560 GB

Max Drives | 97 | 93 | 93 | 93 | 93

NVRAM Drives | N/A | 2 | 2 | 4 | 4

Base Enclosure (all models): 2U enclosure with dual active/active nodes
and twenty-five (25) 2.5” NVMe drive slots.

Expansion Enclosures (all models): 2U enclosures with twenty-four (24)
2.5” NVMe drive slots, up to three per appliance.

For more information about PowerStore, go to the Dell
PowerStore home page.




Knowledge Check: Drive Configuration

Knowledge Check: Drive Configuration - Unity XT

Environment

A company is interested in a low-cost Dell Unity XT solution that can


support its relatively small operation. The system must accommodate
scale-out capabilities to support the expansion of the business operations
and productivity in the long-term. The solutions architect recommends a
Dell Unity XT 480 system to address the customer requirements at the
initial stage, with the option to perform data-in-place (DIP) upgrades in the future.

Instructions

1. Go to the Dell Unity Info Hub: Product documents and information KB


article.
2. Open the Dell Unity XT: Introduction to the Platform document.
3. Review the Data-in-Place Conversions considerations.

Activity

Based on the document information, answer the question.

Knowledge Check: Drive Configuration - PowerStore

1. A solutions architect is sizing a PowerStore system. The complete


solution must be as economical as possible. What is the minimum
number of drives the system must be initially installed with?
a. 10
b. 6
c. 8




Network Connectivity Considerations

General Network Performance and High Availability

Fibre Channel and Ethernet networks play a large role in determining the
performance potential of Dell midrange storage solutions.

Some general considerations must be observed for network performance


and high availability according to the best practices.

High Availability

In general, front-end ports should be connected and configured


symmetrically across:
 The two storage processors (SPs) in a Unity XT DPE.
 The two nodes in a PowerStore Base Enclosure.

This configuration facilitates high availability and continued connectivity if


there is SP or node failure.
 For example, a NAS server that is associated with port 0 of the first I/O
Module on SPA.

 The NAS server should also be associated with port 0 of the first I/O
Module on SPB.
 For this configuration, access is available to the same networks.
Dell Technologies recommends using redundant switch hardware
between the midrange storage system and external clients.




Create smaller failure domains2.


 Smaller failure domains reduce the risk of disruption over a large
section of a network and ease the troubleshooting process.
 The size of a failure domain and its potential impact depends on the
device or service that is malfunctioning.

 For example, a router potentially experiencing problems would


generally create a more significant failure domain than a network
switch would.

Load Balancing

For best performance, Dell Technologies recommends using all front-end


ports that are installed in the system.

Using all ports distributes the workload across as many resources as


possible.
 For example, if configuring the 4-port Fibre Channel I/O Module on a
Unity XT system, zone different hosts to different ports.

 In this case, all eight ports across the two SPs are used. Do not
zone all hosts to the first port of each I/O Module.

Fibre Channel Fabrics

Consider these guidelines when zoning PowerStore appliances to external


hosts through FC switches.

2
A failure domain encompasses a section of a network that is negatively
affected when a critical device or network service experiences problems.




For Fibre Channel connectivity, configure dual redundant fabrics.


 Each PowerStore node and each external host must have connectivity
on each of the fabrics.
 The configuration minimizes the number of hops between host and
PowerStore.

For performance, load balancing, and redundancy:


 Each host should have at least two paths to each PowerStore node
(four paths per appliance).
 It is recommended that a host has no more than eight paths per
volume.
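As an illustration, the path-count guidance above can be expressed as a small validation helper. This is a hypothetical sketch, assuming the per-node path counts for a volume are already known; it is not a Dell tool.

```python
# Hypothetical check of the FC path guidelines above: at least two paths to
# each PowerStore node (four per appliance), and no more than eight paths
# per volume in total.
def check_paths(paths_to_node_a: int, paths_to_node_b: int) -> list[str]:
    """Return a list of guideline violations (empty list means compliant)."""
    issues = []
    if paths_to_node_a < 2 or paths_to_node_b < 2:
        issues.append("fewer than two paths to a node")
    if paths_to_node_a + paths_to_node_b > 8:
        issues.append("more than eight paths per volume")
    return issues
```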

Ethernet Networks

Consider these guidelines for Ethernet connectivity of PowerStore


appliances.

For Ethernet switches:


 Use multiple switches that are connected with VLTi and LACP or
equivalent technologies.
 Each PowerStore node should have connectivity to all linked switches.

PowerStore T models support the configuration of LACP bonds:


 The first two ports of the embedded module 4-port card on each
PowerStore node are bonded together during the system deployment.
 For highest performance and availability from these ports, it is
recommended also to configure link aggregation across the
corresponding switch ports.

When using Ethernet connectivity for block access to PowerStore (iSCSI
or NVMe over TCP):
 Each host must have at least two paths to each PowerStore node (four
paths per appliance).
 Dell Technologies recommends that a host does not have more than
eight active paths per volume.




Go to: The PowerStore Info Hub or the Unity Info Hub


pages and review the Host Configuration guides for the
different hosts, and midrange storage arrays.

Dell Unity XT Front-End Connectivity

A Dell Unity XT Disk Processor Enclosure (DPE) provides multiple options


for front-end connectivity through:
 Onboard ports directly on the DPE.
 Optional I/O Modules installed on expansion slots. Review the list of
supported I/O modules.

Unity XT 380/F Model

A Unity XT 380/380F system DPE has three options of ports for front-end
connectivity.

Rear view of a Unity XT 380/380F 12 Gb/s 25 drives DPE showing front-end connectivity
port options.

Components per Storage Processor enclosure:


1. 2x Onboard 10 GBaseT ports for front-end connectivity
 The ports support 10 GbE or 1 GbE connectivity for Block iSCSI
and File IP.
 These ports support custom MTU frames.




2. 2x Converged Network Adapter (CNA) ports for 8/16 Gb FC or 10 GbE


Optical iSCSI
 Review the supported SFPs for Fibre Channel or Ethernet
iSCSI/File connectivity.
3. 2x optional I/O modules for front-end connectivity

Unity XT 480/F and Higher Models

The DPE on Unity XT 480/480F and higher models has two options of
ports for front-end connectivity.

Rear view of a Unity XT 680 DPE showing front-end connectivity port options.

Components per Storage Processor enclosure:


1. 4-Port Mezz Card for optional 25GbE/10GbE or 10GBaseT front-end
connectivity
 The 25 GbE SFP Mezz card supports speeds of 25 Gb/s, 10 Gb/s,
and 1 Gb/s.
 The 10 GbE BaseT Mezz card supports speeds of 10 Gb/s and 1
Gb/s.
2. 2x I/O module slots for front-end connectivity

Dell Unity XT Front-End Block Connectivity Performance Guidelines

Fibre Channel

Fibre Channel ports can negotiate to lower speeds.




When configuring a Dell Unity XT system for Fibre Channel connectivity:


 CNA ports and 4-port 16 Gb FC I/O modules can be configured with 8
Gb or 16 Gb SFPs.
 CNA ports are only available in Unity XT 380 or 380F systems and
support speeds of 4 Gb/s, 8 Gb/s, and 16 Gb/s (Auto Negotiable).
 4-port 32 Gb FC I/O modules can be configured with 16 Gb SFPs or 32
Gb SFPs.
 32 Gb SFPs support speeds of 32 Gb/s, 16 Gb/s, and 8 Gb/s.
 32 Gb FC is recommended for the best performance.
 Different SFPs can be mixed within the same I/O module. Matching the
peer is recommended.

Dell Technologies recommends single-initiator zoning when creating zone


sets.
 For high availability purposes, a single host initiator should be zoned to
at least one port from SPA and one port from SPB.
 For load balancing on a single SP, the host initiator can be zoned to
two ports from SPA and two ports from SPB.
 When zoning more host initiators, zone them to different SP ports
when possible, to spread the load across all available SP ports.
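The single-initiator zoning pattern above can be sketched as a simple round-robin generator. This is an illustrative helper, not a switch or Dell utility; the port names are hypothetical.

```python
# Illustrative sketch of single-initiator zoning: each zone pairs one host
# initiator with one port from SPA and one from SPB, rotating through the
# available SP ports to spread load. Port names are hypothetical.
from itertools import cycle

def build_zones(initiators, spa_ports, spb_ports):
    """Return {initiator: [spa_port, spb_port]} with targets round-robined."""
    spa, spb = cycle(spa_ports), cycle(spb_ports)
    zones = {}
    for init in initiators:
        # one port from each SP per initiator, for high availability
        zones[init] = [next(spa), next(spb)]
    return zones
```

For example, two hosts zoned against two ports per SP end up on different SP ports, rather than all on the first port of each I/O module.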

Use multipathing software on hosts connected over Fibre Channel.


 Dell PowerPath, coordinates with the Dell Unity XT system to provide
path redundancy and load balancing.

iSCSI

Dell Unity XT systems support iSCSI connections on multiple 1 Gb/s and


10 Gb/s port options.
 10GBASE-T ports can autonegotiate to 1 Gb/s speeds.
 10 Gb/s is recommended for the best performance.

If possible, configure Jumbo frames (MTU 9000) on all ports in the end-to-
end network, to provide the best performance.
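The benefit of Jumbo frames can be seen with back-of-the-envelope arithmetic. The sketch below assumes 40 bytes of IP+TCP headers inside each frame and 18 bytes of Ethernet framing around it; real overhead varies with options and encapsulation.

```python
# Simplified per-frame payload efficiency for a given MTU, assuming
# 40 bytes of IP+TCP headers inside the frame and 18 bytes of Ethernet
# framing outside it (an approximation; real overhead varies).
def payload_efficiency(mtu: int) -> float:
    ip_tcp_overhead = 40
    ethernet_framing = 18
    return (mtu - ip_tcp_overhead) / (mtu + ethernet_framing)
```

Under these assumptions, MTU 9000 carries roughly 99% payload per frame versus roughly 96% at MTU 1500, and also reduces per-packet processing.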




To achieve optimal iSCSI performance, use separate networks and


VLANs to separate iSCSI traffic from normal network traffic.
 Configure standard 802.3x Flow Control (Pause or Link Pause) on all
iSCSI Initiator and Target ports that are connected to the dedicated
iSCSI VLAN.

Dell Unity supports 10 GbE and 1GBase-T ports that provide iSCSI
offload.
 The CNA ports (when configured as 10 GbE or 1GBase-T) and the 2-
port 10 GbE I/O Module ports provide iSCSI offload.
 Using these modules with iSCSI can reduce the protocol load on SP
CPUs by 10-20%, so that those cycles can be used for other services.

Use multipathing software on hosts connected over iSCSI.


 Dell PowerPath coordinates with the Dell Unity XT system to provide
path redundancy and load balancing.

Port Performance

The table provides maximum expected IOPS and bandwidth values from
different Unity XT ports used for Block front-end connectivity.
 The capability of a port does not guarantee that the system can reach
that level, nor does it guarantee that performance scales with
additional ports.
 System capabilities are highly dependent on other configuration
parameters.

Port | Maximum IOPS per Port | Maximum MB/s per Port

16 Gb FC CNA or 4-port I/O Module | 45,000 | 1,500

8 Gb FC CNA | 45,000 | 750

10 GbE iSCSI CNA or 2-port I/O Module | 25,000 | 900

10 GbE iSCSI 4-port I/O Module | 30,000 | 1,100

10 GBase-T iSCSI CNA, onboard, or 4-port I/O Module | 30,000 | 1,100

1 GBase-T iSCSI CNA, onboard, or 4-port I/O Module | 3,000 | 110
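Since per-port maxima are upper bounds rather than guarantees, one reasonable use of the table is as a quick ceiling estimate. The sketch below is illustrative only; the port-type keys are informal labels, and real systems rarely scale linearly across ports.

```python
# Illustrative sketch: sum per-port maxima from the table above to get a
# rough UPPER BOUND on front-end IOPS. This is not a sizing tool; actual
# system capability depends on many other configuration parameters.
PORT_MAX_IOPS = {                 # informal labels for table rows above
    "16Gb FC": 45_000,
    "10GbE iSCSI 2-port": 25_000,
    "10GbE iSCSI 4-port": 30_000,
}

def iops_upper_bound(ports):
    """ports: list of port-type labels drawn from PORT_MAX_IOPS."""
    return sum(PORT_MAX_IOPS[p] for p in ports)
```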

Dell Unity XT Front-End File Connectivity Performance Guidelines

File Connectivity

Dell Unity supports NAS connections (NFS, FTP, and SMB) on multiple 1
Gb/s and 10 Gb/s port options.
 10G BASE-T ports can auto-negotiate to 1 Gb/s speeds.
 10 Gb/s is recommended for the best performance.

If possible, configure Jumbo frames (MTU 9000) on all ports in the end-to-
end network, to provide the best performance.

Dell recommends configuring standard 802.3x Flow Control (Pause or


Link Pause) on all storage ports, switch ports, and client ports that are
used for NAS connectivity.

LACP

Dell recommends configuring LACP across multiple NAS ports on a single


SP.
 This configuration provides path redundancy between clients and NAS
server.
 LACP creates a link aggregation with multiple active links.
 LACP can also improve performance with multiple 1GBase-T
connections, by aggregating bandwidth.




 LACP can be configured across ports on-board the SP, or across ports
on the same I/O Module.
 LACP can be configured across any Ethernet ports that have the same
speed, duplex, and MTU.
 LACP cannot be enabled on ports that are also used for iSCSI
connections.
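The eligibility rules above lend themselves to a simple pre-check. The following is a hypothetical sketch, assuming each candidate port is described by its speed, duplex, MTU, and whether it carries iSCSI; it is not part of Unisphere or UEMCLI.

```python
# Hypothetical validation of the LACP eligibility rules above: candidate
# ports must share speed, duplex, and MTU, and must not carry iSCSI.
def lacp_eligible(ports):
    """ports: list of dicts with 'speed', 'duplex', 'mtu', 'iscsi' keys."""
    if any(p["iscsi"] for p in ports):
        return False  # LACP cannot be enabled on iSCSI ports
    first = ports[0]
    return all(p["speed"] == first["speed"]
               and p["duplex"] == first["duplex"]
               and p["mtu"] == first["mtu"] for p in ports)
```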

FSN

Combine FSN and LACP with redundant switches to provide the highest
network availability.

FSN provides redundancy by configuring a primary link and a standby link.


 The standby link is inactive unless the entire primary link fails.

If FSN is configured with links of different performance capabilities:


 It is recommended to configure the highest performing link as the
primary.

 Example: Link aggregation of 10 Gb/s ports, and a stand-alone 1


Gb/s port.

Load Balancing

For load-balancing, Dell recommends creating at least two NAS servers


per Dell Unity XT system (one on each SP).
 NAS servers are assigned to a single SP. All file systems serviced by
the NAS server have I/O processed on the SP where the NAS server is
resident.
 Assign file systems to each NAS server such that front-end workload is
roughly the same for each SP.
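The balancing advice above can be illustrated with a simple greedy assignment: place each file system's expected workload on the NAS server of the less-loaded SP. This is an illustrative sketch with hypothetical inputs, not a Dell sizing tool.

```python
# Illustrative greedy sketch of the load-balancing advice above: assign
# each file system (by expected IOPS) to the currently less-loaded SP.
def balance_file_systems(workloads):
    """workloads: {fs_name: expected_iops}; returns {'SPA': [...], 'SPB': [...]}."""
    load = {"SPA": 0, "SPB": 0}
    assignment = {"SPA": [], "SPB": []}
    # place the largest workloads first for a better balance
    for fs, iops in sorted(workloads.items(), key=lambda kv: -kv[1]):
        sp = min(load, key=load.get)
        assignment[sp].append(fs)
        load[sp] += iops
    return assignment
```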




Port Performance

The table provides maximum expected IOPS and bandwidth values from
different Unity XT ports used for File front-end connectivity.
 The capability of a port does not guarantee that the system can reach
that level, nor does it guarantee that performance scales with
additional ports.
 System capabilities are highly dependent on other configuration
parameters.

Port | Maximum IOPS per Port | Maximum MB/s per Port

10 GbE NAS CNA or 2-port I/O Module | 60,000 | 1,100

10 GbE NAS 4-port I/O Module | 60,000 | 1,100

10 GBase-T NAS onboard or 4-port I/O Module | 60,000 | 1,100

1 GBase-T NAS CNA, onboard, or 4-port I/O Module | 6,000 | 110

Dell Unity XT NAS Server Multi-tenancy

Overview

The Dell Unity XT Operating Environment enables network isolation for


File-based tenants with IP Multitenancy for NAS Servers.




Three NAS servers with isolated namespaces available to tenants

IP Multitenancy ensures that tenant visibility and management are
restricted to the assigned resources only.
 The feature enables the assignment of isolated, file-based storage
partitions to the NAS Servers on a Storage Processor.
 Each tenant has its own network namespace: Network interfaces,
VLAN domain, routing table, IP firewall, DNS, and so on.
 Isolates network traffic at the kernel level on the SP.

IP Multitenancy enables service providers, with multiple customers on a


single system, to isolate storage resources for tenants.
 Service Providers can provide tenants with their own DNS server or
other administrative servers.
 Each tenant can have its own authentication and security validation
from the protocol layer.

The feature is managed from Unisphere, UEMCLI commands, and REST


API calls.

Theory of Operations

A tenant is created with a name, one or more VLAN IDs, and a Universally
Unique Identifier (UUID).




A tenant created with three isolated NAS servers and access defined by associated host
configuration profiles.

The UUID is automatically created by default by the system, or it can be


manually entered.

Once a tenant is created, NAS servers must be created for each tenant
VLAN.
 Host configurations that provide access to hosts, subnets, and
netgroups can then be created and associated with the tenants.
 These host configurations are used to control the access of NFS and
SMB clients to shared file systems.
 Access to SMB file systems is controlled through a file and directory
access permissions set using Windows Directory controls.

The associated VLANs separate the tenant traffic, providing tenant data
separation and increasing security. The tenant traffic is separated at the
Linux Kernel layer.

Each tenant can have one or multiple NAS servers; however, each NAS
server can be associated with only one tenant.

Up to 50 tenants can be configured on a Dell Unity XT system using the


Unisphere GUI.

VLAN Tagging

Each tenant is associated with one or more VLANs by default.


 A NAS server is responsible for interpreting the VLAN tags and
processing the packets appropriately.




 A VLAN capable network switch must be used.


 The VLAN switch ports for servers are configured to include VLAN tags
on packets that are sent to a NAS server.

Three NAS servers with the same IP address isolated by VLAN tagging

The association of the NAS servers with each tenant provides the desired
network isolation, giving each tenant its own IP namespace.

 NAS server IP addresses can then be configured without concern
about overlapping IP addresses.
 This is helpful for service providers that must accommodate the
different tenants' data path IP access requirements in the
customer storage network.
 Sometimes the IPv4 addresses must conform to a tenant-required IP
address schema for the access and management of storage objects.

 This requirement may mean that the same IP addresses are used
for different tenants.

Example: A tenant can have 10.10.10.10 as a NAS server IP address
while other tenants have the same IP address.




Requirements and Capabilities

IP multi-tenancy is implemented by adding a tenant to a Dell Unity XT


system. Then, one or more VLANs are associated with the tenant. A NAS
server should be created for each of the tenant VLANs as required.

Here are some considerations when configuring IP multi-tenancy on the


storage system:
 There is a one-to-many relationship between tenants and NAS servers.
 A tenant can be associated with multiple NAS servers, but a NAS
server can only be associated with one tenant.
 Up to 50 tenants can be configured on a Dell Unity XT system.
 It is recommended that a separate pool is created for each tenant and
then that pool be associated with all tenant NAS servers.
 NAS servers can be associated with tenants when created.
 The NAS server association to a tenant cannot be changed to
another tenant or removed from the tenant.
 During replication, data for a tenant is transferred over the service
provider network, not the tenant network.
 A spike in traffic for one tenant can negatively impact the response
time for other tenants.

PowerStore Front-End Connectivity

Each PowerStore node contains an I/O Personality Module (IOPM) that


can hold one 4-port card for front-end connectivity and internal
communication.

A PowerStore Base Enclosure also supports the installation of four


optional I/O modules for front-end connectivity. Review the supported I/O
module types.




PowerStore 500

PowerStore 500 MEZZ card

The 4-port card on MEZZ 0 is used for cluster interconnect, front-end


connectivity, and back-end connection to an NVMe expansion enclosure.

Only the 4-port 25 GbE SFP-based Mezz card is supported.


 The card supports 1 GbE SFP to RJ45, 10 GbE SFP, 25 GbE SFP28.

 25 GbE passive TwinAx.


 10 GbE active or passive TwinAx.
The Mezz card includes 4x TwinAx cables to be used to connect the
appliance to a Top of Rack switch.

A 2-port card on MEZZ 1 is used for front-end connectivity and replication.


The 2-port card is a fixed 10 GbE optical card.




PowerStore 1000-9000

Embedded Module v1 with 4-port Mezz card for front-end connectivity.

The v1 Embedded Module ships with PowerStore 1000 through 9000


models.

The 4-port card on MEZZ 0 is used for connection of the Base Enclosure
to an intercluster switch.

There are two types of 4-port Mezz cards.


 4-port 25 GbE SFP-based card, which supports:
 1 GbE SFP to RJ45
 10 GbE or 25 GbE SFP
 25 GbE passive TwinAx
 10 GbE active or passive TwinAx
 4-port 10GBaseT card. The 4-port 10GBaseT embedded module
serves Ethernet traffic at the supported speeds of 1 GbE and 10 GbE.
 The first two ports of the 4-port card on the Embedded Module should
be connected to a pair of the 10GbE/25GbE Ethernet switches. One
port to each switch.




PowerStore 1200-9200

Embedded Module v2 with 4-port card and optional 2-port 100 GbE QSFP card

 The Embedded Module v2 ships only with PowerStore 1200-9200


models.

 The 4-port card on MEZZ 0 is used for intercluster connectivity


and front-end connectivity.
 One 2-port card on MEZZ 1 is primarily used for 100 GbE (QSFP)
backend NVMe Expansion Enclosure connectivity.

PowerStore Front-End Connectivity Performance Guidelines - Fibre Channel

Fibre Channel Ports

PowerStore supports Fibre Channel connectivity through ports on optional


I/O modules.
 The Fibre Channel I/O module ports support speeds for 32 Gb/s, 16
Gb/s, 8 Gb/s, and 4 Gb/s.
 The speed depends on the SFP used and the switchport or HBA
that is connected.
 It is recommended to use the highest speed supported by the
environment.

 Higher speeds allow for greater MBPS and IOPS capabilities.


The Fibre Channel I/O module is 16-lane PCIe Gen3. Select the
appropriate I/O module slots on the PowerStore nodes for installation.




I/O Module Slots

On PowerStore 1000 through PowerStore 9200 models the I/O module


slot 0 is 16-lane, while the I/O module slot 1 is 8-lane.
 It is recommended to use I/O module slot 0 first unless the system
contains a 100 GbE I/O module.
 If FC I/O modules are installed in both I/O module slots:
 It is recommended to cable the ports in I/O module slot 0 first, due
to the PCIe difference.
 The PCIe lanes in I/O module slot 1 are a limiting factor for total
MBPS.

 The limitation occurs only when all four ports on the Fibre Channel
I/O module are operating at 32 Gb/s.
Both I/O module slots on PowerStore 500 are 8-lane PCIe, and therefore
there is no slot preference.
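The slot-1 limitation above comes down to simple bandwidth arithmetic. The sketch below is approximate, assuming roughly 0.985 GB/s of usable bandwidth per PCIe Gen3 lane and roughly 3.2 GB/s of usable throughput per 32 Gb/s FC port; it ignores encoding and protocol overhead differences.

```python
# Rough arithmetic behind the I/O module slot limitation described above.
# Assumptions (approximate): ~0.985 GB/s usable per PCIe Gen3 lane and
# ~3.2 GB/s usable per 32 Gb/s FC port.
FC32_GBYTES_PER_SEC = 3.2
GEN3_LANE_GBYTES = 0.985

def slot_is_limiting(num_ports: int, pcie_lanes: int) -> bool:
    """True if port line rate exceeds what the PCIe slot can move."""
    demand = num_ports * FC32_GBYTES_PER_SEC   # GB/s the ports could drive
    supply = pcie_lanes * GEN3_LANE_GBYTES     # GB/s the slot can carry
    return demand > supply
```

Under these assumptions, four 32 Gb/s ports demand about 12.8 GB/s; an 8-lane slot supplies about 7.9 GB/s, while a 16-lane slot keeps up.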

NVMe/FC

The NVMe over Fibre Channel (NVMe/FC) protocol provides connectivity


using the same Fibre Channel ports, but can decrease the transport
latency between PowerStore and the host.

Note that all parts of the network, including switches and HBAs, must
support NVMe over Fibre Channel.

PowerStore Front-End Connectivity Performance Guidelines - Ethernet

Ethernet Ports

PowerStore supports Ethernet connectivity through ports on installed


Mezz cards or optional I/O modules.
 PowerStore optical Ethernet ports support speeds of up to 25 Gb/s,
based on the SFP that is used.




 Copper Ethernet ports support speeds of up to 10 Gb/s.


 It is recommended to use the highest speed supported by the
environment.

 Higher speeds allow for greater MBPS and IOPS capabilities.


PowerStore 1000 through 9200 models support a 2-port 100 GbE I/O
module.
 The I/O module supports speeds of up to 100 Gb/s.
 The 2-port 100 GbE I/O module can only be installed in I/O module
slot 0.

Jumbo frames (MTU 9000) are recommended for increased network


efficiency.
 Jumbo frames must be supported on all parts of the network between
PowerStore and the host.

iSCSI

To increase system MBPS capabilities, map additional Ethernet ports for


iSCSI.

Enable Jumbo frames for iSCSI by setting the Cluster MTU to 9000, and
setting the storage network MTU to 9000.

The embedded module 4-port card and the optional network I/O modules
are 8-lane PCIe Gen3.

When more than two 25 GbE ports are used, the cards are
oversubscribed for MBPS. To maximize MBPS scaling in the system, it is
recommended to:
 Cable and map the first two ports of all cards in the system first.
 Then cable and map other ports as needed.




For PowerStore T Unified deployments configured for both iSCSI and file
access, the recommendations are:
 Use different physical ports for NAS and iSCSI.
 Log in host iSCSI initiators to iSCSI targets on the ports specifically
planned for iSCSI.

NVMe/TCP

The NVMe over TCP (NVMe/TCP) protocol provides connectivity using


the same physical Ethernet ports as iSCSI.
 NVMe/TCP can be enabled on the same Storage Network as iSCSI.
 Different storage networks can be created to isolate iSCSI and
NVMe/TCP traffic (recommended).

For PowerStore T Unified deployments configured for both NVMe/TCP


and file access, the recommendations are:
 Use different physical ports for NAS and NVMe/TCP.
 Log in the host NVMe/TCP NQN to NVMe/TCP targets on the ports
specifically planned for NVMe/TCP.

NAS

Dell recommends the use of bonded ports for NAS connectivity.

Clusters running PowerStoreOS 3.0 support user-defined link


aggregations.
 User-defined link aggregations are only supported for NAS server
interfaces.
 The configuration combines two to four different physical Ethernet
ports for file access only.

 The ports must be on the same node and operate at the same
speed.
 A mirror link aggregation is automatically created on the peer
node.




For highest performance and availability from any aggregated ports:


 It is recommended to configure link aggregation across the
corresponding switch ports.

Enable Jumbo Frames for NAS by setting the cluster MTU to 9000.

If the storage system is also providing block access through iSCSI or


NVMe/TCP, or asynchronous replication over Ethernet:
 Use different physical ports for NAS than the ports which are tagged
for replication or storage networks.



Knowledge Check: Network Connectivity Considerations


Environment

The infrastructure of an ISP data center was upgraded with new Ethernet
and Fibre Channel switches to support 25 Gb/s and 32 Gb/s speeds. A
Dell Unity XT 680F system is configured to provide isolated NAS services
for multiple tenants and consistency groups for the invoicing application
running on a cluster of servers. An analysis of the connected hosts and
NAS clients indicate that none requires replacement of HBAs or NICs.

Instructions

1. Go to the Dell Unity Info Hub: Product documents and information KB


article.
2. Open the Dell Unity XT: Introduction to the Platform document.
3. Review the I/O Module Conversions considerations.

Activity

Based on the document information, answer the question.




Storage Configuration Recommendations

General Recommendations for Dell Unity XT Storage Pools

Dell Unity XT supports two different types of storage pools: dynamic pools
and traditional pools.
 Pools contain groups of drives in one or more RAID configurations.
 For each RAID type, there are multiple drive count options.

 Options are selectable for a Traditional Pool or preset in a Dynamic


Pool.
The following recommendations are applicable to both types of pools.

In general, to reduce complexity and increase flexibility, Dell recommends
using fewer storage pools within a Dell Unity XT system.

However, it may be appropriate to configure multiple storage pools, to:


 Separate workloads with different I/O profiles.
 Separate pools where FAST Cache is and is not active.
 Dedicate resources to meet specific performance goals.
 Separate resources for multi-tenancy. A single instance of a software
application serves multiple customers.

Dell Unity XT Storage Pool Capacity

Storage pools must maintain free capacity to operate properly.




Hybrid dynamic pool details showing its free capacity.

Dell recommends that a storage pool always has at least 10% free
capacity to maintain proper operation.

Storage pool capacity is used for multiple purposes:


 To store all data that is written into storage objects such as LUNs, file
systems, datastores, and vVols in that pool.
 To store data which is needed for snapshots of storage objects in that
pool.
 To track changes to replicated storage objects in that pool.
 To perform efficient data relocation for FAST VP.

By default, Dell Unity XT systems raise an alert if a storage pool has less
than 30% free capacity.



Storage Configuration Recommendations

The system automatically invalidates any snapshots and replication sessions if the storage pool has less than 5% free capacity.

System drives can be included within a storage pool configuration; however, considerations for capacity utilization must be observed.
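The free-capacity thresholds above can be summarized in a short check. The sketch below is illustrative Python, not a Dell tool; the threshold values come directly from the guidance in this section.

```python
def pool_capacity_status(total_tb, free_tb):
    """Classify a Dell Unity XT storage pool by its free capacity.

    Thresholds follow the guidance above: keep at least 10% free,
    the system alerts below 30% free, and snapshots/replication
    sessions are invalidated below 5% free. Illustrative sketch only.
    """
    free_pct = free_tb / total_tb * 100
    if free_pct < 5:
        return "critical: snapshots and replication sessions are invalidated"
    if free_pct < 10:
        return "below the recommended 10% free capacity"
    if free_pct < 30:
        return "alert: less than 30% free capacity"
    return "ok"

print(pool_capacity_status(100, 40))  # ok
print(pool_capacity_status(100, 25))  # alert: less than 30% free capacity
print(pool_capacity_status(100, 3))   # critical: snapshots and replication sessions are invalidated
```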

Dell Unity XT RAID Configurations

Dell Unity XT applies RAID protection to the storage pool to protect user
data against drive failures.

Storage pools are built using one or more individual drive groups that are
based on the RAID type and stripe width for each selected tier.
 The RAID type determines the performance characteristics of each
drive group.
 For example, a RAID 5 drive group can still operate with the loss of one drive in a Traditional Pool or its equivalent in a Dynamic Pool.
 The stripe width determines the fault characteristics of each drive
group.

 For example, a RAID 5 (4+1) configuration has less risk of multiple drive faults than a RAID 5 (12+1), 13-drive configuration.

Choose the RAID type that best suits the performance, protection, and cost needs.

RAID Protection Level and Characteristics

RAID 1/0: RAID 1/0 provides the highest level of performance from a given set of drive resources. RAID 1/0 also has the lowest CPU requirements; however, only 50% of the total drive capacity is usable. Each drive in a mirrored pair holds identical data, and data is striped across the drive pairs.


RAID 5: RAID 5 provides the best usable capacity from a set of drive resources, but at lower overall performance and availability than RAID 1/0. RAID 5 uses parity.

RAID 6: A RAID 6 group has higher availability than a RAID 5 group with the same number of drives. However, the increased redundancy comes at the expense of capacity and lower performance.

Mixed RAID Configurations: Mixed RAID configurations apply to Traditional Pools only. If FAST VP is installed on the system, you can create a pool with multiple storage tiers. Each tier can have its own RAID type. Only one RAID type can be used within a tier, but the tier can have different stripe configurations. For example, you can mix RAID 5 (4+1) and RAID 5 (8+1) in a tier.
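The capacity trade-offs described above can be compared numerically. This sketch (illustrative Python that ignores spare space and metadata overhead) computes the usable fraction of raw drive capacity for each protection level:

```python
def usable_fraction(raid_type, width):
    """Approximate usable fraction of raw capacity for a RAID group.

    width is the total drive count in the group. Real pools also
    reserve spare space and metadata, so this is a sketch only.
    """
    if raid_type == "RAID 1/0":
        return 0.5                  # mirrored pairs: half the raw capacity
    if raid_type == "RAID 5":
        return (width - 1) / width  # one drive's worth of parity
    if raid_type == "RAID 6":
        return (width - 2) / width  # two drives' worth of parity
    raise ValueError(raid_type)

# Ten 1.92 TB drives under each scheme:
for raid_type, width in [("RAID 1/0", 10), ("RAID 5", 5), ("RAID 6", 6)]:
    usable_tb = 10 * 1.92 * usable_fraction(raid_type, width)
    print(raid_type, round(usable_tb, 2), "TB usable")
```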

Dell Unity XT Dynamic Pools

Overview

Dynamic pools are storage pools whose tiers are composed of Dynamic
Pool private RAID Groups.
 Dynamic Pools apply RAID to groups of drive extents from drives
within the pool and allow for greater flexibility in managing and
expanding the pool.
 The feature enables improved pool planning and provisioning, and delivers a better cost per GB.

Dynamic Pools are supported on Dell Unity XT physical hardware only.


 All-Flash arrays (AFA): Dell Unity XT 380F, 480F, 680F, and 880F
models.
 Hybrid Flash arrays (HFA): Dell Unity XT 380, 480, 680, and 880
models.


 Dynamic pools on the HFA systems can be Single Tier3 or Multi-tier4 (hybrid).
All storage pools that are created on a Dell Unity XT physical system
using Unisphere are dynamic pools by default.
 Pool management operations in Unisphere, Unisphere CLI, and REST
API are the same for both dynamic and traditional pools.

Important: The Dell UnityVSA supports only traditional storage pools.

3 Pool consisting of a single drive type such as SAS-Flash, SAS, or NL-SAS drives.
4 Pool consisting of a combination of SAS-Flash, SAS, and/or NL-SAS drives.


Provisioning

Dynamic pools creation process

The storage administrator must select the RAID type (RAID 1/0, RAID 5 or
RAID 6) for the tiers that will build the dynamic pool.

The system automatically populates the RAID width which is based on the
number of drives in the system.


The example shows the configuration process for an All-Flash pool with
RAID 5 (4+1) protection.
1. The process combines the drives of the same type into a drive
partnership group.
2. At the physical disk level, the system splits the whole disk region into
identical portions of the drive called drive extents.
a. Drive extents hold a position within a RAID extent or are held in reserve as spare space.
b. The drive extents are grouped into a drive extent pool.
3. The drive extent pool is used to create a series of RAID extents. RAID
extents are then grouped into one or more RAID Groups.
4. The process creates a single private LUN for each created RAID
Group by concatenating pieces of all the RAID extents and striping
them across the LUN.

a. The LUN is partitioned into 256 MB slices. The system distributes the slices across many drives in the pool.
b. The 256 MB slice is the granularity at which the slice manager operates and at which storage resources are allocated.
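The slice arithmetic in step 4 can be sketched in a couple of lines. This is illustrative only; it ignores any capacity the system reserves for metadata.

```python
# Number of 256 MB slices carved from a private LUN (sizes in GB).
# Illustrative arithmetic only, ignoring metadata reservations.
SLICE_MB = 256

def slice_count(private_lun_gb):
    return (private_lun_gb * 1024) // SLICE_MB

print(slice_count(500))  # 2000 slices in a 500 GB private LUN
```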

Performance

At the time of creation, dynamic pools use the largest RAID width possible
with the maximum number of drives that are specified for the stripe width.

When creating a dynamic pool, RAID type and spare space may be
selected.
 With Unisphere, the system automatically defines the RAID width to
use based on the number of drives selected.
 The selected drive count must be in compliance with the RAID
stripe width plus spare space reservation requirement for each of
the selected tiers.
 A storage administrator may set the RAID width only when creating
the pool with the UEMCLI or REST API interfaces.
 Up to two drives of spare space per 32 drives may be selected.


 If there is a drive failure, the data that was on the failed drive is
rebuilt into the spare capacity on the other drives in the pool.
 Also, unbound drives of the appropriate type can be used to replenish a pool's spare capacity, after the pool rebuild has occurred.
With dynamic pools, there is no performance or availability advantage to
using smaller RAID widths. To maximize usable capacity with parity RAID,
it is recommended to initially create the pool with enough drives to
guarantee the largest possible RAID width.
 For RAID 5, initially create the pool with at least 14 drives.
 For RAID 6, initially create the pool with at least 17 drives.
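Those minimum drive counts follow from the largest supported parity widths plus one drive of spare space. The width constants below (12+1 for RAID 5, 14+2 for RAID 6) are assumptions consistent with the drive counts quoted above; this is a sketch, not a Dell sizing tool.

```python
# Largest parity RAID widths assumed from the recommendations above:
# RAID 5 up to 12+1 (13 drives), RAID 6 up to 14+2 (16 drives),
# plus one drive of spare space.
LARGEST_WIDTH = {"RAID 5": 13, "RAID 6": 16}

def min_drives_for_largest_width(raid_type, spare_drives=1):
    return LARGEST_WIDTH[raid_type] + spare_drives

print(min_drives_for_largest_width("RAID 5"))  # 14
print(min_drives_for_largest_width("RAID 6"))  # 17
```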

Spare Space

Dynamic pools use spare space to rebuild failed drives within the pool.

Spare space consists of drive extents that are not associated with a RAID
Group, used to rebuild a failed drive in the drive extent pool.
 Each drive extent pool reserves a specific percentage of extents on
each disk as the spare space.
 The percentage of reserved capacity varies based on drive type and
the RAID type that is applied to this drive type.
 If a drive within a dynamic pool fails, spare space within the pool is
used.

Spare space is handled automatically when a pool is created or expanded and is automatically balanced across all the drives within the pool.
 Spare space is required for each drive type. The minimum drive count
includes spare space allocation.
 For every 32 drives of the same drive type within a dynamic pool,
enough spare space is allocated to rebuild the largest drive in the pool.

 Up to 2 drives worth of spare space can be reserved.


 Spare space is counted as part of pool overhead, as with RAID overhead, and therefore is not reported to the user.
 Spare space is also not part of the usable capacity within the pool for user data.
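The per-32-drives reservation described above can be sketched as a small calculation. Illustrative Python only; the one-or-two-spares policy value comes from the text.

```python
import math

def spare_drives_reserved(drive_count, per_32=1):
    """Whole-drive equivalents of spare space in a dynamic pool.

    Per the policy above, one (or up to two) drives' worth of spare
    space is reserved for every 32 drives of a drive type. Sketch only.
    """
    if per_32 not in (1, 2):
        raise ValueError("policy allows one or two spares per 32 drives")
    return math.ceil(drive_count / 32) * per_32

print(spare_drives_reserved(14))     # 1
print(spare_drives_reserved(33))     # 2 (crossed the 32-drive boundary)
print(spare_drives_reserved(33, 2))  # 4
```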

Drive Rebuild

When a pool drive fails, the spare space within the same Drive
Partnership Group as the failed drive is used to rebuild the failed drive.

Example of a rebuild process on a seven-drive (D1-D7) pool with a faulted drive (D4).

A spare extent must be from a drive that is not already in the RAID extent
that is being rebuilt.

Free drives in the system may be consumed by the dynamic pool to replenish spare space, which was depleted due to the drive failure.

Dell Unity XT Dynamic Pools Expansion

Considerations

A dynamic pool can expand up to the system limits by one or more drives
under most circumstances.


A new drive partnership group is created if the number of drives being added exceeds the maximum size of 64 drives.
 Expansion is not allowed if the number of drives being added is not
enough to fulfill the minimum drive requirements.
 The minimum number of drives required to start a new partnership
group is the RAID width plus the set spare space reservation (one or
two drives).

The system automatically creates the private RAID Groups depending on the number of drives added.
 Space becomes available in the pool after the new RAID Group is
ready.
 Expanding by the RAID width plus the hot spare reservation enables
space to be available quickly.

Deep Dive: For more information about expanding dynamic pools, refer to the Dell EMC Unity: Dynamic Pools white paper.

Rebalancing Drive Extents

Adding capacity within a Drive Partnership Group causes the drive extents
to rebalance across the new space. This process includes rebalancing
new, used, and spare space extents across all drives.

The process runs in parallel with other processes and in the background.
 Balancing extents across multiple drives distributes workloads and
wear across multiple resources.
 Optimizes resource use, maximizes throughput, and minimizes response time.
 Creates free space within the pool.


Adding Single Drive

When adding a single drive or fewer drives than the RAID width, the space becomes available in about the same time that a proactive copy (PACO) operation to the drive takes.
 The system identifies the extents that must be moved off drives to the
new drives as part of the rebalance process.
 As the extents are moved, their original space is freed up.

If adding a single drive and the spare space boundary is crossed, none of that drive's capacity is added to the pool's usable capacity.

If expanding with a drive count that is equal to the Stripe width or less, the
process is divided into two phases:
1. The dynamic pool is expanded by a single drive, and the free space
made available to the user.
 This process enables some of the additional capacity to be added
to the pool.
o Only if the single drive expansion does not increase the amount
of spare space required.
 If the pool is running out of space, the new free space helps delay
the pool from becoming full.
 The new free space is made available to the user if the expansion does not cause an increase in the spare space that the pool requires.
o When extra drives increase the spare space requirement, a portion of the space being added, equal to the size of one drive, is reserved.
o This space reservation can occur when the expansion crosses the spare space boundary for the drive type (one drive of spare space reserved per 32 drives).
2. The dynamic pool is expanded by the remaining drive count for the
original expansion request.

 Once this process is concluded, the expansion Job is complete.


Warning: Be aware that a single drive expansion takes time to complete as RAID extents are rebalanced and space is created.

Adding Multiple Drives

Expanding a dynamic pool by the same number (stripe width plus the hot spare reservation) and type of drives completes relatively quickly.
 The expansion process creates extra drive extents.
 From the drive extents, the system creates RAID extents and RAID
Groups and makes the space available to the pool as user space.
 The added drives exceed the maximum number for the RAID stripe
width.
 For example, consider a RAID 5 (4+1) dynamic pool configured
with six drives (one drive worth of hot spare capacity per 32 drives).
When adding six more drives, the pool drive count (12) exceeds the
maximum allowed for the RAID stripe width (nine).
 The time for the space to be available matches the time that it takes to
expand a traditional pool.

 The user and spare extents are all contained on the original disks.
There is no rebalancing.
 If the number of drives in the pool has not reached the 32-drive boundary, there is no requirement to increase the spare space.

Important: When extra drives increase the spare space requirement, a portion of the space being added, equal to the size of one drive, is reserved. This space reservation can occur when the expansion crosses the spare space boundary for the drive type (one drive of spare space reserved per 32 drives).


Mixing Drive Sizes within Dell Unity XT Dynamic Pools

Mixing Drives

Although not a recommended best practice:


 Drives of the same type but different capacities can be mixed within a
dynamic pool.
 Drives can be placed within the same drive partnership group.

These rules apply for storage pool creation and expansion, and the use of
spare space.

However, different drive types, including SAS-Flash drives with different writes per day, cannot be in the same RAID Group.

When a new drive type is added, the pool must be expanded with a
minimum number of drives.
 The number of drives must satisfy the RAID width plus the set hot
spare capacity.

Adding Drive with Different Capacity

If the number of larger capacity drives is not greater than the RAID width, the larger drives' entire capacity is not reflected in the "usable capacity."


Adding a different capacity drive to a dynamic pool

The example displays a RAID 5 (4+1) configuration with 1/32 drives of hot
spare reservation, using mixed drive sizes.
 A storage administrator selects an 800 GB drive to add to the pool
using the UI.
 In this configuration, only 400 GB of space is available on the 800 GB
drive.
 The remaining space is unavailable until the drive partnership group
contains at least the same number of 800 GB drives as the RAID
width+1.
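The truncation behavior in this example can be sketched as follows. The 400 GB base drive size and the function shape are assumptions matching the example above, not a Dell API.

```python
def usable_drive_capacity(drive_gb, same_size_count, raid_width_plus_1,
                          base_gb=400):
    """Capacity of a larger drive that a dynamic pool can actually use.

    Until the drive partnership group holds at least RAID width + 1
    drives of the larger size, each larger drive is truncated to the
    size of the smaller drives (base_gb). The 400 GB base size is an
    assumption matching the example above; sketch only.
    """
    if same_size_count >= raid_width_plus_1:
        return drive_gb
    return base_gb

# One 800 GB drive added to a RAID 5 (4+1) pool of 400 GB drives:
print(usable_drive_capacity(800, same_size_count=1, raid_width_plus_1=6))  # 400
# After expanding so that six 800 GB drives are in the group:
print(usable_drive_capacity(800, same_size_count=6, raid_width_plus_1=6))  # 800
```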

Adding More Drives to Pool

Depending on the number of drives of each capacity, dynamic pools may or may not use the entire capacity of the larger drives.


Adding drives of the same size of the largest drive on the dynamic pool

All the space within the drives is available only when the number of drives
within a drive partnership group meet the RAID width+1 requirement.

The example shows the expansion of the original RAID 5 (4+1) mixed
drive configuration by five drives.
 The operation reclaims the unused space within the 800 GB drive.

Reclaiming Available Space

After adding the correct number of drives to satisfy the RAID width of
(4+1) + 1, all the space becomes available.

Available space is reclaimed to the dynamic pool after the expansion

Observe that although these examples are possible scenarios, the best practice of building pools with the same drive sizes and types should be followed whenever possible.


Dell Unity XT Traditional Pools

Overview

The Dell Unity XT platform uses dynamic pools by default but supports the configuration of traditional pools using UEMCLI or REST API.
 In physical systems running Unity OE 5.2 and later releases, traditional
pools can still co-exist with dynamic pools.
 Dell UnityVSA supports the deployment of ONLY traditional storage
pools.

Traditional pools are storage pools whose tiers are composed of Traditional RAID Groups.
 Traditional RAID Groups are based on Traditional RAID with a single
associated RAID type and RAID width.
 The RAID Groups are built from drives of a certain type, and RAID
protection is applied to the discreet groups of drives within the storage
pool.
 Traditional pools can only be expanded by adding RAID Groups to the
pool.
 Traditional RAID Groups are limited to 16 drives.

Traditional pools can be Homogeneous5 (made up of only one type of drive) or Heterogeneous6 (made up of more than one type of drive).

5 In a homogeneous pool, only one disk type (SAS-Flash, SAS, or NL-SAS drives) is selected during pool creation.
6 Heterogeneous pools consist of multiple disk types. A hybrid system supports SAS-Flash, SAS, and NL-SAS drives in the same pool.


Traditional pools can be managed in Unisphere, Unisphere CLI, and REST API.

Provisioning

Traditional pools creation process

The storage administrator defines the settings for building the traditional
pool:
 Select the drive type from the available tiers. Each tier supports drives
of a certain type and a single RAID level.
 Select the RAID type (RAID 1/0, RAID 5 or RAID 6) and the stripe
width.
 Identify if the pool must use the FAST Cache feature.
 Optionally associate a Capability Profile for provisioning vVol
datastores.

The example shows the configuration process for a heterogeneous traditional pool with two tiers and two RAID levels.
1. A RAID Group is built from SAS Flash drives (RG1) with RAID 1/0
(2+2) protection.
 The pool also includes another RAID Group built from HDDs (RG2)
with RAID 6 (4+2) protection.
2. A RAID Group Private LUN is created for each RAID Group.


3. These Private LUNs are split into contiguous array slices of 256 MB. Slices hold user data and metadata. (FAST VP moves slices to the various tiers in the pool using this granularity level.)
4. After the Private LUNs are partitioned out in 256 MB slices, they are
consolidated into a single pool that is known as a slice pool.

Performance

Homogeneous pools are recommended for applications with limited skew, such that their access profiles can be random across a large address range.
 Multiple LUNs with similar profiles can share pool resources.
 These LUNs provide more predictable performance based on the disk
type employed.

Each tier in a heterogeneous pool can have a different RAID configuration set.
 Data in a particular LUN can reside on some or all of the different disk
types.
 The native storage tiering feature is able to relocate slices across
different disk types in a heterogeneous pool.
 This process ensures that the hottest data resides on the highest
performance drives.

For traditional pools, Dell generally recommends these RAID protection levels:
 Configure RAID 5 for drives in Extreme Performance and Performance
tiers.
 Configure RAID 6 for drives in the Capacity tier.

Dell recommends smaller RAID widths when configuring the same number
of drives in a traditional pool.
 Smaller widths provide the best performance and availability.
 For example, when configuring a Traditional Pool tier with RAID 6, use
4+2 or 6+2 as opposed to 10+2 or 14+2.


Be aware that RAID 1/0 has less usable capacity.


 Consider choosing a 1+1 configuration when using RAID 1/0.
 A 1+1 configuration provides better performance and flexibility with the
same availability and usable capacity as larger RAID widths.

Hot Spare

Traditional pools use a dedicated hot spare.


 The storage system reserves one spare drive per 31 drives.
 The system replaces a faulted drive in a pool with a drive of the same
type, and the same or greater size as the faulted disk.

Homogeneous and Heterogeneous pools with hard drives arranged in RAID Groups (RG)

Consider the spare drive count requirements when designing traditional storage pool layouts.
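The one-spare-per-31-drives rule above translates to a simple rounding calculation. Illustrative Python sketch only:

```python
import math

def traditional_hot_spares(drive_count):
    """Hot spare drives to reserve when laying out traditional pools:
    one spare per 31 drives of a given type, rounded up. Sketch only."""
    return math.ceil(drive_count / 31)

print(traditional_hot_spares(25))  # 1
print(traditional_hot_spares(62))  # 2
```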

Dell Unity XT All-Flash and Hybrid Pools


Considerations

Follow these guidelines and considerations for All-Flash pool and hybrid pool use.


All-Flash Pools

 All-Flash pools provide the highest level of performance in Dell EMC Unity
 Use an all-Flash pool when the application requires the highest
storage performance at the lowest response time
 Snapshots and replication operate most efficiently in all-Flash pools
 Data Reduction is only supported in all-Flash pools
 FAST Cache and FAST VP are not applicable to all-Flash pools
 Use only a single drive size and a single RAID width within an all-Flash
pool

 For example, for an all-Flash pool, use only 1.6 TB SAS Flash 3 drives, and configure them all with RAID 5 (8+1)

Hybrid Pools

Hybrid pools can contain HDDs (SAS and NL-SAS drives) and Flash and
can contain more than one type of drive technology in different tiers.

Use hybrid pools for applications that do not require consistently low
response times, or that have large amounts of mostly inactive data.
 Hybrid pools typically provide greater capacity at a lower cost than all-
Flash pools.
 The pools have lower overall performance and higher response times.

Consider provisioning a Flash tier to the hybrid pool to enable pool performance efficiencies.
 The Flash tier improves response times when using Snapshots and/or
Replication.
 Pool performance can be improved by increasing the amount of
capacity in the Flash tier.
 More of the active dataset resides on and is serviced by the Flash
drives.


Dell recommends using only a single drive speed, size, and RAID width
within each tier of a hybrid pool.

Dell Unity XT Block Storage Resources

Block storage resources that are supported by the Dell Unity XT platform
include LUNs, consistency groups, VMFS datastores and vVol (Block).

A LUN or logical unit represents a quantity of block storage that is allocated for a host.
 A LUN can be allocated to more than one host if the access is
coordinated through a set of clustered hosts.

A Consistency Group is an addressable instance of LUN storage that can contain one or more LUNs (up to 50).
 Consistency Groups are associated with one or more FC or iSCSI
hosts.
 Snapshots that are taken of a Consistency Group apply to all LUNs
associated with the group.

Dell Unity XT supports the provisioning of block storage capacity for ESXi
hosts.
 VMFS datastores are built from Dell Unity XT LUN (Block).
 Dell Unity XT vVol (Block) datastores are storage containers for VMware virtual volumes (vVols).


 SCSI protocol endpoints7 use any iSCSI interface or Fibre Channel connection for I/O.

Dell Unity XT Storage Objects Recommendations

Dell Unity XT storage systems support thin or thick storage objects. By default, Dell Unity XT creates thin storage objects.

Dell recommends using thin storage objects, as they provide the best
capacity utilization, and are required for most features.
 Thin storage objects are virtually provisioned and space efficient.

Thin storage objects are recommended when any of the following features
are used:
 Data Reduction
 Snapshots
 Asynchronous replication

Thick storage objects reserve capacity from the storage pool and dedicate
it to that particular storage object.
 Thick storage objects are not space efficient, and do not support the
use of space-efficient features.

7 Protocol Endpoints, or PEs, establish a data path between the ESXi hosts and the respective vVol datastores.


If enabling a space-efficient feature on a thick storage object, Dell recommends first migrating the thick storage object to a thin storage object.
 Enable the feature during the migration (for Data Reduction) or after
migration has completed (for snapshots and asynchronous replication).

For better capacity utilization, Dell recommends configuring storage objects that are at least 100 GB in size, and preferably at least 1 TB in size.
 Besides needing the capacity for storing data, storage objects also
require capacity for metadata overhead.
 The overhead percentage is greater on smaller storage objects.
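Why the overhead percentage falls as objects grow can be illustrated with a toy model. The fixed 3 GB and 2% figures below are hypothetical, chosen only to show the shape of the effect; they are not Dell-published numbers.

```python
def overhead_pct(object_gb, fixed_meta_gb=3.0, variable_rate=0.02):
    """Relative metadata overhead of a storage object.

    The fixed 3 GB and 2% figures are hypothetical, chosen only to
    show why overhead weighs more on smaller objects; they are not
    Dell-published numbers.
    """
    metadata_gb = fixed_meta_gb + object_gb * variable_rate
    return round(metadata_gb / object_gb * 100, 1)

print(overhead_pct(10))    # 32.0 -> small object, large relative overhead
print(overhead_pct(100))   # 5.0
print(overhead_pct(1024))  # 2.3
```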

Go to: Review the thin provisioning best practices for SMB file sharing and NFS file sharing.

PowerStore Block and File Storage Resources

PowerStore supports Volumes and Volume Groups (Block storage), NAS Servers and File Systems (File storage), and VMware storage containers.

Volumes

A volume is a single unit that represents a specific quantity of block storage that can be stand-alone or may be associated with a Volume Group. Volumes are managed partitions of block storage resources that host systems can mount and use over IP, Fibre Channel, or NVMe connections.

 One volume can be created at a time, or up to 100 volumes simultaneously.
 Resource Balancer automatically determines on which appliance
within the cluster the volume is provisioned.


 When the capacity and demand change over time, moving the
storage resource to another appliance within the cluster is
supported.
 The operation is performed with a Manual Migration, Assisted
Migration, or Appliance Space Evacuation in a PowerStore cluster.
 The volumes are thin provisioned to optimize the use of available
storage.
 An application tag from a predefined list of categories can be
associated with each volume.

 During internal migration between appliances, the application tag is migrated.
 Application tag attributes are replicated to the destination volume.
Host configurations provide access to general-purpose block-level storage
through network-based iSCSI, Fibre Channel, and NVMe connections.

 The volume is accessible through either the iSCSI targets that the host
is connected to or the FC ports the host is connected to.
 It depends on which storage network is configured between the
host and the appliance.
 Each volume is associated with a name and logical unit number
identifier (LUN).

Volume Groups

A volume group is a logical container for a group of volumes. A volume can only be a member of one volume group at a time. A volume group provides a single point of management for multiple storage resources that work together as a unit.

Volume Groups consolidate new or existing volumes for combined management and data protection.
 Application tags cannot be assigned to volumes groups.
 Volume groups can contain volumes with different application
types.


 When a group contains volumes with different application tag attributes, the application tag type shows as Mixed.
 A Protection policy applied to a volume group impacts all members of the volume group.
 The individual volumes within the group cannot have separate policies.
 PowerStore is capable of applying write-order consistency to a volume
group to protect all of its members.
 The system treats the volume group as a single entity when it is protected.
 The write order is preserved among members when snapshots of
the volume group are taken.
 A single replication session is created for the entire volume group,
no matter how many volumes it contains.
 Hosts and host groups8 are mapped or unmapped at the member volume level.
 Hosts or host groups can be selected based on the storage protocol (SCSI or NVMe).

NAS Servers

A NAS Server is a virtual file server that provides the file resources on the
IP network, and to which NAS clients connect. The NAS server is
configured with IP interfaces and other settings that are used to export
shared directories on various file systems.

8 Pooling individual hosts together into a host group enables you to perform volume-related operations across all the hosts in the group.


NAS Servers are configured to enable clients to access data over Server
Message Block (SMB), and Network File System (NFS) protocols.
 Windows clients have access to file-based storage shared using the
SMB protocol.
 Linux and UNIX clients can access file systems using the NFS
protocol.

NAS servers also enable clients to access data over File Transfer Protocol
(FTP) and Secure FTP (SFTP).

NAS servers support three-way NDMP backup of user data, Kerberos authentication, CEPA, and virus protection.

File Systems

A File System is a manageable storage object for file-based storage with a specific size and file access protocols. File systems are associated with one or more shares for client access.

SMB Shares and NFS Exports are exportable access points to file system
storage that NAS clients use.

PowerStore supports file system snapshots and thin clones of file systems.

VMware Storage Containers

PowerStore supports the provisioning of storage containers.


 These containers store VMware virtual volumes (vVols).
 Storage containers have a 1:1 mapping with vVol datastores.

PowerStore communicates with the vCenter server through the VASA protocol.
 VASA provides visibility of the PowerStore storage system
characteristics in vCenter, including storage container properties and
data services.
 Communication is established through Protocol Endpoints (PE),
which establish a data path between the ESXi hosts and storage
containers.


Deep Dive: For more information, see the Configuring Volumes guide on the PowerStore Info Hub.

PowerStore Block Storage Resources Recommendations

These are the recommendations for PowerStore block storage resource configuration and use.

It is not recommended for the same host to access the same block
storage resource using more than one protocol.
 PowerStore provides access to block storage resources through Fibre
Channel, NVMe/FC, iSCSI or NVMe/TCP protocols.
 Hosts must access the block resource using only one of these
protocols.

Appliance Balance

There are two paths between the host and the two nodes within the
PowerStore appliance for block storage resources access.
 Resources are accessed using ALUA/ANA active/optimized or
active/non-optimized paths.
 I/O is normally sent on an active/optimized path.

PowerStore automatically chooses one of the nodes for the active/optimized path, when the volume is mapped to the host, to maintain a balanced workload across the nodes.
 This PowerStore characteristic is called node affinity and can be
viewed in the UI, and modified through CLI or REST API.
 The changes take effect immediately and are non-disruptive if the host
is correctly configured for multipathing.


Dynamic Node Affinity

The node affinity of block storage resources is dynamically rebalanced between nodes to maintain relatively consistent utilization, latency, and performance between both nodes of an appliance.

Dynamic node affinity is only available to block storage resources with the
node affinity not manually set by means of PSTCLI or REST API.

If the node affinity was manually set:


 The volume must be unmapped and then remapped to the host.
 The operation resets the affinity to the system-selected value.
 Only multipathing is impacted by the operation.

 The system does not need to trespass any volume between nodes.

Performance Policy

All block storage resources in a PowerStore system have a defined


performance policy. By default, this policy is set to Medium.

The performance policy does not have any impact on system behavior
unless some volumes have been set to Low Performance Policy, and
other volumes are set to Medium or High.
 During times of system resource contention, PowerStore devotes
fewer compute resources to volumes with Low Performance Policy.
 Reserve the Low policy for volumes that have less-critical performance
needs.

PowerStore File Storage Resources Recommendations

File storage resources are accessed through NAS protocols, such as NFS
and SMB.
 A NAS server can provide access to a file system using all NAS
protocols simultaneously, if configured for multiprotocol access.


A single NAS server uses compute resources from only one node of the
PowerStore appliance.
 It is recommended to create at least two NAS servers (one on each
node) so that resources from both nodes contribute to file
performance.
 If one PowerStore node is busier than the other, manually move NAS
servers to the peer node to balance the workload.

 All the file systems that are served by a given NAS server move
with the NAS server to the other node.



Knowledge Check: Storage Configuration Recommendations


Environment

A company needs to provide infrastructure for an Oracle database


application deployment in their data center. A Dell Unity XT 680 system
must be scaled up with a 25-slot DAE to address the storage requirements
and the expected data growth. To take advantage of Flash performance
within the project budget, additional SAS Flash 3 drives are considered to
build the storage pool and accommodate Oracle datafiles.

Instructions

1. Go to the Dell Support web page for the specified model.


2. Locate the Dell Unity XT Storage Systems - Drive and OE
Compatibility Matrix document.
3. Review the supported SAS Flash 3 drives for the specified model.

Activity

Based on the document information, answer the question.

Knowledge Check: Multi-Tier Dynamic Pool

1. A solution architect is sizing a Dell Unity XT solution with a multi-tiered


dynamic pool built with SAS, and NL-SAS drives. The Performance
tier is configured with RAID 5 and the Capacity tier is configured with
RAID 6. Both tiers are configured with 1/32 drives of hot spare
capacity. What is the minimum drive count from each tier to build the
pool?
a. Five SAS drives and six NL-SAS drives


b. Six SAS drives and seven NL-SAS drives


c. Seven SAS drives and eight NL-SAS drives
d. Eight SAS drives and nine NL-SAS drives



Data Services and Array Features


Dell Unity XT Features: FAST VP

Overview

The Unity XT Operating Environment includes the Fully Automated


Storage Tiering for Virtual Pools (FAST VP) as a storage efficiency
feature.

The feature is supported only on the Unity XT Hybrid Flash Array (HFA)
models and UnityVSA.
 Creating mixed pools reduces the cost of a configuration by reducing
drive counts and using larger capacity drives.
 Data requiring the highest level of performance is tiered to Flash, while
data with less activity resides on SAS or NL-SAS drives.

FAST VP helps to reduce the Total Cost of Ownership (TCO) by


maintaining performance and efficiently using the configuration of a pool.
 For efficiency, FAST VP uses low cost spinning drives for less active
data.
 The most active data is placed on the highest performing drives
according to the storage resource’s tiering policy.

Tiering Policy

FAST VP Tiering policies determine how the data relocation takes place
within the storage pool. Access patterns for all data within a pool are
compared against each other.

The four available FAST VP Tiering policies are displayed here.


 Use the Highest Available Tier policy when quick response times are
a priority.
 The Auto-Tier policy automatically relocates data to the most
appropriate tier based on the activity level of each data slice.


 The Start High, then Auto-Tier policy is recommended for newly
created pools. It combines the advantages of the Highest Available
Tier and Auto-Tier policies.
 Use the Lowest Available Tier policy when cost effectiveness is the
highest priority. With this policy, data is initially placed on the lowest
available tier with capacity.

The default FAST VP policy for all storage objects is “Start High then
Auto-Tier.” This policy places initial allocations for the object in the highest
tier available. FAST VP monitors the activity of the object to determine the
correct placement of data as it ages.
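As a rough model (not Dell's implementation), the initial-placement behavior of the tiering policies can be sketched like this; Auto-Tier's activity-based relocation is deliberately not modeled:

```python
# Illustrative model of FAST VP initial slice placement per tiering policy.
# Tier names and policy strings are simplifications for this sketch only.

TIERS = ["flash", "sas", "nl_sas"]  # ordered highest performance first

def initial_tier(policy, free_slices):
    """Pick the tier for a new slice; free_slices maps tier -> free slices."""
    if policy in ("highest_available", "start_high_then_auto"):
        candidates = TIERS                   # place as high as capacity allows
    elif policy == "lowest_available":
        candidates = list(reversed(TIERS))   # place as low as capacity allows
    else:
        raise ValueError(f"policy not modeled: {policy}")
    for tier in candidates:
        if free_slices.get(tier, 0) > 0:
            return tier
    raise RuntimeError("no free capacity in any tier")

free = {"flash": 1, "sas": 10, "nl_sas": 100}
print(initial_tier("start_high_then_auto", free))  # flash
print(initial_tier("lowest_available", free))      # nl_sas
```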

Tiering Process

FAST VP tracks data in a storage pool at a granularity of 256 MB (a slice).


 The feature ranks slices according to their level of activity and how
recently that activity took place.
 The ranking process is automatic and requires no intervention.
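Because FAST VP works at a fixed 256 MB slice granularity, the slice count for a resource follows directly from its size, and ranking is essentially a sort by activity. A toy illustration (the real ranking also weighs how recent the activity was):

```python
import math

SLICE_MB = 256  # FAST VP tracking and relocation granularity

def slice_count(lun_gb):
    """Number of 256 MB slices needed to track a LUN of the given size."""
    return math.ceil(lun_gb * 1024 / SLICE_MB)

# Toy activity ranking: hotter slices (higher access counts) rank first.
access_counts = {0: 12, 1: 500, 2: 3, 3: 90}
ranked = sorted(access_counts, key=access_counts.get, reverse=True)

print(slice_count(1))  # a 1 GB LUN -> 4 slices
print(ranked)          # [1, 3, 0, 2]
```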


FAST VP enables the system to retain the most frequently accessed or


important data on fast, high-performance disks.
 Slices that are heavily and frequently accessed are moved to the
highest tier of storage, typically SAS Flash drives.
 Relocation of slices occurs according to a schedule that is user-
configurable, or can be manually started.

The feature moves the less frequently accessed data to lower-


performance, cost-effective disks.
 The least accessed data is moved to lower performing, higher capacity
storage, typically the NL-SAS drives.


Performance

FAST VP accelerates performance of a specific storage pool by


automatically moving data within that pool to the appropriate drive
technology.
 FAST VP dynamically matches the performance requirements to sets
of drives: SAS Flash, SAS, NL-SAS.
 The movement of data is based on the data access patterns. Dell Unity
FAST VP monitors the data access patterns within pools on the
system.

FAST VP is most effective if data relocations occur during or immediately
after normal daily processing.
 Dell recommends scheduling FAST VP relocation to occur before
backups or nightly batch processing.
 For applications which are continuously active, consider configuring
FAST VP relocation to run constantly.

Dell recommends maintaining at least 10% free capacity in storage pools,


so that FAST VP relocation can occur efficiently. FAST VP relocation
cannot occur when the storage pool has no free space.
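The 10% free-capacity guideline is easy to check with simple arithmetic; the helper below just encodes the rule of thumb from this section:

```python
def fastvp_headroom_ok(total_tb, used_tb, min_free_ratio=0.10):
    """True if the pool keeps at least 10% free for FAST VP relocations."""
    free_ratio = (total_tb - used_tb) / total_tb
    return free_ratio >= min_free_ratio

print(fastvp_headroom_ok(100, 85))  # True  (15% free)
print(fastvp_headroom_ok(100, 95))  # False (5% free)
```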


Dell Unity XT Features: FAST Cache

FAST Cache is a feature that extends the storage system existing DRAM
caching capacity.

Overview

FAST Cache can scale up to a larger capacity than the maximum DRAM
Cache capacity.

 FAST Cache consists of one or more RAID 1 pairs (1+1) of SAS Flash
2 drives.
 Provides both read and write caching.
 For reads, the FAST Cache driver copies data off the disks
being accessed into FAST Cache.
 For writes, FAST Cache effectively buffers the data waiting to
be written to disk.
 Review the supported Drives for FAST Cache.

FAST Cache improves the access to data that is resident in the SAS and
NL-SAS tiers of the pool.
 It identifies a 64 KB chunk of data that is accessed frequently. The
system then copies this data temporarily to FAST Cache.
 The storage system services any subsequent requests for this data
faster from the FAST Cache.

 The process reduces the load on the underlying disks of the LUNs
which will ultimately contain the data.
 The data is flushed out of the cache when it is no longer accessed
as frequently as other data.
 Subsets of the storage capacity are copied to FAST Cache in 64
KB chunks of granularity.


Components

Policy Engine

FAST Cache components

 The FAST Cache Policy Engine is the software which monitors and
manages the I/O flow through FAST Cache.
 The Policy Engine keeps statistical information about blocks on the
system and determines what data is a candidate for promotion.
 A chunk is marked for promotion when an eligible block is accessed
from spinning drives three times within a short amount of time.
 The block is then copied to FAST Cache, and the Memory Map is
updated.
 The policies that are defined in the Policy Engine are system-defined
and cannot be modified by the user.

Memory Map
 The FAST Cache Memory Map contains information of all 64 KB
blocks of data currently residing in FAST Cache.
 Each time a promotion occurs, or a block is replaced in FAST Cache,
the Memory Map is updated.
 The Memory Map resides in DRAM memory and on the system drives
to maintain high availability.
 When FAST Cache is enabled, SP memory is dynamically allocated to
the FAST Cache Memory Map.


 When an I/O reaches FAST Cache to be completed, the Memory Map


is checked.
 The I/O is either redirected to a location in FAST Cache or to the pool
to be serviced.
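The promote-after-three-accesses behavior can be modeled with a counter per 64 KB block. This is an illustrative sketch, not the actual Policy Engine logic:

```python
from collections import defaultdict

PROMOTE_AFTER = 3  # accesses to an HDD-resident 64 KB block before promotion

def track_accesses(access_stream):
    """Return the set of 64 KB blocks promoted to FAST Cache."""
    hits = defaultdict(int)
    promoted = set()
    for block in access_stream:
        if block in promoted:
            continue  # already served from FAST Cache
        hits[block] += 1
        if hits[block] >= PROMOTE_AFTER:
            promoted.add(block)  # copy to FAST Cache; update Memory Map
    return promoted

stream = ["b1", "b2", "b1", "b1", "b3", "b2", "b1"]
print(track_accesses(stream))  # {'b1'} -- only b1 was accessed 3 times
```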

Operations

FAST Cache operations are nondisruptive to applications and users. It


uses internal memory resources and does not place any load on host
resources.

 Host read/write operation


 During FAST Cache operations, the application gets the
acknowledgment for an I/O operation after it is serviced by FAST
Cache. FAST Cache algorithms are designed such that the
workload is spread evenly across all the Flash drives that have
been used for creating the FAST Cache.
 FAST Cache promotion
 During normal operation, a promotion to FAST Cache is initiated
after the Policy Engine determines that 64 KB block of data is being
accessed frequently. For consideration, the 64 KB block of data
must have been accessed by reads and/or writes multiple times
within a short amount of time.
 FAST Cache flush
 A FAST Cache Flush is the process in which a FAST Cache page
is copied to the HDDs and the page is freed for use. The Least
Recently Used [LRU] algorithm determines which data blocks to
flush to make room for the new promotions.
 FAST Cache cleaning

 FAST Cache performs a cleaning process which proactively copies


dirty pages to the underlying physical devices during times of
minimal back-end activity.
Expansion and shrinking of a FAST Cache are possible by adding or
removing drives.
 Each RAID 1 pair is considered a FAST Cache object.


 FAST Cache is expanded in pairs of drives and can be expanded up to


the system maximum.
 Free drives of the same size and type currently used in FAST
Cache must exist within the system.
 A FAST Cache shrink operation can be initiated at any time and is
issued in pairs of drives.

 A shrink operation allows the removal of all but two drives from
FAST Cache.
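The Least Recently Used flush described under Operations can be modeled with an ordered mapping; this is a generic LRU sketch, not FAST Cache internals:

```python
from collections import OrderedDict

class LRUCache:
    """Toy cache: the least recently used page is flushed at capacity."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()

    def access(self, page):
        """Touch a page; returns the page flushed to HDD, if any."""
        if page in self.pages:
            self.pages.move_to_end(page)  # mark as most recently used
            return None
        self.pages[page] = True
        if len(self.pages) > self.capacity:
            flushed, _ = self.pages.popitem(last=False)  # evict the LRU page
            return flushed
        return None

cache = LRUCache(capacity=2)
cache.access("p1")
cache.access("p2")
cache.access("p1")             # p1 becomes most recently used
print(cache.access("p3"))      # flushes p2, the least recently used
```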

Performance

FAST Cache can improve the performance of one or more hybrid pools
within Dell Unity XT HFA systems.

At a system level, FAST Cache reduces the load on back-end hard drives
by identifying when a chunk of data on a LUN is accessed frequently.

FAST Cache can increase the IOPS achievable from the Dell Unity XT
HFA systems.
 As a result, the system has higher CPU utilization since the additional
I/O must be serviced.
 Before enabling FAST Cache on additional pools or expanding the size
of an existing FAST Cache, monitor the average system CPU
utilization to determine if the system can accommodate the additional
load.

Dell recommends placing a Flash tier in the hybrid pool before configuring
FAST Cache on the pool.

Enable FAST Cache on the hybrid pool if the workload in that pool is
highly transactional and has a high degree of locality that changes rapidly.

For applications that use larger I/O sizes, have low skew, or do not
change locality quickly, it is more beneficial to increase the size of the
Flash tier rather than enable FAST Cache.


Dell Unity XT Features: Data Reduction

Overview

All Unity XT physical models provide Inline Data Reduction, which lowers
the cost per unit of storage consumed.
 Data reduction provides capacity savings by reducing the space
required to store a dataset.
 The feature also improves the cost per IOPS through better utilization
of system resources.

The data reduction logic occurs in buffer cache before destaging writes to
disk.
 The logic discards zero blocks (zero detection) and recognizes
common patterns (deduplication) based on some of the most popular
workloads such as virtual environments.
 Processor cycles are only used for the deduplication and compression
logic.

Unity XT systems support Data Reduction with and without Advanced


Deduplication enabled on Traditional or Dynamic hybrid pools.
 To support Data Reduction, the pool must contain a flash tier and the
total usable capacity of the flash tier must meet or exceed 10% of the
total pool capacity.

Support

Data reduction is supported on all thin provisioned storage resources that


are created from All-Flash or Hybrid Flash pools.
 LUNs, and LUNs within a Consistency Group
 Thin clones
 File systems
 VMFS and NFS datastores


The Data Reduction logic applies only to supported storage resources.

The feature is enabled using the supported management interfaces and


affects all new incoming writes.

Data Reduction includes Advanced Deduplication, an optional feature


which expands the deduplication capabilities.


Important: Data reduction is only supported on the


physical Unity systems. The feature is not available for the
UnityVSA systems.

Operations

Data Reduction helps reduce the Total Cost of Ownership (TCO) of a Dell
Unity XT storage system.

Data reduction consists of zero detection, deduplication, and compression.

Data Reduction is achieved using the following methods:


 Deduplication uses algorithms to analyze, perform pattern detection,
and attempts to store only a single instance of a data pattern.
 Zero Detection logically detects and discards consecutive zeros,
saves only one instance, and uses pointers.


 Compression encodes data using fewer bits than the original


representation.
 Advanced Deduplication deduplicates data blocks within a given
storage resource that do not contain internally-defined data patterns.

When Data Reduction is selected on Dell Unity XT systems, it enables


Deduplication, Zero Detection, and Compression.

Advanced Deduplication requires Data Reduction to be enabled on the


resource but can be enabled or disabled independently of the Data
Reduction setting.

Performance

Data reduction increases the overall CPU load on the system when
storage objects service reads or writes of reducible data and may increase
latency.

Before enabling data reduction on a storage object, ensure that the


system has available resources to support data reduction.

Enable data reduction on a few storage objects at a time.


 Then monitor the system to be sure it is still within recommended
operating ranges, before enabling data reduction on more storage
objects.

For new storage objects, or storage objects that are populated by
migrating data from another source, it is recommended to:
 Create the storage object with data reduction and advanced
deduplication enabled before writing any data.
 This provides maximum space savings with minimal system impact.
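Data reduction savings are commonly reported as a ratio of logical to physical capacity; the arithmetic is straightforward:

```python
def reduction_ratio(logical_tb, physical_tb):
    """Data reduction ratio, e.g. 3.0 means 3:1."""
    return logical_tb / physical_tb

def savings_pct(logical_tb, physical_tb):
    """Percentage of logical capacity saved on disk."""
    return 100 * (1 - physical_tb / logical_tb)

print(reduction_ratio(30, 10))       # 3.0 -> a 3:1 ratio
print(round(savings_pct(30, 10), 1))  # 66.7 -> ~66.7% less physical space
```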


Go to: The About Data Reduction and Advanced


Deduplication section of the Dell Unity XT Family
Configuring Pools documentation.
For a deeper dive, review the Dell Unity XT: Data Reduction
whitepaper.

PowerStore Features: Data Efficiency

PowerStore provides data-reduction capabilities such as zero-detect,


compression, and deduplication.
 Zero Detection logically detects and discards consecutive zeros,
saves only one instance, and uses pointers.
 Deduplication uses algorithms to analyze, perform pattern detection,
and attempts to store only a single instance of data. In general,
deduplication can work at the block, bit, or file level. In PowerStore,
deduplication works at the block level with 4 KB granularity.
 Compression uses physical hardware to encode data using fewer bits
than the original representation. The compression hardware offloads
the compression operations from the appliance processors to save
CPU cycles.

These features work together to optimize capacity and improve storage
efficiency. The reduction in the physical amount of storage required to
save data results in a lower total operational cost.

Data reduction is integrated into the PowerStore architecture and is


always active. PowerStore controls these features and no administration is
required.

 During periods of high write activity, PowerStore may defer the


deduplication of data, and devote those resources to servicing the
client workload.
 During periods of low activity, PowerStore will use excess resources to
re-examine any data written during these periods for duplicates, to
regain any space savings that were not initially realized.
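Block-level deduplication at 4 KB granularity can be illustrated with a hash-indexed block store. This is a generic sketch of the technique, not PowerStore's internal design:

```python
import hashlib

BLOCK = 4096  # PowerStore deduplicates at 4 KB granularity

def write_with_dedup(data, store):
    """Split data into 4 KB blocks; store each unique block only once."""
    refs = []
    for i in range(0, len(data), BLOCK):
        chunk = data[i:i + BLOCK]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:
            store[digest] = chunk  # first copy consumes physical space
        refs.append(digest)        # duplicates cost only a pointer
    return refs

store = {}
data = b"A" * BLOCK + b"B" * BLOCK + b"A" * BLOCK  # 12 KB logical
refs = write_with_dedup(data, store)

print(len(refs), len(store))  # 3 logical blocks, 2 unique blocks stored
```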


Go to: For a deeper dive, review the PowerStore Data


Efficiencies whitepaper.

Dell Unity XT Features: Snapshots

Overview

Snapshots provide point-in-time copies of block and file resources.

 The supported source of snapshots are LUNs, consistency group


LUNs, file systems, and VMware VMFS and NFS datastores.

Snapshot images are created manually or through a schedule.


Unity XT snapshots

Use Cases
 The feature provides local data protection for the Unity XT platform.
 Restore source data to a known point-in-time.
 Restoration is instant and does not use extra storage space.
 Test and backup operations

 Snapshots support read-only or read/write data access.

Support

The Snapshots feature is supported on all physical Unity XT models and
UnityVSA.
 The feature is fully managed from Unisphere, UEMCLI commands, and
REST API calls.


 Supports creation of hierarchical snapshots (snapshots of snapshots).


 Up to 10 levels deep
 Supports automatic snapshot deletion:
 Upon a specific expiration time
 When defined storage consumption thresholds are reached
 The feature is the foundation for asynchronous replication

Considerations

Snapshots that are taken of a consistency group apply to all LUNs


associated with the group.

 A LUN with one or more existing snapshots cannot be added to a


consistency group.
 No LUNs can be added to a consistency group that has one or more
snapshots.
 No LUNs can be removed from a consistency group with one or more
snapshots.

Restoring a consistency group snapshot results in all the members being


restored.

Performance

Dell recommends including a Flash tier in a hybrid pool where snapshots


are active.

Consider the overhead of snapshots when planning both performance and


capacity requirements for the storage pool.
 Snapshots increase the overall CPU load on the system, and increase
the overall drive IOPS in the storage pool.
 Snapshots also use pool capacity to store the older data being tracked.
 Tracking increases the amount of capacity that is used in the pool until
the snapshot is deleted.


Before enabling snapshots on a storage object, Dell recommends


monitoring the system to ensure that existing resources can meet the
additional workload requirements.
 Enable snapshots on a few objects at a time, and then monitor the
system to be sure it is still within recommended operating ranges.

Dell recommends staggering snapshot operations (creation, deletion, and


so on.)
 Staggering operations is accomplished by using different snapshot
schedules for different sets of storage objects.

It is also recommended to schedule snapshot operations after any FAST


VP relocation has completed.
 Snapshot deletions are performed by the system asynchronously;
when a snapshot is being deleted, it is marked “Destroying.”
 If the system is accumulating “Destroying” snapshots over time, it may
be an indication that existing snapshot schedules are too aggressive.

Taking snapshots less frequently provides more predictable levels of
performance.

Dell Unity throttles snapshot delete operations to reduce the impact to


host I/O.
 Snapshot deletes occur more quickly during periods of low system
utilization.

Caution: Snapshots are not a substitute for storage backup


operations. They are not full copies of the original data.
Snapshots are partially derived from the source storage
resource. If the source becomes inaccessible, its derivative
snapshots are also inaccessible.

Dell Unity XT Features: Thin Clones


Overview

Base LUN family for LUN1 includes all the snapshots and Thin Clones

A Thin Clone is a read/write copy of a thin block storage resource that
shares blocks with the parent resource.
 Thin Clones use snapshot technology to provide space-efficient clones
of block objects.
 The snapshots and Thin Clones that are created from a thin LUN,
Consistency Group, or VMware VMFS datastore form a hierarchy.

A Base LUN family is the combination of the Base LUN, and all its
derivative Thin Clones and snapshots.
 The Base LUN family includes snapshots and Thin Clones based on
child snapshots of the storage resource or its Thin Clones.
 The original or production LUN for a set of derivative snapshots, and
Thin Clones is called a Base LUN.
 A snapshot of the LUN, Consistency Group, or VMFS datastore that is
used for the Thin Clone create and refresh operations is called a
source snapshot.

The original parent resource is the original parent datastore or Thin Clone
for the snapshot on which the Thin Clone is based.

Thin Clones are supported on all Dell Unity XT systems and Dell
UnityVSA.


Capabilities

Thin Clones are created from attached read-only or unattached snapshots


with no auto-deletion policy and no expiration policy set.

Thin Clone Capabilities

Thin Clone operations: Users can create, refresh, view, modify, expand,
and delete a thin clone.

Data Services: All data services remain available on the parent resource
after the thin clone creation. Most LUN data services can be applied to
thin clones: host I/O limits, host access configuration, manual/scheduled
snapshots, and replication.

Space Savings: Only changed data consumes space.

Maximum number of thin clones per Base LUN: 16

Snapshots per LUN: 256

Snapshots + Thin Clones per LUN: 256

Performance

With thin clones, users can make space-efficient copies of the production
environment.

Thin clones are based on pointer-based technology, which means a thin
clone does not consume much space from the storage pool. Thin clones
share space with the base resource rather than allocating a copy of the
source data, which provides space savings to the user.

Dell recommends including a Flash tier in a hybrid pool where Thin Clones
are active.


When implementing, consider the overhead of snapshots when planning
performance and capacity requirements for a storage pool that contains
Thin Clones.

Data available on the source snapshot is immediately available to the Thin


Clone. The Thin Clone references the source snapshot for this data. Data
resulting from changes to the Thin Clone after its creation is stored on the
Thin Clone.
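The read path just described — clone-owned data first, shared source-snapshot data otherwise — amounts to a two-level lookup. The names here are illustrative only:

```python
def read_block(offset, clone_writes, source_snapshot):
    """Serve a read from the thin clone's own writes, else from the
    shared source snapshot the clone still references."""
    if offset in clone_writes:
        return clone_writes[offset]  # changed after clone creation
    return source_snapshot[offset]   # unchanged: shared with the parent

source = {0: "base-0", 1: "base-1"}
clone = {1: "clone-1"}               # only block 1 was overwritten

print(read_block(0, clone, source))  # base-0 (shared block)
print(read_block(1, clone, source))  # clone-1 (clone-owned block)
```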

PowerStore Features: Snapshots and Thin Clones

Snapshots

PowerStore arrays also provide local data protection through snapshots.

A snapshot saves the state of the storage resource, and all the files and
data within it, at a particular point in time.
 Supported storage resources are volume, volume group, virtual
machine and file system.
 Snapshots can be created manually, or by applying a protection policy.

Snapshots provide local data protection. If the storage resource is


corrupted or deleted, the data can be restored to an earlier point in time
from the snapshot.


Snapshots

Thin Clones

PowerStore systems support thin clones of NAS Server, file system, file
system snapshot, volume, volume group, or volume/volume group
snapshot.

The thin clone is not a full backup of the original resource.


 It is a read/write copy of the storage resource that shares blocks with
the parent resource.
 Access to the original resource is maintained.


Base Volume Family hierarchy

Use Cases:
 Development and test environments
 Parallel processing
 Online backup
 System deployment

Capabilities

Snapshots
 Snapshots are NOT full copies of the original data and should not be
relied on for mirrors or disaster recovery.
 Volume snapshots are read-only. You cannot add to, delete from, or
change the contents of a Volume snapshot.
 File system snapshots can be refreshed.
 A snapshot of a snapshot cannot be taken.


Hosts or NAS clients have no direct access to a snapshot. To access the


data in a snapshot, clone the snapshot and:
 For a clone of a volume snapshot: Map to a host.
 For a clone of a file system snapshot: Create an NFS export or SMB
share.

Thin Clones

With thin clones, you can establish hierarchical snapshots to preserve


data over different stages of data changes within a Base Volume Family.
 Data available on the source snapshot at the moment of thin clone
creation is immediately available to the thin clone. The thin clone
references the source snapshot for this data.
 Data resulting from changes to the thin clone after its creation is stored
on the thin clone. Changes to the thin clone do not affect the source
snapshot, because the source snapshot is read-only.

Performance

All storage resources in PowerStore are thinly provisioned and space


efficient, including snapshots and thin clones.

Creation of a snapshot or thin clone requires only a quick duplication of


pointers.

After this action, they behave as independent storage resources and do


not impact the performance of the source resource.

Comparison

Description            Snapshots                  Thin Clones

Space-Efficient Data   Yes                        Yes

Creation Time          Instantaneous              Instantaneous

Delete Limitations     Automatic deletion of the  Any copy in the tree can
                       source snapshot of a Thin  be deleted. The Base LUN
                       Clone is not allowed.      cannot be deleted.

Topology               Snap-of-snap               Nested hierarchy of
                                                  snaps and Thin Clones

Any-Any Refresh        From base LUN only         Yes, any Thin Clone can
                                                  be refreshed from any
                                                  snapshot.

Restore                Yes, snap to base LUN      Yes, must create a snap
                                                  first, then restore the
                                                  primary.

Use Cases              Data Protection            Test/Dev
Dell Unity XT Features: Asynchronous Replication

The Dell Unity XT asynchronous replication feature creates synchronized
redundant data on a local or remote system.

Remote Replication

Asynchronous replication is primarily used to replicate data over long


distances.
 The remote replication traffic can be throttled to reduce the rate at
which data is copied.

Asynchronous replication does not impact host I/O latency.


 The host writes are acknowledged after they are saved to the local
storage resource.
 Write operations are not immediately replicated to a destination
resource. All writes are tracked on the source.


 The data difference, or delta, is replicated during the next


synchronization cycle.
Fundamental to the asynchronous remote replication is connectivity and
communication between the source and destination systems.
 A data connection is required to carry the replicated data, and it is
formed from replication interfaces.
 These are IP-based connections which are established on each
system.
 A communication channel is also required to manage the replication
session.

 The management channel is established on a Replication


Connection.
 The channel defines the management interfaces and credentials for
the source and destination systems.
Asynchronous replication supports a configurable Recovery Point
Objective (RPO)9.
 The RPO time, or data delta, affects the amount of data that is
replicated during the next synchronization.
 RPO also represents the amount of potential data loss if a disaster
scenario were to occur.

 The minimum and maximum values are 5 minutes and 1440


minutes (24 hours).

9
Recovery Point Objective (RPO) is the acceptable amount of data, which
is measured in units of time, which may be lost due to a failure.
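The supported RPO bounds quoted above (5 to 1440 minutes) can be captured in a small validation helper. This is illustrative only, not part of any Dell API:

```python
RPO_MIN_MINUTES = 5
RPO_MAX_MINUTES = 1440  # 24 hours

def validate_rpo(minutes):
    """Raise if an async-replication RPO is outside the supported range."""
    if not RPO_MIN_MINUTES <= minutes <= RPO_MAX_MINUTES:
        raise ValueError(
            f"RPO must be {RPO_MIN_MINUTES}-{RPO_MAX_MINUTES} minutes"
        )
    return minutes

print(validate_rpo(60))  # 60 -- an hourly synchronization cycle
```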


Local Replication

Asynchronous replication is also used to replicate storage resources


locally within the same Dell Unity XT system.
 The management and data replication paths are all internal within the
single system.

The storage resources are replicated from one storage pool to another
within the same storage system.

 The replication method also uses snapshots on both source and


destination pools to track changes and data transfers.
 The feature is helpful if a storage resource must be moved due to pool
capacity reasons, or when changing the type of storage the resource
uses.
 For example, a resource could be moved from a pool having
performance disks to a pool having capacity disks for archival
reasons.
 Local replication supports all storage resource types that can be
replicated asynchronously.

 LUNs and Consistency Groups


 Thin Clones
 VMware VMFS and NFS datastores
 NAS servers and file systems

Performance

Dell recommends including a Flash tier in a hybrid pool where


asynchronous replication is active. Creating a Flash tier applies to both the
source and the destination pools.

Dell recommends configuring multiple replication interfaces per SP and


distributing replication sessions across them.
 Configure LACP to aggregate bandwidth for a replication interface.
 Also, configure Jumbo frames (MTU 9000) when possible.
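The benefit of jumbo frames can be seen in a back-of-the-envelope wire-efficiency calculation. This is a sketch with assumed IPv4/TCP header sizes, not a measurement of any specific array:

```python
def wire_efficiency(mtu, l2_overhead=18, l3l4_overhead=40):
    """Fraction of on-wire bytes carrying payload for a full frame.
    18 B = Ethernet header + FCS; 40 B = IPv4 + TCP headers (no options)."""
    payload = mtu - l3l4_overhead
    return payload / (mtu + l2_overhead)

standard = wire_efficiency(1500)   # ~0.962 payload efficiency
jumbo = wire_efficiency(9000)      # ~0.994 payload efficiency
print(round(standard, 3), round(jumbo, 3))
```

Jumbo frames also reduce the per-packet processing load, which matters more for sustained replication transfers than the few percent of wire efficiency alone.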


Consider the overhead of snapshots when planning performance and capacity requirements for a storage pool that contains replicated objects.
 Asynchronous replication takes snapshots on the source and
replicated storage objects.
 Snapshots are required to create the point-in-time copy, determine
the changed data to transfer, and maintain consistency during the
transfer.
 Avoid setting smaller RPO values on replication sessions.

 Smaller RPOs result in more snapshot operations, and do not make


replication sessions transfer data more quickly.
 Choosing larger RPOs, or manually synchronizing during non-
production hours, may provide more predictable levels of
performance.
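The snapshot overhead scales directly with how often the RPO forces a cycle. A quick illustrative calculation (hypothetical helper, not a Dell utility) shows why small RPOs multiply snapshot operations:

```python
def sync_cycles_per_day(rpo_minutes):
    """Number of replication cycles per day driven by the RPO; each cycle
    performs snapshot create/replace operations on source and destination."""
    return (24 * 60) // rpo_minutes

print(sync_cycles_per_day(5))    # minimum RPO: 288 cycles per day
print(sync_cycles_per_day(60))   # 1-hour RPO: 24 cycles per day
```

Moving from a 5-minute to a 1-hour RPO cuts the daily snapshot activity by an order of magnitude without making any single transfer slower.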
When possible, fill the source storage object with data before creating the
replication session.
 Filling the storage object before creating the replication session is the
fastest way to populate the destination storage.
 The data is transmitted to the destination storage object during the
initial synchronization.

Tip: Asynchronous replication is available for all physical


Dell Unity XT systems and the Dell UnityVSA systems. The
Dell Unity XT asynchronous replication feature is supported
in many different topologies.


Dell Unity XT Features: Synchronous Replication

Remote Replication

Synchronous replication is a data protection solution which replicates data


to a remote system.
 The feature ensures that each block of data that is written is saved to
the local and remote systems before the write is acknowledged to the
host.
 Since each write must be saved locally and remotely, added
response time occurs during each transaction.
 This response time increases as distance increases between
remote images.
 Synchronous replication ensures zero data loss (RPO set to zero)
between the local source and remote replica in disaster conditions.

 Synchronous replication architecture uses Write Intent Logs (WIL)


on each of the systems that are involved in the replication.
 The WILs hold fracture logs that track changes to the source
storage resource should the destination storage resource
become unreachable.
 When the destination becomes reachable again, synchronization
between the source and replica automatically recovers using the
fracture log.
Synchronous replication has a distance limitation based on latency
between systems.
 This limitation is generally 60 miles (100 km) between sites.
 Latency of less than 10 milliseconds for the link between the local and
remote systems is recommended.
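The distance guideline follows from the physics of the link: every acknowledged write must wait for at least one round trip of propagation delay. A simple estimate, assuming light travels roughly 200 km/ms in optical fiber (the function name is illustrative):

```python
def added_write_latency_ms(distance_km, fiber_speed_km_per_ms=200.0):
    """Minimum round-trip propagation delay that synchronous replication
    adds to every acknowledged write, ignoring switch and array service time."""
    return 2 * distance_km / fiber_speed_km_per_ms

print(added_write_latency_ms(100))  # ~1.0 ms at the 100 km guideline
```

Real links add switching, queuing, and remote service time on top of this floor, which is why the recommendation is a measured latency below 10 ms rather than a distance alone.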

 A data connection is required to carry the replicated data.


 The data connection is established using Fibre Channel
connectivity between the replicating systems.
 A communication channel is also required to manage the replication
session.


 For synchronous replication, part of the management is provided
using Replication Interfaces.
 The management communication between the replicating systems
is established on a Replication Connection.

Performance

Dell recommends including a Flash tier in a hybrid pool where


synchronous replication is active. Creating a Flash tier applies to both the
source and the destination pools.

Synchronous replication transfers data to the remote system over the first
Fibre Channel port on each SP.
 When planning to use synchronous replication, it may be appropriate
to reduce the number of host connections on this port.

When possible, create the synchronous replication session before filling


the source storage with data.
 Filling the source object alleviates the need to perform initial
synchronization of the replication session.
 Typically, this is the fastest way to populate the destination storage
object with synchronous replication.

Tip: Synchronous replication is only available for physical


Dell Unity XT systems. The Dell Unity XT synchronous
replication feature is supported in two topologies. Dell Unity
XT file remote protection supports the combination of
synchronous and asynchronous remote replication sessions
together.

Dell Unity XT Data at Rest Encryption (D@RE)


Overview

Data At Rest Encryption in SAS Controllers

The Dell Unity XT series platform offers controller-based Data At Rest


Encryption (D@RE).
 D@RE uses hardware that is embedded in all the SAS I/O modules
and Storage Processors.
 The encryption and decryption occur with minimal impact on data
services, such as replication and deduplication.
 D@RE is simpler, lower cost, and more maintainable than self-
encrypting drives.

 The feature is drive vendor and drive type agnostic, eliminating


drive-specific vendor overhead.


D@RE provides protection against data being read from a lost, stolen, or
failed disk drive.
 Data is encrypted using 256-bit Advanced Encryption Standard (AES)
encryption algorithms.
 Encryption standard is based on Federal Information Processing
Standard (FIPS) 140-2 Level 1 validation.
 Compliance is within industry or government data security regulations
that require or suggest encryption:

 HIPAA (healthcare)
 PCI DSS (credit cards)
 GLBA (finance)

External Key Management

The Dell Unity XT series supports External Key Management to provide


extra security when using D@RE. These products use a single
management station for data center keys.

External key management technology is required for Financial and


Payment Card Industry, the Military, and Federal compliance. Data is
protected from access if the storage system is stolen, or while a planned
relocation to a different site is happening.

D@RE External Key Manager product support:


 Dell CloudLink
 Thales CipherTrust Manager and Vormetric
 Unbound Key control
 Gemalto/SafeNet
 IBM Security Key Lifecycle Manager


Considerations

Dell recommends ordering Dell Unity XT systems as encryption-enabled,


when appropriate for your environment.
 Data at Rest Encryption (D@RE) does not impact performance.
 Encryption can only be enabled at the time of system installation with
the appropriate license.

When encryption is enabled, an internal key manager generates and


manages encryption keys.
 Dell recommends making external backups of the encryption keys.
 Backups should be taken after system installation or immediately
following any change in the system’s drives.

 For example, creating or expanding a storage pool, adding new


drives, replacing a faulted drive.
Securely decommissioning arrays is accomplished by deleting pools.
 This process deletes all drive encryption keys and most often
eliminates the necessity to shred disk drives.

No data-in-place upgrades are supported and changing the encryption


state requires a destructive reinitialization.

PowerStore Data Encryption

Overview

PowerStore uses Data At Rest Encryption (D@RE) to protect against


data tampering and data theft. D@RE guards against reading content
from any of the drives, even if the drives are removed from the
PowerStore or are physically disassembled.


Data Encryption protects against data tampering and data theft in the
following use cases:
 Stolen drive: A drive is stolen from a system, and access to the
data on the drive is attempted.
 During transit: Attempts to read data during transit of any drive to
another location.
 Discarded drive: Attempts to read data even if drive is broken or
discarded.

Components required for encryption:


 Self-Encrypting Drives (SEDs). An SED performs Advanced Encryption
Standard (AES) 256-bit encryption on all data that is stored on that
drive.
 Key Management Service (KMS)

Caution: Due to the possibility of data loss, the keystore file


must be backed up and saved before and after adding or
removing any drive in the system.

SEDs

Encryption is enabled for all PowerStore systems except in countries


where encryption is not allowed, or is restricted by the United States
federal government. PowerStore encrypts the data as close to its origin as
possible by using SEDs. SEDs have dedicated hardware on each drive to
encrypt and decrypt data.

 All PowerStore drives ship with D@RE enabled and are FIPS-140-2
Level 2 certified.
 Encryption is automatically activated during the initial configuration of
a cluster.
 For countries where encryption is prohibited, non-encrypted
systems are available.


 In countries where encryption is allowed, there is no way to


disable data encryption.
 When a new appliance joins an existing encrypted cluster, a check is
run to ensure that the appliance is capable of encryption.

Important: Adding non-SED drives to an appliance is not
supported. Adding an unencrypted appliance to an
encrypted cluster is not supported.

KMS

Encryption key management may be:


 Internal to PowerStore:
 PowerStore uses an embedded Key Management Service (KMS)
that resides in the Base System Container (BSC) of the active
node of each appliance. The BSC on the active nodes work
together and automate the management of all encryption keys.
 Each appliance has an independent keystore. All keys are
aggregated to the primary appliance in a cluster. A collective cluster
backup of the keys can be run from the primary appliance.
 External to PowerStore:

 Uses Key Management Interoperability Protocol (KMIP).


 Key Encryption Key (KEK) is moved off the PowerStore appliance
to an external key management server.
 External key management establishes a single management station
for data center keys.

Caution: If the KMS fails or the keystore file cannot be read,


encrypted data cannot be retrieved. Back up the keystore
before and after you add or remove drives.


Dell Unity XT Features: Host I/O Limits

Overview

Host I/O Limit set by throughput, bandwidth or both

Unity XT series platform Host I/O Limits is a feature that limits initiator I/O
operations to the Block storage resources: LUNs, snapshots, VMFS
datastores, thin clones, and vVol datastores.

Host I/O Limits can be set on physical or virtual deployments of the Unity
platform.
 Host I/O Limit is either enabled or disabled in a Unity XT or UnityVSA
system. All Host I/O Limits are active if the feature is active.
 Host I/O Limits are active when policies are created and assigned to
the storage resources. The feature provides system-wide Pause and
Resume controls.


Host I/O limit policies are customizable for absolute I/Os, density-based
I/Os, and burst I/Os.

Only one I/O limit policy can be applied to an individual LUN or a LUN that
is a member of a consistency group. When an I/O limit policy is associated
with multiple LUNs, it can be either shared or nonshared.

The storage administrator can set limits by:


 Throughput in IOPS (I/O operations per second)
 Bandwidth in KBPS (kilobytes per second) or MBPS (megabytes per
second)
 Combination of both limits

Important: If both throughput and bandwidth thresholds are


set in a policy, the system limits the traffic according to the
threshold that is reached first.
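The "first threshold reached" behavior can be sketched as a simple check. This is an illustrative model of the policy logic, not the array's implementation:

```python
def limited(iops, kbps, max_iops=None, max_kbps=None):
    """Return True if either configured threshold is exceeded; when both
    are set, whichever limit is reached first throttles the traffic."""
    over_iops = max_iops is not None and iops > max_iops
    over_kbps = max_kbps is not None and kbps > max_kbps
    return over_iops or over_kbps

# 10,000 IOPS of 64 KB I/O is 640,000 KB/s: well under the IOPS cap,
# but the bandwidth cap of 500,000 KB/s trips first.
print(limited(10_000, 640_000, max_iops=20_000, max_kbps=500_000))
```

This is why large-block workloads tend to hit a bandwidth limit long before an IOPS limit, and small-block workloads the reverse.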

Performance

Follow the recommendations if considering using the feature.

 Dell recommends setting Host I/O Limits on workloads which may


monopolize pool resources and starve other applications of their
required performance.
 Large-block applications can monopolize I/O.
 Small-block applications get starved for access.
 There are several use cases where Host I/O Limits can be effective.
 For example, limiting the bandwidth available to large-block
applications which may be increasing the latency on other small-
block workloads.
 Configure Host I/O Limits on LUNs or datastores that are consuming a
large portion of the system’s resources and reducing the performance
of other resources on the system.
 Another use case for Host I/O Limits is placing limits on snapshots.


 The capability is applicable to attached snapshots which are being


used for backup and testing purposes.
 Host I/O Limits can be applied to the snapshots to prioritize host
activity towards the parent LUNs and datastores.

Application Considerations

Dell best practices guidelines must be followed to address PowerStore and Unity XT application use considerations.

VMware Datastores

In Unity XT systems, when provisioning file storage for an ESXi host:


 Dell recommends creating VMware NFS Datastores instead of
general-purpose NFS file systems.
 VMware NFS Datastores are optimized to provide better
performance with ESXi.
 When creating VMware NFS Datastores, Dell recommends using the
default 8K host I/O size.

 Only choose a different host I/O size if all applications that are
hosted in the NFS Datastore primarily use the selected I/O size.

When configuring vVol (File) datastores, it is recommended to create at
least two vVol-enabled NAS Servers, one on SPA and one on SPB.

PowerStore is tightly integrated with VMware applications.


 For other recommended configurations for VMware ESXi and vSphere,
see the document Dell PowerStore: Virtualization Integration.

AppsON

When PowerStore X models are used with AppsON (hosting VMs on


PowerStore), other configuration settings are recommended to provide
optimal performance.
 These configurations include creating additional internal iSCSI targets,
increasing internal iSCSI queue depths, and enabling Jumbo frames.


 These changes can be applied as part of the Initial Configuration


Wizard, or manually.
 These changes should be applied before provisioning any storage
resources.

Deep Dive: For detailed configuration steps, see the


Knowledge Base article HOW17288 or the document Dell
PowerStore: Virtualization Integration.

Transactional

Dell Unity XT systems require high concurrency to deliver the maximum


performance (IOPS).
 This is naturally achieved when connecting many hosts with many
LUNs and having multiple active paths to the same storage object.
 For systems that will be configured with only a few hosts and/or LUNs,
host HBA settings may need to be adjusted to increase concurrency.
 Consult the documentation for your OS or HBA on how to adjust LUN
queue depth settings.
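The relationship between concurrency and IOPS follows Little's law: concurrency = IOPS × latency. A small sketch (illustrative function, not a sizing tool) shows why a few hosts with shallow queues cannot drive a system to its maximum:

```python
def achievable_iops(outstanding_ios, latency_ms):
    """Little's law bound: the IOPS a host can drive is limited by its
    outstanding I/O count (queue depth) divided by per-I/O response time."""
    return outstanding_ios / (latency_ms / 1000.0)

# One LUN with queue depth 32 at 0.5 ms latency tops out at 64,000 IOPS,
# no matter how capable the array is.
print(achievable_iops(32, 0.5))
```

Adding LUNs, hosts, or active paths raises the total outstanding I/O count and therefore the achievable IOPS, which is the effect the recommendation above relies on.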

Sequential

For workloads which require high bandwidth for sequential streaming data,
it may be beneficial to use thick storage objects in Dell Unity XT systems.
 Thick storage objects fully allocate the capacity before application use,
and in a consistent manner, which can improve subsequent sequential
access.
 Note that thick storage objects are not compatible with most features
(Data Reduction, Snapshots, Asynchronous Replication), so only use
thick storage objects if these features will not be utilized.


Go to: PowerStore is well integrated with the most widely


used enterprise applications. For best practice
recommendations for specific applications, see the
solutions-focused white papers available on the PowerStore
Info Hub.

Knowledge Check: Data Services and Array Features


1. Which Dell Unity XT HFA feature is recommended to improve the


performance of hybrid pools used for applications that use larger I/O
sizes?
a. FAST VP
b. Host I/O Limits
c. FAST Cache
d. Data Reduction


External Host Considerations

Host Configurations

Some configuration changes might be necessary to access PowerStore


volumes efficiently.

Host operating systems may not apply the appropriate settings when
mounting PowerStore volumes or configuring access to Unity XT block
LUNs.

For optimal performance, check that the appropriate configuration


changes have been applied to all hosts that are connected to a
PowerStore.

The PowerStore Host Configuration Guide has recommendations for


the following:
 MPIO settings: Path checker and timeout values.
 iSCSI settings: Time-out and queue depth values; disabling delayed
ACK.
 Fibre Channel settings: Queue depth values.
 Network settings: Jumbo frames and flow control.
 Unmap operations.
 VMware ESXi claim rules.

Host alignment for Unity XT block LUNs only needs to be done for host
operating systems that still use a 63-block disk header. If alignment is
required, perform the operation using a host-based method, and align with
a 1 MB offset.


Deep Dive: For other recommended configurations review


the PowerStore Host Configuration Guide. For Unity XT
systems view the appropriate Host Configuration Guide
on the Dell support product family page to determine if
alignment is required for the operating system, and how to
perform the operation.

Host File Systems

When a host is attached to a PowerStore block volume, the host can use
this volume as a raw device, or it can create a local file system on the
volume first.

When a local file system is being created, it is recommended to disable


SCSI unmap.
 When PowerStore creates a volume, all space is already unmapped.
 The host-based unmap is redundant and generates unnecessary load
on PowerStore.

When creating a local file system, it is recommended to use a file system


block size (allocation unit) of 4 KB, or a larger size that is an even multiple
of 4 KB.

It is typically not necessary to perform alignment when creating a local file


system. If alignment is performed, it is recommended to use an offset of 1
MB.
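These block-size and offset recommendations reduce to one arithmetic check: the value must be an even multiple of the boundary. A minimal illustrative helper (the function name is hypothetical):

```python
def aligned(value_bytes, boundary_bytes=4096):
    """True when a file-system block size or partition offset is an even
    multiple of the boundary (4 KB blocks; 1 MiB = 1,048,576 B offsets)."""
    return value_bytes % boundary_bytes == 0

print(aligned(8192))                             # 8 KB block: multiple of 4 KB
print(aligned(1_048_576, boundary_bytes=1_048_576))  # 1 MiB offset: aligned
print(aligned(63 * 512))                         # legacy 63-sector offset: not aligned
```

The last line shows why legacy 63-sector partition starts cause misaligned I/O: 32,256 bytes is not a multiple of 4 KB, so every host block straddles two back-end blocks.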

Deep Dive: See the document PowerStore Host


Configuration Guide on Dell.com/powerstoredocs for
commands to disable unmap on your host operating system.


VMware Integration

Multipathing

VMware ESXi hosts using iSCSI/FC/FCoE storage might experience
latency issues with no signs of latency on the SAN side.

To address the issue, Dell recommends configuring ESXi with the Round
Robin Path Selection Plug-in (PSP) with an IOPS limit of 1.
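The effect of the IOPS limit can be modeled with a short simulation. This is a conceptual sketch of the Round Robin path-selection behavior, not VMware's implementation; path names are made up:

```python
from itertools import cycle

def distribute_ios(paths, n_ios, iops_limit=1):
    """Sketch of Round Robin path selection: after `iops_limit` I/Os the
    plug-in moves to the next active path. A limit of 1 alternates paths
    on every I/O, spreading load evenly across the fabric."""
    order = []
    path_cycle = cycle(paths)
    current = next(path_cycle)
    sent_on_current = 0
    for _ in range(n_ios):
        if sent_on_current == iops_limit:
            current = next(path_cycle)
            sent_on_current = 0
        order.append(current)
        sent_on_current += 1
    return order

print(distribute_ios(["vmhba1", "vmhba2"], 4))  # alternates every I/O
# With the ESXi default of 1000, bursts stay on one path much longer:
print(distribute_ios(["vmhba1", "vmhba2"], 4, iops_limit=1000))
```

With the default limit, a latency spike on one path delays a long run of consecutive I/Os; with a limit of 1, each I/O can take a healthy path.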

Go to: See VMware Knowledge Base article 2069356.

iSCSI

ESX/ESXi hosts might experience read or write performance issues with


iSCSI storage access.

When configuring LUNs on ESXi that are accessed via iSCSI, disable
“DelayedACK” on ESXi.

Go to: See VMware Knowledge Base article 1002598.


Knowledge Check: External Host Considerations


1. An architect is designing a PowerStore solution to provision more than


100 volumes to physical Windows 2019 servers and ESXi 7.0 hosts.
What is a recommendation for attached volumes that will be formatted
as local file system?
a. Disable SCSI unmap
b. Disable "DelayedACK"
c. Perform 1MB offset host alignment


Plan for Customer Performance Needs


Performance Planning

Planning for Performance Considerations

Performance planning should be a part of a pre-sale, proof of concept or a


post-sale, preproduction system readiness effort.
 Proper performance planning and testing determines the capability of
the system to successfully deliver the needed performance in the
application environment.
 Performance planning is a process that involves understanding
application workload I/O characteristics such as throughput (IOPS),
bandwidth (MB/s), and latency across the entire data center.
 When planning for performance:

 Have clear goals in mind.


 Set customer expectations up front. Setting expectations:
o Is key to a successful design.
o Helps to prevent frustration and second-guessing during the
planning process.
o Ensures that all parties are thinking alike. Setting expectations
that are too high, or not setting any expectations, can lead to
disappointment.
 Understand system and environmental limits, configuration best
practices, and storage metrics.
o Network and environmental factors can have a larger impact on
performance than the storage system itself.
o The environment must be able to achieve the wanted
performance, independent of the storage system.

Designing a Unity XT Solution for Performance

Different Unity XT models have different CPU speeds and core counts,
which help to achieve different I/O performance potentials.


In general, the IOPS capability of the Unity models scales linearly from
Unity XT 380 upwards.

Unity Model Comparison

Model                      Processor and Cores                 Speed*    Memory*      Maximum Storage Drives

Dell Unity XT 380/380F     1x Intel E5-2603 v4, 6 cores        1.7 GHz   64 GB/SP     500
Dell Unity XT 480/480F     2x Intel Xeon Silver 4108, 8 cores  1.8 GHz   96 GB/SP     750
Dell Unity XT 680/680F     2x Intel Xeon Silver 4116, 12 cores 2.1 GHz   192 GB/SP    1000
Dell Unity XT 880/880F     2x Intel Xeon Gold 6130, 16 cores   2.1 GHz   384 GB/SP    1500
Dell UnityVSA              N/A                                 N/A       N/A          N/A
Dell Unity Cloud Edition   N/A                                 N/A       N/A          N/A

*Values are per node

Designing solutions with these models involves sizing the solutions to


arrive at a configuration that supports the desired workloads at the
required performance level.

Dell UnityVSA is a Software Defined Storage (SDS) solution that runs


atop the VMware ESXi Server platform. Dell UnityVSA provides a storage
option for environments that do not require purpose-built storage hardware
such as test/development or remote office/branch office (ROBO)
environments.


Dell Unity Cloud Edition is a virtualized storage appliance with support


for VMware Cloud (VMC) on Amazon Web Services (AWS). Dell Unity
Cloud Edition can be deployed in a VMware Cloud SDDC to provide
native file services such as NFS and SMB. It also enables disaster
recovery between on-premises hardware-based Dell Unity systems and
VMware Cloud-based appliances. Dell Unity Cloud Edition has two dual-SP
deployment options, each with a capacity limit of up to 350 TB: a
2-core/12 GB memory option and a 12-core/96 GB memory option.

Deep Dive: For more information about the hardware


specifications for each model, review the Unity XT Series
Specification document. The document is available at the
Dell Unity Info Hub: Product documents and Information
KBA.

Designing a PowerStore Solution for Performance

PowerStore models have different CPU core counts and speeds to help to
achieve different I/O performance potentials.

In general, the IOPS capability of the PowerStore models scales linearly


from PowerStore 500 onwards.

PowerStore Model Comparison

Model                   Core Count*                    Speed*    Memory*    Maximum Storage Drives (per Appliance)

PowerStore (T) 500      12 cores (single-socket CPU)   2.2 GHz   96 GB      97
PowerStore (T) 1200     10 cores (dual-socket CPU)     2.4 GHz   192 GB     93
PowerStore (T/X) 3200   16 cores (dual-socket CPU)     2.1 GHz   384 GB     93
PowerStore (T/X) 5200   24 cores (dual-socket CPU)     2.2 GHz   576 GB     93
PowerStore (T/X) 9200   28 cores (dual-socket CPU)     2.2 GHz   1280 GB    93

*Values are per node

Designing solutions with these models involves sizing the solutions to


arrive at a configuration that supports the desired workloads at the
required performance level.

Deep Dive: For more information about the hardware


specifications for each model, review the Dell PowerStore
Specification document. The document is available at the
Dell PowerStore Info Hub: Product documents and Video
KBA.

Understanding Environmental Limits

There are many different environmental factors to consider for


performance planning. A good design is one which considers all the
components and best practices that contribute to overall system
performance. It may be the data center solution is already in place, and
you may be asked to upgrade the existing environment for performance
reasons.

If the customer has no existing infrastructure in place, the task of planning


becomes more difficult. Rely on benchmark tests or other methods to give
a best guess estimate of the configuration.


Consider the following questions when planning the design.


 Is the network fast enough?
 Is the network dedicated or shared?
 Are there enough clients or hosts, and are they powerful enough?
 Does the customer have competing workloads?
 Is all equipment dedicated to a single application or are there other
competing workloads to consider?
 Are workloads running separately or simultaneously?
 Are hosts load-balanced across all the front-end ports on the array?


Knowledge Check: Planning for Performance

Knowledge Check: Environmental Limits

1. What are some factors that may prevent a storage system solution
from delivering its full potential?
a. Network not fast enough.
b. Competing workloads.
c. Number of clients and their power.
d. Load not balanced across all the front-end ports on the array.
e. All of these.


Identifying the Environment

Identifying the Environment Considerations

To model and validate that a solution meets business requirements,


supporting sample I/O and performance data from the environment must
be collected and modeled using certified tools.

The type of data and collection method vary according to the different
solutions.

Identify the end-to-end environment

Solutions require looking at the end-to-end view such as:


 Applications
 Data
 Value of the data to the business
 Legal requirements for the treatment of that data
 Future destiny of the data
 Performance of the infrastructure that supports the data
 How the data is made available to the business

System architects must have an understanding of the common factors that


apply across the entire infrastructure.


Gathering Workload I/O Characteristics

Live Optics is a performance automation platform that enables you to


gather, analyze, and view details of system environment workloads.
Users can request captures from customers and register partners through
the dashboard menus.

Live Optics is made up of three components: the Collector, the Project,
and the Web Portal. Channel partners can request to set up their own
“instance” of Live Optics through the Team feature. The Team feature
enables the management of all Live Optics users within a company and
sets common branding and customer registration standards.

NOTE: If you are the first person registering from your


company, a team is created and you are made the
administrator of your team.

Collector

The Collector is software that the customer, or you on behalf of your
customer, downloads and runs. It performs two functions. First, for host-
based assessments, it collects performance data from servers and
operating systems and sends the results to secure Live Optics servers.
The collector is available for Windows and Linux operating systems, and
each can remotely collect data from other operating systems.

Second, the Live Optics collector allows array-based performance data
to be uploaded to the Live Optics portal to create projects for review.
The Windows collector, called Optical Prime, runs
on a Windows operating system (desktop or server version). It collects
data from local and remote Windows computers, remote UNIX or Linux
computers, including XenServer and KVM, and from VMware vCenter.

Live Optics is used to promote awareness of compute performance needs


as it applies to understanding physical or virtual server workload
performance. Compute performance metrics are at the individual
machine, workgroup, or data center level.


The Download Collector view

Project

Performance data from the collector is organized into a container that is


called a Project. A Project gathers information from one or more
collectors and organizes them into a hierarchy of objects. It provides
numeric, tabular, and graphical statistics about objects within the
hierarchy. The Project is viewed within the Web Portal.

The Project view

Web Portal

The Web Portal provides an interactive interface for viewing projects.


Project data is always viewable by the customer. If you requested that the
customer run Live Optics, the results are viewable by you. You can also
share projects with others or with the sales team.

The Web Portal interface


Live Optics Dashboard

Live Optics Dashboard

 Live Optics Dashboard is the landing page that shows all tools and
projects.
 Select any of the icons from the left navigation panel or right window to
access individual processes.
 For example, Download Collectors prompts you to select the
operating system of the system where you want to run the Live
Optics collector.
 Go to the Live Optics login page to access Live Optics.
 Another option, Request Capture, sends an email to a customer
requesting that they download the collector.

 Once the customer starts collecting information, it creates a project


in the SE (requester) dashboard.
 Both customer and SE can see the information that is collected and
processed.


Live Optics Collection Options for Servers

Optical Prime view of collection options with Server & Virtualization selected

The Server & Virtualization collection collects data from physical or


virtual servers in an environment.
 A server is a monitoring target of Optical Prime.
 A target can be remote, or local when the collector runs on the same
machine that it monitors.
 Optical Prime supports the collection of servers running VMware ESXi.
 Optical Prime monitors ESXi servers through vCenter only.
o To Optical Prime, vCenter is considered only one target no
matter how many ESXi servers are managed in vCenter.
o Optical Prime supports up to 256 targets per collector
instance.
 Optical Prime monitors a server target, or more specifically the
operating system.
o The collector pulls performance and other data from the servers
for later analysis by using the Optical Prime viewer.
 Optical Prime collects from a wide range of operating systems
including Windows, Linux, VMware, Solaris, and HP-UX. The complete
list is available in the Live Optics online documentation.

Once finished, projects are available from the Live Optics Dashboard.
Each project is assigned a Project ID that can be used as input to the Dell
Midrange or PowerStore Sizer tools. Project details can be viewed,


deleted, shared, or exported by selecting the project name. You can view
environmental and performance details from the interface.

Live Optics Collection for Storage

Optical Prime view of collection options with Storage selected

Live Optics supports data collection for Dell Unity, PowerStore, and other
storage platforms.
 Once a storage array is selected, supply a DNS or IP address,
username, and password credentials for authentication.
 By default, collections cover no more than one week before the
current date, but the range is configurable.
 The collection downloads performance archive files from the array.
 Once downloaded, the files are uploaded to the web service under the
project name.
 Click the project name to share it, download the PowerPoint
presentation, or delete the project.


 The PowerPoint presentation is available under the Project ID


number in the downloads directory that is created by the user.
 From the Tools section of the Live Optics dashboard, select
Midrange Sizer.
 The sizer enables you to enter a Project ID into the Live Optics/NAR
menu and produces a Dell storage recommendation that meets the
customer's needs.

Live Optics Storage Profile with Project ID

Sample Live Optics Hypervisor Profile


Environment view of capture results

The Live Optics Viewer:


 Is used to view the results of a collector’s performance capture.
 A function of the Live Optics web portal, the Viewer is automatically
launched when a Project is selected.
 Data is rendered into useful, easy-to-view details both graphically and
numerically.
 Data scenarios can be manipulated to display specific times and to
include or exclude objects such as servers.
 The Viewer recalculates these scenarios, and any state can be saved
for future use.
 The Viewer is also used to share the project with other Live Optics
users.
 Use the left tree navigation to explore object level details and
performance data starting at each server down through disks/LUNs,
NICs, and cluster disk data.
 The default summary of the Viewer is the aggregation of the project
and considers the full elapsed time of the recording.


Performance view of capture results

Sample Live Optics Storage Profile

 Use the Live Optics Storage Array Profile document to analyze


workload I/O and configuration data.
 The report is packaged as a PowerPoint presentation and has a
Project ID associated with it.
 The Project ID is used as input to the Midrange Sizer.
 The Midrange Sizer produces a Dell recommended configuration that
meets the capacity and performance needs of the environment.


Live Optics Storage Profile with Project ID

Summary of Inventory, Capacity, and Workload



Knowledge Check: Identifying the Environment Using Live Optics

1. Which Live Optics information is used to create a report in the


Midrange Sizer?
a. Project ID
b. Project Name
c. Account Name
d. Array Serial Number


Characterization of Workloads

Analysis of Workload Key Performance Metrics

Workload Characterization (or Workload Key Performance metrics) is


the single most important performance consideration and can dramatically
affect system performance.

The Workload Characterization describes the different “types of


workloads” that may be presented to a storage system.

Top 4 LUNs by throughput on a Unity XT storage system

 Workload Attributes:
 I/O Size
 Read vs Write
 Random vs Sequential
 Working Set Size
 Skew
 Concurrency
 Workload characteristics dramatically affect performance.
 Understand the application and its workload before attempting to size a
storage system and make performance estimates.


Understanding Workload Attributes

I/O Size

Chart of I/O sizes compared to the number of I/Os per second

The I/O Size (also called I/O Request or Transfer Size) is the amount of
data in each I/O transaction requested by a host.
 Some typical examples of I/O sizes are:

 8 KB for the typical file system or an Oracle application transfer


amount
 32 KB is the typical I/O size for Microsoft Exchange.
 64 KB is the typical transfer I/O size for a backup or restore
process.
 256 KB is the typical streaming video application transfer
amount.


Important: The I/O request sizes that are shown are


examples. I/O request sizes vary widely depending on the
application, application operation, server operating system,
storage system, and the underlying disk structure.

The I/O Size has a significant effect on performance throughput.


 Generally, the larger the I/O size, the higher the storage bandwidth.
 Most production workloads are a mix of I/O sizes.
 Larger I/Os take longer to transmit and process; however, some of the
overhead that is used to run an I/O is fixed.
 If data exists in larger chunks, it is more efficient to transmit larger
blocks.
 The same is true regarding an IP network. It takes longer to
transmit larger packets.
 A host can move more data faster, by using larger I/Os than smaller
ones.
 The response time of each large transfer takes longer than the
response time for a single smaller transfer, but the combined service
times of many smaller transactions are greater than a single
transaction that contains the same amount of data.
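The tradeoff above can be sketched with the basic identity bandwidth = IOPS × I/O size (the IOPS figures below are illustrative examples, not measurements of any particular array):

```python
def bandwidth_mbps(iops: float, io_size_kb: float) -> float:
    """Bandwidth delivered by a workload, in MB/s (1 MB = 1024 KB)."""
    return iops * io_size_kb / 1024

# A small-block OLTP-style workload vs. a large-block streaming workload:
oltp_mbps = bandwidth_mbps(20_000, 8)     # 20,000 IOPS of 8 KB I/Os
stream_mbps = bandwidth_mbps(2_000, 256)  # 2,000 IOPS of 256 KB I/Os
```

Even at one tenth the I/O rate, the 256 KB workload moves more than three times the data of the 8 KB workload, which is why bandwidth-oriented applications favor large transfers.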


Read/Write

Chart of read and write I/Os

Another workload characterization is the I/O access type, read or write.


Typically called the read/write ratio.
 Very few workloads are all reads or all writes. It is important to know
which access is the majority, because the two access types use
different amounts of storage system resources.
 Reads consume fewer resources than writes.

 Sequential reads that find their data in the array cache consume
the least amount of resources and have the highest throughput.
 Reads not found in cache, which are normal with random access,
have much lower throughput and higher response times. This is
because the data must be retrieved from disk.
Writes use more resources and are slower than reads because protection
is usually added to new data.

Typically, all writes must be cached, mirrored, and acknowledged. This


calls for a larger cache size for buffering writes.
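To see why writes consume more resources, a host read/write mix can be translated into backend drive I/Os using commonly cited rule-of-thumb RAID write penalties (a simplified sketch that ignores caching and write coalescing, so real overhead varies):

```python
# Rule-of-thumb backend drive I/Os generated per host write.
WRITE_PENALTY = {"RAID 1/0": 2, "RAID 5": 4, "RAID 6": 6}

def backend_iops(host_iops: float, read_fraction: float, raid: str) -> float:
    """Backend drive IOPS for a host workload: reads map 1:1,
    each write costs WRITE_PENALTY[raid] drive I/Os."""
    reads = host_iops * read_fraction
    writes = host_iops * (1 - read_fraction)
    return reads + writes * WRITE_PENALTY[raid]

# 10,000 host IOPS at a 67/33 read/write mix on RAID 5 becomes
# roughly twice as many backend drive I/Os.
total = backend_iops(10_000, 0.67, "RAID 5")
```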


Random vs Sequential

Per-thread performance decreases as aggregate bandwidth increases.

Host applications fall under one of two access patterns: Random and
Sequential.
 Random access is exemplified by Online Transaction Processing
(OLTP) such as a database, where data reads and modifications are
made in a scattered manner across the entire dataset.
 A random workload is a workload where reads or writes are
distributed throughout the relevant address space.
 Random I/O at the drive level requires the drive to seek data across
the rotating platters (in HDDs), which involves a relatively slow,
mechanical head movement.
 Sequential access refers to successive reads or writes that are
logically contiguous within the relevant address space.

 Sequential access is typical during back-up and restore operations


and event logging.
 To enhance performance, intelligent storage systems detect
sequential access patterns and begin to pre-fetch data into cache.
o Pre-fetching allows the host to satisfy its I/O request from cache
rather than disk.


 Sequential writes can also be coalesced where many smaller writes


are combined into fewer large transfers to disk or array.
 In the example, each thread is accessing a single file on the same
file system.
o The per-thread performance decreases as more threads are
being used, however, the aggregate bandwidth increases.
o More data is being accessed as the number of threads increase,
but each client sees a performance decrease because the
response time increases.
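The pre-fetch detection described above can be sketched as a simple contiguity check (a hypothetical illustration; real arrays use far more sophisticated heuristics, and the `min_run` threshold here is arbitrary):

```python
def is_sequential(addresses, min_run: int = 4) -> bool:
    """True when the last min_run addresses form a contiguous ascending run."""
    tail = addresses[-min_run:]
    return len(tail) == min_run and all(
        b == a + 1 for a, b in zip(tail, tail[1:]))

# When a run is detected, read ahead so the next requests hit cache.
history = [100, 101, 102, 103]
if is_sequential(history):
    prefetch = [history[-1] + i for i in range(1, 5)]  # blocks 104..107
```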

Working Set Size

Example of traversed address space in finite time period


 Working Set Size is the portion of the total data space an application
uses at a certain time, also known as active data.
 The working set of an application is defined as the total address space
that is traversed, either written or read, in some finite, short time during
its operation.
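As a rough illustration, the working set over a recent access window can be estimated by counting the unique blocks touched (the trace and window length below are hypothetical):

```python
def working_set_size(trace, window: int) -> int:
    """Number of unique blocks touched in the last `window` accesses."""
    return len(set(trace[-window:]))

# Ten accesses that touch five distinct blocks overall, but revisit
# a smaller set recently:
trace = [1, 2, 3, 2, 1, 4, 2, 2, 5, 1]
recent = working_set_size(trace, window=5)        # active data only
overall = working_set_size(trace, len(trace))     # full address space used
```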

Skew

Skew occurs when a small percentage of the total storage capacity in a
storage system is the target of most of the IOPS served by the system.
 It is the locality of active data within the total storage capacity.
 For instance, in a payroll system:
o The current-month data is highly active.
o Year-to-date data is moderately active.
o Data for previous years is mostly inactive.
 Performance is optimized based on locality or skew rate as the storage
system cache holds the most recently used data.

 Recently written data is flushed to higher-performing storage.
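Skew can be quantified by asking how small a fraction of equally sized extents serves a target share of the total IOPS (an illustrative sketch with made-up per-extent IOPS values):

```python
def skew_capacity_share(iops_by_extent, target: float = 0.80) -> float:
    """Fraction of equally sized extents needed to serve `target`
    of the total IOPS, busiest extents first."""
    ranked = sorted(iops_by_extent, reverse=True)
    total = sum(ranked)
    served = 0.0
    for used, iops in enumerate(ranked, start=1):
        served += iops
        if served >= target * total:
            return used / len(ranked)
    return 1.0

# Activity concentrated in 2 of 10 extents: 20% of the capacity
# serves 80% of the IOPS.
share = skew_capacity_share([500, 300, 80, 40, 30, 20, 10, 10, 5, 5])
```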


Concurrency

Number of threads compared with the number of I/Os per second for a multi-threaded random
write

In computing, concurrency is the ability to execute more than one
application I/O, task, or thread simultaneously.
 Midrange storage systems require I/O concurrency to deliver best
performance.
 Systems do not reach their potential unless many I/Os are serviced in
parallel.
 High thread counts per drive group are essential for hitting high
random I/O rates as they engage all drives in a group to be servicing
I/O concurrently.
 Sequential access requires multiple threads as well.
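Little's Law makes the need for concurrency concrete: outstanding I/Os = IOPS × response time. A short sketch with hypothetical figures shows why a host issuing one I/O at a time cannot drive a system to high rates:

```python
def required_concurrency(target_iops: float, response_time_ms: float) -> float:
    """Little's Law: average outstanding I/Os = throughput x response time."""
    return target_iops * (response_time_ms / 1000)

# Reaching 100,000 IOPS at 0.5 ms average response time needs about
# 50 I/Os in flight; a single outstanding I/O caps the host at
# 1 / 0.0005 s = 2,000 IOPS.
queue_depth = required_concurrency(100_000, 0.5)
```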



Knowledge Check: Characterization of Workloads

Environment

A company must provide the infrastructure for a Microsoft SQL Server with
a use case of AI in their data center. The applications are used for
research and require high bandwidth. A Dell PowerStore is being
considered to address the storage requirement.

Instructions

1. Go to the Dell Technologies Info hub web page.


2. Search for a document named Dell PowerStore: Microsoft SQL
Server Best Practices.

Activity

Based on the document information, answer the questions below.

1. In addition to capacity, what should be an important consideration


when sizing a storage system for the described workload?
a. Large I/O Size
b. Working Set Size
c. Concurrency
d. Random Reads

2. What if the primary workload is online transaction processing?


a. Large I/O Size
b. Small I/O Size
c. Concurrency
d. Sequential Reads


Scavenger Hunt Activity Wrap Up

Notes


Supported Sizing Tools

Creating Dell Unity XT System Design with Midrange Sizer

Midrange Sizer tool: Home dashboard view.

Without adequate tools, sizing a storage system can be difficult. You must:
 Characterize the expected client workload.
 Calculate any additional operations that are needed for metadata that
is based on capacity requirements.
 Consider RAID overhead for drives.
 Calculate the aggregate performance of all system hardware
components.

The Midrange Sizer tool simplifies this process.


The tool enables many different paths to create a sized system
configuration and provides a performance estimate for each configuration.
 Each path uses a different estimating method that shows a result well
below the absolute maximum performance capability of the system.
 Calculated results are in accordance with the best practice
recommendations for production deployment. Results are shown as a
percentage of saturation.

The true 100% saturation level depends on many factors.


 In paths with a detailed configuration and workload inputs, it is shown
as a percentage of total system capability.
 In paths with limited inputs, the results are similarly derated from
maximums, but a specific number is not reported.

Go to: Review the Midrange Sizer software requirements.


Go to the Midrange Sizer tool page to access the
application.

Midrange Sizer Data Requirements

Midrange Sizer uses information that is gathered from the customer


environment and business requirements to create the system design.
The Solutions Architect must know how resources are used across the
servers, storage subsystems, SAN, and WAN.
 Customer production workload and performance requirements -
Knowing the IOPS of an application is critical to sizing a proper
solution.
 Some workloads which require high bandwidth for sequential
streaming data benefit from using thick storage objects.
 Thick storage objects allocate capacity on creation, before
application use, in a consistent manner.


 Thick storage objects improve subsequent sequential access.


 Thin objects are otherwise compatible with Data Reduction and
other features.
 Network requirements - The selection of the correct system front-end
connectivity is important.
 The number of clients sending information at any given time
impacts data transfers over Ethernet.
 If replication is used, current bandwidth, latency, and protocols are
key factors.
 Storage requirements - The number and type of drives must
accommodate the existing data storage needs and enable the solution
to be scaled over time.
 If the Unity storage system is a hybrid array, it can take advantage
of FAST VP or FAST Cache.
 Server information - Server information must be provided such as
server consolidation, server types, and protocols being used.

Midrange Sizer – System Designer


System Designer path view

The System Designer path from the Midrange Sizer home page:
 Enables custom configurations for All-Flash or Hybrid arrays.
 Provides an interface from which configurations are created by
selecting the model and custom drives.
 Displays a cabinet image on the right that updates to match the pool
information entered for a configuration.
 Uses workload types, block size, IOPS requirements, and storage
capacity requirements [number of drives and data reduction ratio] to
build a solution.

The performance prediction:


 Computes the maximum drive IOPS capability that is based on classic
drive IOPS (sum of all drives) including RAID overhead.
 Considers the maximum platform IOPS capability and derates the
IOPS from the maximum internal block test results.
 Displays the lower of the two results and assumes that ports, buses,
and response time are not limiting factors.
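The prediction steps above can be sketched as taking the lower of the drive and platform capabilities and derating it (the capability figures and the 70% derate below are hypothetical placeholders, not the Sizer's actual internals):

```python
def predicted_saturation(workload_iops: float, drive_iops_max: float,
                         platform_iops_max: float, derate: float = 0.7) -> float:
    """Workload expressed as a percentage of the lower derated capability."""
    usable = min(drive_iops_max, platform_iops_max) * derate
    return 100 * workload_iops / usable

# Drives could sustain 200k backend IOPS but the platform only 150k,
# so the platform is the limit; a 42k IOPS workload lands at 40%.
saturation_pct = predicted_saturation(42_000, 200_000, 150_000)
```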


Midrange Sizer – Live Optics/NAR Path

Live Optics/NAR path view

 The Live Optics/NAR path enables the creation of a performance


solution for one or more existing systems.
 The Live Optics/NAR path uses historical performance archive (NAR)
files or Live Optics data collector files.
 These files are combined in a single configuration, resulting in a
combined list of Performance Groups which are editable by the
user.
 The LUNs are automatically broken into separate Performance
Groups.
 The output is based on which LUNs work together in a pool. LUNs
are grouped based on their I/O size, access patterns, and
performance rates.


 This approach is best suited when a precise design is required for


an existing system or systems.
 Selecting the Live Optics/NAR option enables you to also import Live
Optics data to be used as a workload by entering a Live Optics Project
ID.

Midrange Sizer – Simple Performance

Simple Performance path view

Choosing the Simple Performance path enables the sizing of


applications with minimal input at early stages before complete workload
details are available. This method predicts the decay of data access over
time based on the inputs.

The Simple Performance path computes a skew value and determines


how best to build a pool to efficiently capture that data. The Simple
Performance path determines the IOPS rate achievable based on the
drives being no more than 70% busy.


First select the System Type [All-Flash or Hybrid]. Note that changing
the system type after workloads have been added requires the
configuration to be reset. Also, when All-Flash is selected, a Compression
Ratio combo box is shown.

Application Details

Select an application type to size, and provide a name.

Available types are:


 Small read/write mix – Application has 67% read and 33% write mix
with 8 KB I/O size.
 Medium read/write – Application has 67% read and 33% write along
with 64 KB I/O size.
 Small read heavy – Application has 80% read and 20% write with 8 KB
I/O size.

Load Specific Parameters

Enter the various Load-Specific Parameters:


 Initial Capacity (TB) – Holds the usable capacity value in terabytes on
the day of deployment.
 Yearly Growth Rate (%) – Specifies the estimated average annual
capacity growth rate as percentage.
 Days Hot – Specifies the estimated duration during which newly created
data stays highly accessed. The default value is 45 days, but that is
arbitrary. Longer durations mean that access is spread more widely over
the total configuration. Shorter durations mean that the access is
concentrated and has a higher skew.
 Target IOPS – Enter a specific rate to which the current workload must
align.
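The capacity inputs can be illustrated with simple compounding, and Days Hot with a crude hot-data ratio (an illustrative approximation; the Sizer's actual decay model is internal and more elaborate):

```python
def projected_capacity(initial_tb: float, yearly_growth_pct: float,
                       years: float) -> float:
    """Compound the initial usable capacity by the annual growth rate."""
    return initial_tb * (1 + yearly_growth_pct / 100) ** years

def hot_fraction(days_hot: float, retention_days: float) -> float:
    """Rough share of the data set that is still 'hot' at any moment."""
    return min(1.0, days_hot / retention_days)

cap_3y = projected_capacity(100, 20, 3)  # 100 TB at 20%/year for 3 years
hot = hot_fraction(45, 365)              # default 45 "days hot", 1-year set
```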

Advanced Options


Change the preconfigured Advanced Options [Optional]. This section


contains tier-specific parameters. It has default values set for every tier.
For Flash, it is RAID 5 (8+1); for SAS, RAID 5 (4+1); and for NL-SAS, it
is RAID 6 (6+2).

Midrange Sizer – Application Oriented

Application Oriented path view

The Application Oriented path enables the sizing of configurations that


are based on the workloads of the applications. The Application Oriented
path:
 Provides more granularity and control over the workload and data
within it.
 Enables you to define multiple applications and workloads within a
single configuration.

 Every workload has unique inputs and rules for implementation that
are based on the best practices and recommendations for the
system being sized.
The System Engineer selects the application workload: Exchange, Oracle,
SQL, File share, VDI.


The SE enters applicable parameters:


 User count, mailbox size, database size, and so on
 Free-form: Users can customize several workload parameters.

The sizer tool creates tiered pools to meet the requirements. The system
designer engine estimates the performance, and a report shows the
system details, IOPS, and saturation.

Midrange Sizer – Advanced Performance

Advanced Performance path view

The Advanced Performance path enables the sizing of a Dell Unity


system by creating pools which contain manually defined loads. This path
models single-tiered and multitiered pools and can estimate system
saturation and response time for those loads.

The Advanced Performance path offers several modes of operation. After


entering the system model, proposed pool design, and workload profile,
you can choose from the following processing modes:
 The SolVe ROT and SolVe Max modes predict IOPS, MB/s, and
response time at conservative rule of thumb (ROT) or max levels.


 The Compute mode estimates performance at the required host IOPS


value, if specified.
 The Find Best Fit For IOPS mode adjusts the model and drive counts
to achieve the required host IOPS value, if specified.
 The Stack feature enables system configurations to be built from
successive pool and load definitions.

 A pool is added on the Stack with at least one load. Successive


loads can be added to the pool until the drives in that pool are
saturated (either load capacities exceed the free capacity of the
pool, or the workload increases the drive utilization to 100%).

Note: Users must be USPEED certified to access the


Advanced Performance path.

Midrange Sizer Deliverables

Midrange Sizer integrates with MyQuotes by providing the capability to


export the saved system configuration into an .XML file. This .XML file can
be uploaded to the ordering tool to process a sales order.

To open a sample of the exported file, click the .XML file link. Once
opened, double-click the sample image to expand it.


The saved system configuration can be exported to a PDF file
that can be shared with the customer and other users, including
the Solutions Architect.

Go to: The Unity Design: Midrange Sizer for Unity


(ESSTGD04650) training covers the tool use and
capabilities in more detail.

Sizing a PowerStore Solution: PowerSizer

Sizing a PowerStore solution is performed with the PowerSizer tool.

PowerSizer dashboard showing the PowerStore selection for Quick Size option.

The online tool enables the planning and sizing of PowerStore,


PowerMax, PowerScale, PowerVault, and APEX configurations for optimal
performance.

The application provides guidance and resources to define the correct


solution for the customer requirements and performance needs.


Supported outputs show the average capacity utilization and storage


performance saturation across all appliances in a cluster configuration.

Go to: The PowerStore Design: PowerSizer


(ESSTGD04697) training covers the tool use and
capabilities in more detail. Go to the PowerSizer tool page
to access the application.



Knowledge Check: Supported Sizing Tools

1. Which Midrange sizing option creates custom configurations for either


an All-Flash or a Hybrid array?
a. LiveOptics/NAR
b. Application Oriented
c. System Designer
d. Quick Configuration



Appendix

UnityVSA Deployment Requirements

Single-SP

ESXi host minimum requirements:

 Operating System: ESXi 6.0 or later
 CPU: Xeon E6 Series Dual-core 64-bit x86 Intel 2 GHz+ (or equivalent)
 Memory: 18 GB (for ESXi 6.0), or 20 GB (for ESXi 6.5+)
 Network Interfaces: Four 1 GbE or four 10 GbE (recommended)
 RAID: RAID Controller with 512 MB NV Cache (1 GB recommended),
battery backed (recommended)
 Datastore: NFS and VMFS supported. Full SSD datastore
recommended.

Dual-SP

The 2-core and 12-core UnityVSA dual-SP configurations share the same
ESXi host requirements except for memory:

 Operating System: ESXi 6.5 or later
 CPU: Intel Xeon Silver 4110 or higher
 Memory: 36 GB per host (2-core UnityVSA) or 120 GB per host
(12-core UnityVSA) for ESXi 6.5+
 Network Interfaces: Three 10 GbE (one physical port for SP
management and I/O ports, and two for the inter-SP network)
 RAID: RAID card with 512 MB NV Cache, battery backed
(recommended)
 Datastore: NFS and VMFS supported. One full-SSD shared datastore
and a separate full-SSD local swap datastore are recommended.

Drive Types

Interface  Type       Writes Per Day (WPD)  Capacity (GB)  Encryption Type  Block Size
NVMe       SCM        30                    750            FIPS (Type-D)    512
NVMe       SSD/Flash  1                     1920           FIPS (Type-D)    512
NVMe       SSD/Flash  1                     3840           FIPS (Type-D)    512
NVMe       SSD/Flash  1                     7680           FIPS (Type-D)    512
NVMe       SSD/Flash  1                     15360          FIPS (Type-D)    512
SAS        SSD/Flash  1                     1920           FIPS (Type-D)    512
SAS        SSD/Flash  1                     3840           FIPS (Type-D)    512
SAS        SSD/Flash  1                     7680           FIPS (Type-D)    512
NVMe       NVRAM      Used for Cache        8              FIPS (Type-D)    512

Dell Unity XT I/O Expansion Modules Overview


Each SP assembly has two slots (slot 0 and slot 1) to install optional
expansion I/O modules. These I/O modules are also called Subscriber
Line Interface Cards (SLICs). The table shows the available I/O modules
that are supported in the Dell Unity XT storage arrays.

Technology Ports Port Type Notes

12 Gb SAS 4 Mini-SAS HD Connects to DAEs and


supports encryption.

16 Gb FC 4 SFP+ Fibre Channel

32 Gb FC 4 SFP+ Fibre Channel

10 Gb Ethernet 4 RJ45 iSCSI and NAS

25 Gb Ethernet 4 SFP iSCSI and NAS

The maximum number of available expansion slots on a Dell Unity XT


storage array is four (two per SP).


Deep Dive: For more information, see the Hardware


Information Guide document for the Dell Unity XT
platform (Hybrid and All-Flash) at the Dell Unity Info Hub
site.

Dell Unity XT 380/380F - DPE/SP Integrated CNA Ports


Dell Unity XT 380/380F systems support an onboard Converged Network
Adapter (CNA) for front-end connectivity using a dual-personality
controller.

The CNA controller supports both Ethernet (iSCSI or File) or the Fibre
Channel protocols depending on which SFP is inserted.

Each SP has two CNA ports supporting hot swappable Small Form-Factor
Pluggable SFP+ optical connectors.

Location of the CNA ports on Dell Unity XT 380/380F systems

Ethernet iSCSI/File

For block and file protocols, users can configure a 1 GbE SFP or a 10
Gb/s SFP. When an Ethernet SFP is inserted in the CNA ports, the ports
are persisted as the Ethernet protocol at first system boot, and cannot be
changed.

The SFPs are hot swappable between 1 GbE and 10 Gb/s SFPs.


If the CNA is initially persisted with 10 Gb/s SFPs, the customer can
downgrade to 1 Gb/s SFP, if necessary.

CNA ports can only be configured as a single protocol across both SPs on
the system. For example, if there are two CNA ports per SP, both must be
configured to use either a NIC or Fibre Channel connection.

Fibre Channel

For Fibre Channel connectivity, CNA ports can be configured with either
multimode or single mode SFPs. Single-mode SFPs support 16 Gb/s only.

When a Fibre Channel SFP is inserted in the CNA ports, the ports are
persisted as the FC protocol at first system boot. The setting cannot be
changed.

The supported Fibre Channel topologies and speed include:


 Support for 4/8/16 Gb/s (Auto Negotiable)
 Two node Loop topology support with 4/8 Gb/s configurations
 Point to Point topology support in two node or switched configurations
for all speeds
 All ports are available for RecoverPoint.

Synchronous replication is supported when the lowest number port is


enabled as a Sync/Replication port. If a customer wants to synchronously
replicate up to 10 km with speeds up to 16 Gb/s, one can use the single
mode SFP. For shorter distances, customers can typically replicate up to
300 through 400 meters. If there is a synchronous replication port, it can
be configured as single mode, and the remaining ports can be configured
as multimode.

Important: CNA ports are available only on the Dell Unity


XT 380/380F platform. FCoE and FCIP are not supported
protocols.


I/O Module Types


There are four types of I/O modules (SLICs) available for installation into a
Base Enclosure:

4-Port BaseT

4-port BaseT I/O module

The 4-Port BaseT module:


 Is supported on PowerStore T and X model types.
 Runs at interface speeds of 10 Gb/s and 1 Gb/s.
 Provides host access to block storage resources using the iSCSI and
NVMe/TCP protocols.
 Can provide Ethernet connectivity for SMB and NFS file access.
 Supports replication.

Only Dell Technologies certified technicians can add I/O modules to empty
slots after the system is set up. Previously installed I/O modules are
Customer Replaceable Units (CRUs).

4-Port 25 GbE SFP-based

The 4-Port 25 GbE SFP-based I/O module is supported in PowerStore T


models. The I/O module uses an optical 25 GbE or 10 GbE capable SFP+


connection to a host or switch port. The Ethernet module supports iSCSI


and NVMe/TCP block access, NAS, and replication.

4-port 25 GbE SFP-based I/O module

The module supports the following types of SFP transceivers:

• 25 GbE SFP to RJ45
• 25 GbE SFP-based SFP passive TwinAx
• 10 GbE or 25 GbE SFP28
• 10 GbE active or passive TwinAx
• 1 GbE SFP to RJ45

2-Port 100 GbE Front-End I/O Module

The 2-Port 100 GbE front-end I/O module is supported in most
PowerStore T models. The Ethernet module supports iSCSI and
NVMe/TCP block access, NAS, and replication.


PowerStore back panel showing 2-port 100 GbE I/O module installed in Slot 0.

• The module can be installed in PowerStore T deployments only.
• The module is supported on PowerStore 1000-9200 models.
• The PowerStore 500 does not support the I/O module.
• The I/O module can be installed in Slot 0 only.
• The I/O module requires PCIe x16 bus width.

4-Port 32 Gb Fibre Channel

The 4-Port 32 Gb Fibre Channel I/O module is supported in PowerStore
T and PowerStore X models. The I/O module serves the Fibre Channel
block protocol to SAN-attached hosts. Each port has an optical 16
Gb/32 Gb capable SFP connection to a host or switch port.

4-port 32 Gb Fibre Channel I/O module

The SLIC supports two different types of SFP transceivers:

• 32 Gb/s (can autonegotiate to 16 Gb/s or 8 Gb/s)
• 16 Gb/s (can autonegotiate to 8 Gb/s or 4 Gb/s)

The SLIC supports SCSI and NVMe-oF block connections.
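The autonegotiation rules above can be sketched in a few lines. This is a hypothetical illustration of the behavior, not PowerStore firmware; the speed table and function name are assumptions based only on the transceiver list:

```python
# Hypothetical sketch of FC speed selection: each SFP type runs at its
# rated speed or autonegotiates down two steps, and the link settles on
# the highest speed that both the SFP and the switch port support.
SFP_SPEEDS_GBPS = {
    "32Gb": [32, 16, 8],   # 32 Gb/s SFP can drop to 16 or 8 Gb/s
    "16Gb": [16, 8, 4],    # 16 Gb/s SFP can drop to 8 or 4 Gb/s
}

def negotiate_speed(sfp_type, switch_speeds_gbps):
    """Return the highest speed common to the SFP and the switch port."""
    common = [s for s in SFP_SPEEDS_GBPS[sfp_type] if s in switch_speeds_gbps]
    if not common:
        raise ValueError("no common speed; the link will not come up")
    return max(common)

# A 32 Gb SFP against a switch port that tops out at 16 Gb/s:
print(negotiate_speed("32Gb", [16, 8]))   # → 16
```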


Drive Partnership Group


Drive Partnership Group is a collection of drives within a dynamic pool.
Drive partnership groups are automatically configured by the system.

DPG 1 reaches its maximum capacity of 64 drives, and DPG 2 is created

There may be one or more drive partnership groups per dynamic pool.

• Every dynamic pool contains at least one drive partnership group.
• Each drive is a member of only one drive partnership group.
• Drive partnership groups are built when a dynamic pool is created or
  expanded.
• A drive partnership group only contains a single drive type.
• Different sizes of a particular drive type can be mixed within the
  group.
• Each drive partnership group can contain a maximum of 64 drives of
  the same type, which limits the number of drives RAID extents can
  cross.


• When a drive partnership group for a particular drive type is full, a new
  group is started.
• The new group must have the minimum number of drives for the stripe
  width plus hot spare capacity.
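The grouping rules above can be modeled with a short sketch. This is a simplified illustration under the stated rules (64-drive cap, one drive type per group, a new group needs at least stripe width plus spare drives), not Dell's actual placement logic:

```python
# Simplified model of drive partnership group construction (assumed names,
# not Dell's implementation).
MAX_GROUP_SIZE = 64

def build_partnership_groups(drives, stripe_width, spares=1):
    """drives: list of (drive_type, size) tuples. Returns {type: [groups]}."""
    min_new_group = stripe_width + spares
    groups = {}
    for drive_type, size in drives:
        type_groups = groups.setdefault(drive_type, [])
        if type_groups and len(type_groups[-1]) < MAX_GROUP_SIZE:
            type_groups[-1].append(size)      # sizes may be mixed in a group
        else:
            type_groups.append([size])        # group full (or none yet): start a new one
    # a newly started group is only valid once it reaches the minimum size
    for drive_type, type_groups in groups.items():
        if len(type_groups[-1]) < min_new_group:
            raise ValueError(f"last {drive_type} group is below the minimum "
                             f"of {min_new_group} drives")
    return groups

drives = [("NVMe SSD", "1.92 TB")] * 70       # 70 drives of one type
groups = build_partnership_groups(drives, stripe_width=5)   # e.g. RAID 5 (4+1)
print([len(g) for g in groups["NVMe SSD"]])   # → [64, 6]
```

With 70 drives, the first group fills to 64 and the remaining 6 drives form a second group, which just meets the 5 + 1 minimum; 65 drives of the same type would be rejected.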

RAID Protection Levels and Drive Counts


Depending on the selected RAID protection level, different drive counts
must be selected.

The drive count must fulfill the RAID stripe width plus spare space
reservation set for a drive type.

RAID 1/0

RAID Type    RAID Stripe Width    Number of Drives
RAID 1/0     1+1                  3 or 4
             2+2                  5 or 6
             3+3                  7 or 8
             4+4                  9 or more

RAID 5

RAID Type    RAID Stripe Width    Number of Drives
RAID 5       4+1                  6 to 9
             8+1                  10 to 13
             12+1                 14 or more


RAID 6

RAID Type    RAID Stripe Width    Number of Drives
RAID 6       4+2                  7 to 8
             6+2                  9 to 10
             8+2                  11 to 12
             10+2                 13 to 14
             12+2                 15 to 16
             14+2                 17 or more
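The three tables can be encoded as data and queried, for example to find the widest stripe a given drive count supports. A minimal sketch based only on the tables above (the function name is an assumption):

```python
# Drive-count rules from the RAID 1/0, RAID 5, and RAID 6 tables:
# (data, parity) stripe width -> minimum number of drives required.
RAID_MIN_DRIVES = {
    "RAID 1/0": {(1, 1): 3, (2, 2): 5, (3, 3): 7, (4, 4): 9},
    "RAID 5":   {(4, 1): 6, (8, 1): 10, (12, 1): 14},
    "RAID 6":   {(4, 2): 7, (6, 2): 9, (8, 2): 11, (10, 2): 13,
                 (12, 2): 15, (14, 2): 17},
}

def widest_stripe(raid_type, drive_count):
    """Return the widest (data, parity) stripe usable with drive_count drives."""
    candidates = [stripe for stripe, min_drives in
                  RAID_MIN_DRIVES[raid_type].items() if drive_count >= min_drives]
    if not candidates:
        raise ValueError(f"{drive_count} drives is below the {raid_type} minimum")
    return max(candidates, key=sum)

print(widest_stripe("RAID 5", 12))   # 12 drives → (8, 1) stripe
print(widest_stripe("RAID 6", 18))   # 18 drives → (14, 2) stripe
```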

Supported Drives and Configurations


FAST Cache is only supported on the Dell Unity XT HFA systems. The
Dell Unity XT hybrid models support 400 GB SAS Flash 2 drives only.

The table shows each Unity XT hybrid model, the SAS Flash 2 drives
supported for that model, the maximum FAST Cache capacity, and the
total cache.

Hybrid System    System Memory        Supported SAS     Maximum FAST     Total
Model            (Cache) per Array    Flash 2 Drives    Cache Capacity   Cache

Dell Unity XT    128 GB               400 GB SAS        800 GB           928 GB
380                                   Flash 2 only
Dell Unity XT    192 GB               400 GB SAS        1.2 TB           1.39 TB
480                                   Flash 2 only
Dell Unity XT    384 GB               400 GB SAS        3.2 TB           3.58 TB
680                                   Flash 2 only
Dell Unity XT    768 GB               400 GB SAS        6.0 TB           6.76 TB
880                                   Flash 2 only

FAST Cache specifications

Important: For compatibility information, locate the latest
Dell Unity XT Storage Systems - Drive and OE Compatibility
Matrix at the Dell Support website.

Data Reduction Theory of Operation


For Data Reduction enabled storage resources, the Data Reduction
process occurs during the System Cache proactive cleaning operations or
when System Cache is flushing cache pages to the drives within a Pool.
The data in this scenario may be new to the storage resource, or the data
may be an update to existing blocks of data currently residing on disk.

In either case, the Data Reduction algorithm occurs before the data is
written to the drives within the Pool. During the Data Reduction process,
multiple blocks are aggregated together and sent through the algorithm.
After determining if savings can be achieved or data must be written to
disk, space within the Pool is allocated if needed, and the data is written to
the drives.


Data Reduction process

Process:
1. System write cache sends data to the Data Reduction algorithm during
proactive cleaning or flushing.
2. Data Reduction logic determines any savings.
3. Space is allocated in the storage resource for the dataset if needed,
and the data is sent to the disk.

Data Reduction - Deduplication


Data is sent to the Data Reduction algorithm during proactive cleaning or
flushing of write path data.

In the example, an 8 KB block enters the Data Reduction algorithm and
Advanced Deduplication is disabled.

• The 8 KB block is first passed through the deduplication algorithm.
  Within this algorithm, the system determines if the block consists
  entirely of zeros, or matches a known pattern within the system.
• If a pattern is detected, the private space metadata of the storage
  resource is updated to include information about the pattern, along with
  information about how to re-create the data block if it is accessed in
  the future.
• Also, when deduplication finds a pattern match, the remainder of the
  Data Reduction feature is skipped for those blocks, which saves system
  resources. None of the 8 KB block of data is written to the Pool at this
  time.
• If a block was allocated previously, then the block can be freed for
  reuse. When a read for the block of data is received, the metadata is
  reviewed, and the block is re-created and sent to the host.
• If a pattern is not found, the data is passed through the Compression
  Algorithm. If savings are achieved, space is allocated on the Pool to
  accommodate the data.
• If the data is an overwrite, it may be written to the original location if it
  is the same size as before.

The example displays the behavior of the Data Reduction algorithm when
Advanced Deduplication is disabled.

Data Reduction algorithm behavior when Advanced Deduplication is disabled
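As a toy model of this path (assumed names; zlib stands in for the system's compression engine, and only the all-zero pattern is modeled), the flow with Advanced Deduplication disabled might look like:

```python
# Simplified sketch of the data-reduction write path with Advanced
# Deduplication disabled: an 8 KB block is checked for a known pattern;
# on a match only metadata would be updated, otherwise the block falls
# through to compression. Not Unity's actual code.
import zlib

BLOCK_SIZE = 8 * 1024
KNOWN_PATTERNS = {b"\x00" * BLOCK_SIZE: "all-zero"}   # hypothetical pattern table

def reduce_block(block):
    """Return ('pattern', name), ('compressed', data), or ('raw', data)."""
    if block in KNOWN_PATTERNS:
        # pattern match: record metadata only, skip the rest of the feature
        return ("pattern", KNOWN_PATTERNS[block])
    compressed = zlib.compress(block)
    if len(compressed) < len(block):
        return ("compressed", compressed)   # allocate the compressed size on disk
    return ("raw", block)                   # no savings: write the original block

kind, _ = reduce_block(b"\x00" * BLOCK_SIZE)
print(kind)   # → pattern
kind, _ = reduce_block(b"ab" * (BLOCK_SIZE // 2))
print(kind)   # → compressed
```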

Data Reduction - Advanced Deduplication


If an 8 KB block is not deduplicated by the zero and common pattern
deduplication algorithm, the data is passed into the Advanced
Deduplication algorithm.


Each 8 KB block receives a fingerprint, which is compared to the
fingerprints for the storage resource. If a matching fingerprint is found,
deduplication occurs. The private space within the resource is updated to
include a reference to the block of data residing on disk. No data is written
to disk at this time.

Storage resource savings are compounded as deduplication can
reference compressed blocks on disk. If a match is not found, the data is
passed to the compression algorithm. Advanced Deduplication only
compares and detects duplicate data that is found within a single storage
resource, such as a LUN or File System.

The fingerprint cache is a component of the Advanced Deduplication
algorithm. The fingerprint cache is a region in system memory that is
reserved for storing fingerprints for each storage resource with Advanced
Deduplication enabled. There is one fingerprint cache per storage
processor, and it contains the fingerprints for storage resources residing
on that SP.

Data Reduction algorithm behavior when Advanced Deduplication is enabled


Through machine learning and statistics, the fingerprint cache determines
which fingerprints to keep, and which ones to replace with new
fingerprints. The fingerprint cache algorithm learns which resources have
high deduplication rates and allows those resources to consume more
fingerprint locations.

• If no fingerprint match is detected, the blocks enter the compression
  algorithm.
• If savings can be achieved, space is allocated within the Pool which
  matches the compressed size of the data, the data is compressed, and
  the data is written to the Pool. When Advanced Deduplication is
  enabled, the fingerprint for the block of data is also stored with the
  compressed data on disk.
• The fingerprint cache is then updated to include the fingerprint for the
  new data.

Compression does not compress data if no savings can be achieved. In
this instance, the original block of data is written to the Pool. Waiting
to allocate space within the resource until after the compression algorithm
is complete helps avoid over-allocating space within the storage resource.
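A toy model of this fingerprint path can make the flow concrete. Here sha256 and zlib stand in for the real fingerprint and compression engines, and the class and method names are assumptions, not Unity's implementation:

```python
# Toy model of Advanced Deduplication: each 8 KB block is fingerprinted;
# a fingerprint-cache hit stores only a reference, while a miss compresses
# and writes the block and caches its fingerprint for future matches.
import hashlib
import zlib

class FingerprintCache:
    """Per-SP fingerprint cache: fingerprint -> location of data on disk."""
    def __init__(self):
        self.cache = {}
        self.disk = []                      # stands in for pool storage

    def write_block(self, block):
        fp = hashlib.sha256(block).hexdigest()
        if fp in self.cache:
            # duplicate: update metadata only, nothing is written to disk
            return f"dedup ref -> extent {self.cache[fp]}"
        self.disk.append(zlib.compress(block))   # compress, then allocate and write
        self.cache[fp] = len(self.disk) - 1      # remember where the data landed
        return f"written to extent {len(self.disk) - 1}"

cache = FingerprintCache()
print(cache.write_block(b"A" * 8192))   # → written to extent 0
print(cache.write_block(b"A" * 8192))   # → dedup ref -> extent 0
```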

Asynchronous Replication Topologies


While a system can replicate to multiple destination systems, an individual
block storage resource can only replicate to a single destination block
storage resource.


One-Directional

One-Directional asynchronous replication of two storage resources between systems

One-Directional replication is typically deployed when only one of the
systems is used for production I/O.

• The second system is a replication target for all production data and
  sits idle.
• If the need arises, the DR system can be placed into production and
  provide production I/O.
• Mirroring the production system configuration on the DR system is
  suggested.
  • Each system would have the same performance potential.
  • For physical systems, this configuration would mean mirroring the
    drive configurations and the pool layout.
  • On Dell UnityVSA systems, this configuration would mean using
    similar virtual drives and pools.


Bi-directional

Bi-Directional asynchronous replication of two storage resources between systems

The Bi-Directional replication topology is typically used when production
I/O is spread across multiple systems or locations.

• The production I/O from each system is mirrored to the peer system.
• If there is an outage, one of the systems can be promoted as the
  primary production system, and all production I/O can be sent to it.
• After the outage is resolved, the replication configuration can be
  changed back to its original configuration.
• This replication topology ensures that both systems are in use by
  production I/O simultaneously.


One-to-Many

One-to-many asynchronous replication topology

The One-to-Many replication topology is deployed when production exists
on a single system, but replication must occur to multiple remote systems.

• This replication topology can be used to replicate data from a
  production system to a remote location to provide local data access to
  a remote team.
• At the remote location, Dell Unity XT Snapshots can be used to
  provide host access to a local organization or test team.


Many-to-One

Many-to-one asynchronous replication topology

The Many-to-One replication topology is deployed when multiple
production systems exist, and replicating to a single system to consolidate
the data is required.

• The topology is useful when multiple production data sites exist, and
  data must be replicated from these sites to a single DR data center.
• Deployment at Remote Office Branch Office (ROBO) locations is a use
  case for this type of configuration.
  • A Dell UnityVSA may be deployed at each ROBO site, and all
    replicate back to a single All-Flash or Hybrid system.
  • The use of Dell UnityVSA at ROBO locations eliminates the need
    for a physical Dell Unity XT system at each site.

Synchronous Replication Topologies


For synchronous replication, two topologies can be used, either One-
Directional or Bi-Directional replication.


One-Directional

One-Directional synchronous replication of two storage resources between systems

One-Directional replication is typically deployed when only one of the
systems is used for production I/O.

• The second system is a replication target for all production data and
  sits idle.
• If the need arises, the DR system can be placed into production and
  provide production I/O.
• In this scenario, mirroring the production system configuration,
  including the number of drives and pool layout, on the DR system is
  suggested.
  • Each system would have the same performance potential.


Bi-directional

Bi-Directional synchronous replication of two storage resources between systems

The Bi-Directional replication topology is typically used when production
I/O is spread across multiple systems or locations.

• The production I/O from each system is mirrored to the peer system.
• If there is an outage, one of the systems can be promoted as the
  primary production system, and all production I/O can be sent to it.
• After the outage is resolved, the replication configuration can be
  changed back to its original configuration.
• This replication topology ensures that both systems are in use by
  production I/O simultaneously.

Hybrid Replication Topologies


Dell Unity XT file remote protection supports combining synchronous and
asynchronous remote replication sessions to form a topology that spans
multiple sites.

• Synchronous sessions are kept within metro distances to limit RTT
  latency.
• Multiple sessions can fan out from a replication source.
• Sessions can cascade from a replication source in a multihop manner.


Example file replication topology

Use cases:

• Expands the file replication protection domain
• Increases resilience for file datasets
• Expands data access
  • For test and backup

Midrange Sizer Software Requirements


The Midrange Sizer supports multiple browsers and devices.

Minimum browser requirements:

• Mozilla Firefox v47+
• Microsoft Internet Explorer v11+
• Microsoft Edge v37+
• Google Chrome v52+
• Apple Safari v9+



Advanced Deduplication
Dynamic deduplication algorithm which reduces storage consumption by
eliminating duplicate 8 KB blocks within a storage resource.

If the option is enabled, and deduplication does not detect a pattern, the
data is passed through the Advanced Deduplication algorithm.

Advanced Deduplication uses fingerprints created for each block of data to
quickly identify duplicate data within the dataset. If a received 8 KB block
matches an existing cached fingerprint, this block is not written and a
pointer to the saved fingerprint is created.

AES 256-bit Encryption Standard
The AES 256-bit Encryption Standard is National Institute of Standards
and Technology (NIST) and Trusted Computing Group (TCG) compliant.
Supporting encryption industry standards allows PowerStore to be sold to
US Federal agencies and companies that require equipment to be TCG
compliant.

CEPA
A mechanism in which applications can register to receive event
notification and context from PowerStore systems. CEPA runs on
Windows or Linux. CEPA delivers to the application both event notification
and associated context in one message.

Drive Extent
A drive extent is a portion of a drive in a dynamic pool. Drive extents are
either used as a single position of a RAID extent or can be used as spare
space. The size of a drive extent is consistent across drive technologies –
drive types.

Drive Extent Pool
Management entity for drive extents. Tracks drive extent usage by RAID
extents and determines which drive extents are available as spares.



Dynamic Pool Private LUN
A single dynamic pool private LUN is created for each dynamic RAID
Group.
The size of a private LUN is the number of RAID extents that are
associated with the private LUN. A private LUN may be as small as the
size of a single drive.

Dynamic RAID Group
A Dynamic pool RAID Group is a collection of RAID extents, and it can
span more than 16 drives. The RAID group is based on dynamic RAID
with a single associated RAID type and RAID width.
The number of RAID Groups and the size of the RAID Group can vary
within a pool. It depends on the number of drives and how the pool was
created and expanded.

Fail-Safe Network (FSN)
A Fail-Safe Network (FSN) is a high-availability feature that extends link
failover into the network by providing switch-level redundancy.

KMIP
KMIP is a communication protocol that defines message formats for the
manipulation of cryptographic keys on a key management server.

Link Aggregation Control Protocol (LACP)
The Link Aggregation Control Protocol (LACP) is defined in the IEEE
802.3ad specification as a method to control the bundling of several
physical ports together to form a single logical channel.

Midrange Sizer
Midrange Sizer is an SSO [Single Sign-On] HTML5 based interface which
provides Dell Unity systems design capabilities with integrated best
practices, and ordering integration.

PACO
The Proactive Copy feature, or PACO, enables disks to actively copy
data to a hot spare. The operation is triggered by the number of
existing media errors on the disk. PACO reduces the possibility of two bad
disks by identifying whether a disk is about to go bad and proactively
running a copy of the disk.

RAID Extent
A collection of drive extents. The selected RAID type and the set RAID
width determine the number of drive extents within a RAID extent.
Each RAID extent contains a single drive extent from each of a number of
drives equal to the RAID width.
RAID extents can only be part of a single RAID Group and can never span
across drive partnership groups.

Spare Space
Spare space refers to drive extents in a drive extent pool not associated
with a RAID Group. Spare space is used to rebuild a failed drive in the
drive extent pool.

Thin Provisioning
Thin provisioning allows multiple storage resources to subscribe to a
common storage capacity. The storage system allocates an initial quantity
of storage to the storage resource. This provisioned size represents the
maximum capacity to which the storage resource can grow without being
increased. Volumes can be between 1 MB and 256 TB in size.

Virtual Link Trunking interconnect (VLTi)
Virtual Link Trunking interconnect (VLTi) enables VLT for the Layer 2
interconnect between switches, providing a logical single view to the
connected devices.

Virtual Volumes (vVols)
VMware Virtual Volumes (vVols) are storage objects that are provisioned
automatically by a VMware framework to store Virtual Machine (VM) data.
