Dell Unity Implementation and Administration: Participant Guide


DELL UNITY

IMPLEMENTATION AND
ADMINISTRATION

PARTICIPANT GUIDE



Table of Contents

System Administration ........................................................................... 15

User Interfaces and Access Control....................................................................... 16


Administrative User Interfaces - Unisphere ........................................................................ 17
Administrative User Interfaces - Unisphere CLI or UEMCLI ............................................... 20
Administrative User Interfaces - REST API ........................................................................ 23
Access Control - User Authentication ................................................................................. 24
Access Control - Default User Accounts ............................................................................ 27
Access Control - Role-Based Administration ...................................................................... 29
Centralized Management - Unisphere Central ................................................................... 30
Centralized Management - CloudIQ ................................................................................... 34

Basic System Settings............................................................................................. 40


Unisphere Settings............................................................................................................. 41
Configure Unisphere Basic Settings - Licenses .................................................................. 42
Configure Unisphere Basic Settings - System Time ........................................................... 44
Configure Unisphere Basic Settings - Schedule Time Zone ............................................... 46
Configure Unisphere Basic Settings - Domain Name Servers ............................................ 48
Configure Unisphere Basic Settings - Management Port Network Address ........................ 50
Configure Unisphere Basic Settings - Failback Policy ........................................................ 52

Support Configuration ............................................................................................. 54


Configure Support Settings - Proxy Server......................................................................... 55
Configure Support Settings - Dell Support Credentials ....................................................... 57
Configure Support Settings - Contact Information .............................................................. 59
Secure Connect Gateway (SCG) ....................................................................................... 61
Configure Support Settings - Secure Connect Gateway ..................................................... 65
Secure Connect Gateway - Readiness Check ................................................................... 67
Secure Connect Gateway Configuration - Integrated ......................................................... 73
Secure Connect Gateway Configuration - Centralized ....................................................... 80
Activity: Unisphere Tour ..................................................................................................... 83



Unisphere Alerts and Events Monitoring ............................................................... 84
Unisphere Alerts and Events Monitoring ............................................................................ 85
Unisphere System Alerts.................................................................................................... 86
System Alerts Severity Levels ............................................................................................ 87
System Alerts States.......................................................................................................... 89
Manage Alerts.................................................................................................................... 90
View Alerts Details ............................................................................................................. 91
Configure Alert Notifications - Email ................................................................................... 93
Configure Alert Notifications - SNMP ................................................................................. 96
System Jobs Monitoring ................................................................................................... 100
System Logs Monitoring................................................................................................... 101
Add Remote Logging Configuration ................................................................................. 102
Edit Remote Logging Configuration ................................................................................. 105
System Administration Key Points ................................................................................... 107

Storage Resources ............................................................................... 111

Dell Unity XT Platform Supported Storage Resources ..................................................... 112


Unified Storage Pools ...................................................................................................... 114
Homogeneous and Heterogeneous Pools ........................................................................ 116
Storage Pools Management ............................................................................................. 118

Dynamic Pools ...................................................................................... 119

Dynamic Pools Overview ................................................................................................. 120


Dynamic Pool Benefits ..................................................................................................... 121
Provisioning Dynamic Pools ............................................................................................. 122
Creating Dynamic Pools................................................................................................... 123
Creating Dynamic Pools in Unisphere .............................................................................. 125
View Dynamic Pool Properties ......................................................................................... 131
Expanding Dynamic Pools ............................................................................................... 133
Expand Dynamic Pool by Set of Drives ............................................................................ 135
Expand Dynamic Pool with Single Drive .......................................................................... 137
Mixing Drive Sizes within Dynamic Pools ......................................................................... 139



Expanding Dynamic Pools in Unisphere .......................................................................... 142
Spare Space .................................................................................................................... 143
Drive Rebuild ................................................................................................................... 144
Demonstration - Dynamic Pools ....................................................................................... 146

Traditional Pools ................................................................................... 147

Traditional Pools Overview .............................................................................................. 148


Provisioning Traditional Pools .......................................................................................... 149
Creating Traditional Pools - Process ................................................................................ 150
Creating Traditional Pools ................................................................................................ 151
Viewing Traditional Pool Properties ................................................................................. 153
Expanding Traditional Pools ............................................................................................ 155
Activity: Creating Storage Pools ....................................................................................... 156

Provision Block Storage ...................................................................... 157

Block Storage Resources................................................................................................. 158


LUN Provisioning ............................................................................................................. 160
Create LUNs .................................................................................................................... 161
View LUN Properties........................................................................................................ 168
Consistency Groups Provisioning .................................................................................... 171
Create Consistency Groups ............................................................................................. 172
View Consistency Group Properties ................................................................................. 180
Activity: Create Block Storage LUNs and Consistency Groups ........................................ 181
Block Storage Access Overview ...................................................................................... 182
Host Access Requirements .............................................................................................. 184
Host-to-Storage Connectivity ........................................................................................... 186
Host-to-Storage Connectivity Rules ................................................................................. 188
Front-End Connectivity Options ....................................................................................... 189
Front-End Fibre Channel Interfaces Management............................................................ 192
Host Fibre Channel Initiators Registration ........................................................................ 194
Host Fibre Channel Initiators Management - Unisphere ................................................... 196
Front-End iSCSI Interfaces Management - Unisphere ..................................................... 198
iSCSI CHAP Security Settings ......................................................................................... 201



Host iSCSI Initiator Options ............................................................................................. 203
Host iSCSI Initiator Registration Process – Windows Host............................................... 205
Host iSCSI Initiator Registration Process – Linux Host..................................................... 206
Host iSCSI Initiators Management – Unisphere ............................................................... 208
Host Configuration ........................................................................................................... 210
Hosts Configuration Management - Unisphere................................................................. 211
Host Access to Provisioned Block Storage....................................................................... 214
Host Groups Overview ..................................................................................................... 216
Host Group Configurations............................................................................................... 218
Host Groups Management - Unisphere ............................................................................ 221
Activity: Windows Host Access to Block Storage.............................................................. 226
Activity: Linux Host Access to Block Storage ................................................................... 227

Provision File Storage .......................................................................... 228

File Storage ..................................................................................................................... 229


NAS Servers .................................................................................................................... 231
NAS Servers Management .............................................................................................. 232
Create NAS Servers ........................................................................................................ 233
View NAS Server Properties ............................................................................................ 235
Ethernet Ports .................................................................................................................. 237
Activity: Create NAS Servers ........................................................................................... 239
File Systems Management............................................................................................... 240
Create File System .......................................................................................................... 241
View File System Properties ............................................................................................ 243
Activity: Create File Systems............................................................................................ 245
File Storage Access Overview ......................................................................................... 246
Create Host Configurations for NFS Clients ..................................................................... 248
NFS Shares Management ................................................................................................ 250
Creating NFS Shares ....................................................................................................... 251
Viewing NFS Share Properties......................................................................................... 252
Select Host Access .......................................................................................................... 254
Setting Host Access Levels .............................................................................................. 256
Connecting Host to Shared NFS File System................................................................... 257



Activity: NFS File Storage Access .................................................................................... 258
SMB Shares Management ............................................................................................... 259
Create SMB Shares ......................................................................................................... 260
View SMB Share Properties ............................................................................................. 261
Connect Host to Shared SMB File System ....................................................................... 263
Activity: SMB File Storage Access ................................................................................... 264

Provision VMware Datastores.............................................................. 265

VMware Storage .............................................................................................................. 266


VMware Datastores Management .................................................................................... 268
Provision VMFS Datastores ............................................................................................. 269
Provision NFS Datastores ................................................................................................ 271
View VMware Datastore Properties ................................................................................. 273
VMware Datastores Access ............................................................................................. 274
ESXi Host Configuration Profile ....................................................................................... 275
VMware Host Access to Provisioned Datastore ............................................................... 277
Discovered Storage Device in vSphere ............................................................................ 279
Automatically Created Datastores .................................................................................... 280
Activity: VMware Datastore Access .................................................................................. 281
VMware Virtual Volumes (vVols) ...................................................................................... 282
What Is Stored in vVol Datastores? ................................................................................. 284
Provisioning vVols Workflow - Storage............................................................................. 285
Provisioning vVols Workflow - Storage (Associate Capability Profiles with Storage Pools) .............. 286
Provisioning vVols Workflow - Storage (Create Storage Containers) ............................... 290
Protocol Endpoints ........................................................................................................... 292
Provisioning vVols Workflow – vSphere Environment ...................................................... 294
Provisioning vVols Workflow - vSphere Environment (Add Storage Provider) .................. 295
Provisioning vVols Workflow - vSphere Environment (Add vVol Datastores).................... 297
Provisioning vVols Workflow - vSphere Environment (Create Storage Policies) ............... 300
Provisioning vVols Workflow - vSphere Environment (Provision VMs to Storage Policies) .............. 302
VM Virtual Volumes in Unisphere ..................................................................................... 305
Demonstration - vVol Datastores ..................................................................................... 308



Storage Provisioning Key Points ...................................................................................... 309

FAST Cache ............................................................................................................ 313


FAST Cache Overview..................................................................................................... 314
FAST Cache Components ............................................................................................... 316
FAST Cache Operations .................................................................................................. 318
Supported Drives and Configurations............................................................................... 319
Create FAST Cache......................................................................................................... 320
Enable FAST Cache ........................................................................................................ 321
Expand FAST Cache ....................................................................................................... 323
Expand FAST Cache Management .................................................................................. 326
Shrink FAST Cache ......................................................................................................... 327
Shrink FAST Cache Management.................................................................................... 329
Delete FAST Cache ......................................................................................................... 330
Demonstration ................................................................................................................. 331

Host I/O Limits ........................................................................................................ 332


Host I/O Limits Overview.................................................................................................. 333
Host I/O Limit Use Cases ................................................................................................. 334
Host I/O Limit Policy Types .............................................................................................. 335
Host I/O Limit Policy – Examples ..................................................................................... 336
Shared Policies ................................................................................................................ 338
Shared Density-Based Host I/O Limits ............................................................................. 339
Multiple Resources Within a Single Policy ....................................................................... 340
Density-Based Host I/O Limits Values.............................................................................. 342
Burst Feature Overview ................................................................................................... 343
Burst Creation .................................................................................................................. 344
Burst Configuration .......................................................................................................... 345
Burst Calculation Example ............................................................................................... 346
Burst Scenarios ............................................................................................................... 347
Burst Scenario 1 .............................................................................................................. 348
Animation - Burst Scenario 1 ........................................................................................... 355
Burst Scenario 2 .............................................................................................................. 356



Animation - Burst Scenario 2 ........................................................................................... 362
Policy Level Controls ....................................................................................................... 363
Policy Level Controls Defined .......................................................................................... 364
Host I/O Limits System Pause – Settings ......................................................................... 366
Host I/O Limits Policy Pause – Unisphere ........................................................................ 368
Demonstration ................................................................................................................. 371

UFS64 File System Extension and Shrink ........................................................... 372


File System Extension Overview ...................................................................................... 373
Manual UFS64 File System Extension ............................................................................. 374
Automatic UFS64 File System Extension ......................................................................... 375
Storage Space Reclamation Overview ............................................................................. 376
UFS64 Thin File System Manual Shrink........................................................................... 377
UFS64 File System Automatic Shrink .............................................................................. 379
File System Extension and Shrink Operations ................................................................. 380

File-level Retention (FLR) ...................................................................................... 381


FLR Overview .................................................................................................................. 382
FLR Capabilities and Interoperability................................................................................ 385
Process to Enable and Manage FLR................................................................................ 387
Enable FLR on a File System .......................................................................................... 389
Enable writeverify for FLR-C ............................................................................................ 390
Define FLR Retention Periods.......................................................................................... 391
Set File State - NFS ......................................................................................................... 393
Set File State - FLR Toolkit for SMB ................................................................................ 395
Set File State - Automated ............................................................................................... 397
Scalability, Performance and Compliance Key Points ...................................................... 398

Data Reduction ....................................................................................................... 400


General Data Reduction Overview ................................................................................... 401
Data Reduction Overview in Dell Unity XT ....................................................................... 402
Considerations ................................................................................................................. 403
Data Reduction Theory of Operation ................................................................................ 404
Data Reduction - Deduplication........................................................................................ 405



Data Reduction - Advanced Deduplication ....................................................................... 407
Read Operation ............................................................................................................... 409
Enable Data Reduction and Advanced Deduplication on Supported Storage Resources . 410
Verify Pool Flash Capacity Utilization ............................................................................... 413
Identify Data Reduction Savings in Storage Resources ................................................... 414
Configuring Data Reduction on an Existing Storage Resource......................................... 416
Data Reduction and Advanced Deduplication with Consistency Groups .......................... 418
Data Reduction and Advanced Deduplication Using Local LUN Move ............................. 419
Expand an All-Flash Pool with Data Reduction Enabled Storage Resources ................... 421
Considerations About Expansion of Dynamic Pools with Data Reduction Enabled Resources .............. 426
Identify Flash Tier Free Space Considerations................................................................. 427
Flash Tier Free Space Considerations - Metadata Relocation ......................................... 428
Viewing Storage Resource Properties - LUN ................................................................... 430
Understanding Savings Reporting .................................................................................... 431
Storage Resource Level - Block ....................................................................................... 432
Data Reduction Savings - Pool Level ............................................................................... 435
Data Reduction Savings - System Level .......................................................................... 436
Calculating Data Reduction Savings ................................................................................ 437
Data Reduction Savings - Savings Ratio .......................................................................... 438
Data Reduction Savings - Saving Percentage.................................................................. 439
Data Reduction and Advanced Deduplication with Replication......................................... 440
Data Reduction and Advanced Deduplication with Native File and Block Import .............. 442

FAST VP .................................................................................................................. 443


FAST VP Overview .......................................................................................................... 444
Tiering Policies ................................................................................................................ 446
Supported RAID Types and Drive Configurations ............................................................ 448
FAST VP Management .................................................................................................... 449

Thin Clones ............................................................................................................ 452


Thin Clones Overview ...................................................................................................... 453
Thin Clone Capabilities .................................................................................................... 454



Recommended Uses for Thin Clones............................................................................... 456
Technical Comparison – Snapshots and Thin Clones ...................................................... 457
Theory of Operations: Create Thin Clones ....................................................................... 458
Theory of Operations: Refresh Thin Clones - 1 of 2 ......................................................... 459
Theory of Operations: Refresh Thin Clones - 2 of 2 ......................................................... 460
Theory of Operations: Refresh Base LUN ........................................................................ 461
Thin Clone Considerations ............................................................................................... 462
LUN Refresh Operation - 1 of 2........................................................................................ 463
LUN Refresh Operation - 2 of 2........................................................................................ 464
Data Reduction and Advanced Deduplication with Snapshots and Thin Clones ............... 465

File System Quotas ................................................................................................ 466


File System Quotas overview ........................................................................................... 467
File System Quotas Configuration .................................................................................... 468
Quota Usage.................................................................................................................... 470
Quota Limit ...................................................................................................................... 471
Storage Efficiency Key Points .......................................................................................... 474

Local LUN Move ..................................................................................................... 476


Local LUN Move Overview ............................................................................................... 477
What Gets Moved ............................................................................................................ 478
Local LUN Move Process................................................................................................. 479
Local LUN Move Requirements ....................................................................................... 482
Local LUN Move Capabilities ........................................................................................... 483
Local LUN Move Session Configuration ........................................................................... 484
Monitoring Move Session ................................................................................................. 486
LUN Cancel Move Operation ........................................................................................... 487

Local NAS Server Mobility .................................................................................... 488


Local NAS Server Mobility Overview ................................................................................ 489
Local NAS Server Mobility Capabilities ............................................................................ 490
Moving a NAS Server to Peer SP .................................................................................... 491
Monitoring a NAS Server Move ........................................................................................ 492
Demonstration: Local NAS Server Move .......................................................................... 493



Data Mobility Key Points .................................................................................................. 494

Snapshots Overview .............................................................................................. 496


Snapshots Overview ........................................................................................................ 497
Snapshot Redirect on Write Architecture ......................................................................... 498
Combined (LUN and File System) Snapshot Capabilities ................................................. 500
Apply a Snapshot Schedule ............................................................................................. 501
Snapshot Settings for a Pool ............................................................................................ 502
Creating Snapshots ......................................................................................................... 503
Snapshot Operations ....................................................................................................... 505
Snapshot Schedules ........................................................................................................ 506

LUNs and Consistency Groups Snapshots ......................................................... 511


LUN Consistency Group Snapshots ................................................................................. 512
Multiple LUN Snapshots................................................................................................... 513
Creating LUN Snapshots - Create LUNs Wizard .............................................................. 514
Creating LUN Snapshots - LUN Properties ...................................................................... 515
LUN Snapshot Restore - Process .................................................................................... 516
LUN Snapshot Restore - Operation ................................................................................. 519
Attach LUN Snapshot to Host - Process .......................................................................... 520
Attach LUN Snapshot to Host - Operation ........................................................................ 522
Detach LUN Snapshot from Host - Process ..................................................................... 524
Detach LUN Snapshot from Host - Operation .................................................................. 526
LUN Snapshot Copy - Process ........................................................................................ 527
LUN Snapshot Copy - Operation...................................................................................... 529
Accessing a LUN Snapshot ............................................................................................. 530
Activity: LUN Snapshots................................................................................................... 532

File System Snapshots .......................................................................................... 533


Multiple File System Snapshots ....................................................................................... 534
Creating File System Snapshots - Create File System Wizard ......................................... 535
Creating File System Snapshots - File System Properties ............................................... 536
File System Snapshot Restore - Process ......................................................................... 537



File System Snapshot Restore - Operation ...................................................................... 540
File System Snapshot Copy - Process ............................................................................. 541
File System Snapshot Copy - Operation .......................................................................... 543
Accessing a File System Read/Write Snapshot ............................................................... 544
Accessing a File System Read-Only Snapshot ................................................................ 546
Activity: File System Snapshots ....................................................................................... 550

Native vVol Snapshots .......................................................................................... 551


Native vVol Snapshots - Overview ................................................................................... 552
Creating a vVol Snapshot in Unisphere............................................................................ 553
Restore a vVol from a Snapshot in Unisphere .................................................................. 558
Data Protection with Snapshots Key Points ..................................................................... 561

Replication Overview ............................................................................................. 565


Replication Overview ....................................................................................................... 566
Asynchronous Local Replication Overview....................................................................... 568
Remote Replication Overview .......................................................................................... 569
Creating Replication Sessions ......................................................................................... 571
NAS Server and File System Remote Replication ............................................................ 572

Synchronous Replication Overview ..................................................................... 573


Synchronous Replication Architecture.............................................................................. 574
Synchronous Remote Replication Topologies .................................................................. 576
Synchronous Replication Process – 7 Steps .................................................................... 577
Animation - Synchronous Replication Process ................................................................. 582
Synchronous Replication States ...................................................................................... 583
Synchronous Replication of File Snapshots ..................................................................... 585
Synchronous Replication Capabilities .............................................................................. 587

Synchronous Replication Configuration ............................................................. 588


Synchronous Replication Creation Process ..................................................................... 589
Synchronous Remote Replication Communication........................................................... 591
Determining Synchronous FC Ports ................................................................................. 592
Replication Interfaces – Synchronous .............................................................................. 594



Replication Connections – Synchronous .......................................................................... 596
Verifying Synchronous Replication Communications ....................................................... 597
Synchronous Session – Resource Creation Wizard ......................................................... 598
Synchronous Session – Resource Properties .................................................................. 599
Synchronous Session – Destination Resources ............................................................... 601
Synchronous Session – Summary ................................................................................... 602
Synchronous Session – Results....................................................................................... 603
Synchronously Replicated Snapshot Schedules .............................................................. 604

Asynchronous Replication Overview ................................................................... 605


Asynchronous Local Replication Architecture .................................................................. 606
Asynchronous Remote Replication Architecture .............................................................. 607
Asynchronous Remote Replication Topologies ................................................................ 608
Asynchronous Replication Process - 8 Steps ................................................................... 610
Animation - Asynchronous Replication Process ............................................................... 617
Asynchronous Replication of Snapshots .......................................................................... 618
Architecture for Asynchronous Replication of Snapshots ................................................. 620
Asynchronous Replication Capabilities ............................................................................ 622

Asynchronous Replication Configuration ........................................................... 623


Asynchronous Replication Creation Process.................................................................... 624
Asynchronous Remote Replication Communication ......................................................... 625
Replication Interfaces – Asynchronous ............................................................................ 627
Replication Connection – Asynchronous .......................................................................... 628
Verifying Replication Communications ............................................................................. 630
Asynchronous Session – Resource Creation Wizard ....................................................... 631
Asynchronous Session – Resource Properties ................................................................ 633
Asynchronous Session – Destination Resources ............................................................. 635
Asynchronous Session – Summary ................................................................................. 636
Asynchronous Session – Results ..................................................................................... 637

Replication Operations .......................................................................................... 638


Unisphere Resource Filtering ........................................................................................... 639



System Level Replication Operations............................................................................... 642
System Level Pause and Resume ................................................................................... 643
System Level Failover ...................................................................................................... 646
Session Operations.......................................................................................................... 648
Source and Destination Operations.................................................................................. 650
Replication and NAS Server Interfaces ............................................................................ 652
Grouped NAS Server and File System Session Operations ............................................. 653
Replication Operations - Failover with Sync ..................................................................... 654
Replication Operations - Failover with Sync Process ....................................................... 655
Replication Operations - Planned Failover ....................................................................... 658
Replication Operations - Planned Failover Process ......................................................... 659
Replication Operations - Unplanned Failover ................................................................... 662
Replication Operations - Unplanned Failover Process ..................................................... 663
Replication Operations - Resume .................................................................................... 665
Replication Operations - Resume Process ....................................................................... 666
Replication Operations - Failback .................................................................................... 669
Replication Operations - Failback Process ....................................................................... 670
Demonstration: Synchronous Remote Replication ........................................................... 673

Replica Access ....................................................................................................... 674


Block Resource Remote Replica Data Access ................................................................. 675
File Resource Remote Replica Data Access .................................................................... 676
Create a Proxy NAS Server for Replica File Data Access ................................................ 678
Data Protection with Replication Key Points..................................................................... 679

Appendix ............................................................................................... 683

Supported Configurations for Data Reduction and Advanced Deduplication .................... 692

Glossary .................................................................................................. 698



System Administration



User Interfaces and Access Control


Administrative User Interfaces - Unisphere

Overview

Unisphere login screen: enter the IP address of the Dell Unity management interface, then enter the user credentials to log in to the storage system.

The Unisphere user interface is web-based software built on HTML5 technology, with support for a wide range of browsers.1

Unisphere enables the configuration and management of a single Dell Unity storage system (physical models or UnityVSA) from a single interface.

1 The supported browsers are Google Chrome v33 or later, Microsoft Edge, Mozilla Firefox v28 or later, and Apple Safari v6 or later.


The interface provides an overall view of what is happening in the environment, plus an intuitive way to manage the storage array.

Launch Unisphere by entering the IP address of the storage system management port in the address bar of a supported web browser.

Provide the username and password to log in to the system.

Interface Navigation

Unisphere interface showing the System View content page, with the Top Menu, Sub Menus, Navigation Pane, and Main Page areas labeled.

The Unisphere interface has four main areas that are used for navigation and visualization of content:

• The Navigation Pane has the Unisphere options for storage provisioning, host access, data protection and mobility, system operation monitoring, and support.
• The Main Page is where the pertinent information about options from the navigation pane and a particular submenu is displayed. The page also shows the available actions that can be performed for the selected object.
• The Top Menu has links for system alerts, job notifications, the help menu, Unisphere preferences, global system settings, and CloudIQ.
• The Sub Menus provide various tabs, links, and more options for the selected item from the navigation pane.

Dashboard

Unisphere Dashboard with menu and submenu options, and view blocks

The Unisphere main dashboard provides a quick view of the system health and storage health status.

A storage administrator can configure new dashboards by selecting Add and providing a name.

The customized dashboards can be renamed and deleted using the dashboard submenu.

Dashboard view blocks provide a summary of system storage usage, system alerts, and storage resource health status. The view blocks can be added to a dashboard, renamed, or removed.

To add view blocks, open the selected dashboard submenu and select Customize.


Administrative User Interfaces - Unisphere CLI or UEMCLI

Overview

The Unisphere CLI, or UEMCLI, enables a storage administrator to script some of the most commonly performed tasks on a Dell Unity storage system.

Unisphere CLI enables you to run commands on a Dell Unity storage system from a host with the Unisphere CLI client installed.
• Unisphere CLI supports provisioning and management of network block and file-based storage.
• The Unisphere CLI client can be downloaded from the online support website and installed on a Microsoft Windows or UNIX/Linux computer.

The application is intended for advanced users who want to use commands in
scripts for automating routine tasks.

The routine tasks include:


• Configuring and monitoring the system
• Managing users
• Provisioning storage
• Protecting data
• Controlling host access to storage

Command Syntax

The command syntax begins with the executable uemcli, using switches and
qualifiers as described here.

uemcli [<switches>] <object path> [<object qualifier>] <action> [<action qualifiers>]

Where:
• Switches: Used to access a system, upload files to the system, and manage
security certificates.


• Object: Type of object on which to perform an action, such as a user, host, or LDAP setting.
• Object qualifier: Unique identifiers for objects in the system. The format is -<identifier> <value>.
• Action: Operations that are performed on an object or object type. Examples of
actions are create and set.
• Action qualifier: Parameters specific to actions. Examples of action qualifiers
are -passwd and -role.

Example

Example of a UEMCLI command output

In the example, the Unisphere CLI command displays the general settings for a physical Unity storage system. A command of this form is shown after the list.
• Access a Dell Unity 300F system using the management port with IP address 192.168.1.230.
− The first time a Unity system is accessed, the command displays the system certificate. (Not shown here.)
− The storage administrator has the choice to accept it only for the session or to accept and store it for all sessions.
• Log in to the system with the provided local admin user credentials.
• Retrieve the array's general settings and output the details on the screen.
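A minimal sketch of what such a command might look like, assuming the default admin account and the /sys/general object path; the password here is a made-up placeholder, and the exact syntax should be verified against the Unisphere CLI User Guide for your OE release:

uemcli -d 192.168.1.230 -u Local/admin -p MyPassword456! /sys/general show -detail

The -d switch identifies the target system, -u and -p supply the credentials, /sys/general is the object path, and show is the action with the -detail action qualifier.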


Administrative User Interfaces - REST API

REST API is a set of resources, operations, and attributes that interact with the
Unisphere management functionality.

Example of REST API script

A storage administrator can perform some automated routines on the array using web browsers and programming and scripting languages.
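As an illustration only (not the script shown in the image above), the following hedged example queries basic system information over the REST API with curl. The endpoint and the required X-EMC-REST-CLIENT header follow the REST API Programmer's Guide, but verify them for your release; the credentials are placeholders:

curl -k -H "Accept: application/json" -H "X-EMC-REST-CLIENT: true" -u admin:MyPassword456! "https://192.168.1.230/api/types/basicSystemInfo/instances"

The -k flag skips certificate validation for a lab system with a self-signed certificate; the response is a JSON collection describing the system name, model, and software version.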

Deep Dive: For more details, read the Unisphere Management REST
API Programmer’s Guide available on the online support website.


Access Control - User Authentication

There are two user authentication scopes for Dell Unity storage systems: Local User Accounts and Domain-mapped User Accounts.

Local User Account

Local Users Management in Unisphere

A storage administrator can create local user accounts through the User Management section of the Unisphere Settings window.

These user accounts are associated with distinct roles. Each account provides username and password authentication only for the system on which it was created.

User accounts do not enable the management of multiple systems unless identical credentials are created on each system.
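As a hedged illustration, a local account can also be created from the Unisphere CLI with the /user/account object. The account name, passwords, and role below are made-up values, and the exact role keywords should be checked against the CLI help for your release:

uemcli -d 192.168.1.230 -u Local/admin -p MyPassword456! /user/account create -name operator1 -type local -passwd OperatorPass789! -role operator

The -type qualifier distinguishes local accounts from LDAP-mapped users and groups, and -role assigns one of the management roles described later in this section.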


LDAP

Configuration of LDAP server access in Unisphere Settings

With the domain-mapped user accounts method, access to a Lightweight Directory Access Protocol (LDAP) domain must be configured in the Directory Services.

Once the configuration is set, a storage administrator can create LDAP users or LDAP groups in the User Management section of the Unisphere Settings window.

These accounts use the username and password that are specified on an LDAP domain server. Integrating the system into an existing LDAP environment provides a way to control user and user group access to the system through Unisphere CLI or Unisphere.
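A hedged sketch of mapping an existing LDAP account after the directory service is configured; the domain user name and role are illustrative placeholders:

uemcli -d 192.168.1.230 -u Local/admin -p MyPassword456! /user/account create -name ldapadmin1 -type ldapuser -role administrator

With -type ldapuser (or ldapgroup for a directory group), no -passwd is supplied; authentication is delegated to the LDAP domain server at login.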


The concept of a storage domain does not exist for Dell Unity systems.
There is no global authentication scope.
User authentication and system management operations are performed over the network using industry-standard protocols:
• Secure Sockets Layer (SSL)
• Secure Shell (SSH)


Access Control - Default User Accounts

Dell Unity storage systems have factory default management and service user
accounts. Use these accounts when initially accessing and configuring Unity.

These accounts can access both the Unisphere and Unisphere CLI interfaces but have distinct privileges for the operations they can perform.

During the initial configuration process, it is mandatory to change the passwords for
the default admin and service accounts.

• Management account (username: admin, default password: Password123#): Perform management and monitoring tasks that are associated with the storage system and its storage resources. Depending on the role type, these accounts have administrator privileges for resetting default passwords, configuring system settings, creating user accounts, and allocating storage.

• Service account (username: service, default password: service): Perform specialized service operations such as collecting system service information, restarting management software, resetting the system to factory defaults, and so on. You cannot create or delete storage system service accounts. You can reset the service account password from Unisphere.

Tip: You can reset the storage system factory default account
passwords by pressing the password reset button on the storage
system chassis. Read the Unisphere Online Help and the Hardware
Information Guide for more information.

Access Control - Role-Based Administration

For environments with more than one person managing the Unity system, multiple
unique administrative accounts can be created.
• Different roles can be defined for those accounts to distribute administrative
tasks between users.
• Unisphere accounts combine a unique user name and password with a specific
role for each identity.
• The specified role determines the types of actions that the user can perform
after login.

The supported Dell Unity management user roles are described below.

• Administrator (default): Full administrative privileges for storage configuration and operations.
  − Perform the system initial configuration, edit system settings, and manage user accounts.
  − Create, modify, and delete storage resources, and upgrade system software.
• Storage Administrator: View the storage system data, edit the Unisphere settings, use the Unisphere tools, and create, modify, and delete storage resources.
• Security Administrator: Operator privileges for storage operations plus full security privileges for managing the Dell Unity user accounts.
• Operator: View Unisphere system and storage status information.
• VM Administrator: View and monitor basic storage components of the Dell Unity storage system through vCenter with VASA.

Centralized Management - Unisphere Central

Overview
Unisphere Central theory of operations: a remote computer accesses Unisphere Central, which runs as a virtual appliance (the Unisphere Central server) on an ESXi host and monitors supported Dell storage arrays and hosts.

Storage administrators can monitor Dell Unity systems (physical models and
UnityVSA) using Unisphere Central.

Unisphere Central is a centralized application that enables administrators to remotely monitor the status, activity, and resources of multiple storage systems that reside on a common network.

The Unisphere Central server is a vApp deployed in a VMware environment from an OVF template downloaded from the support website.
• When deploying the OVF template, you can assign an IP address for the
Unisphere Central server.
• This operation can be performed within vCenter or in the console of the VM on
an ESXi host.

Storage administrators can remotely access the application from a client host, and
check their storage environment.

Interface Navigation

Unisphere Central interface showing all the monitored Unity systems (Navigation Pane on the left, Main Page on the right)

Administrators use a single interface to rapidly access the systems that need
attention or maintenance.
• Unisphere Central server obtains aggregated status, alerts and host details
from the monitored systems.
• The server also collects performance and capacity metrics, and storage usage
information.

The Navigation Pane on the left has the Unisphere Central options for filtering and
displaying information about the monitored storage systems.

The application displays all information for options selected from the navigation
pane on the Main Page. In the example, the Systems page shows the storage
systems the instance of Unisphere Central monitors.

The Unisphere Central user interface is built on HTML5 technology and supports a wide range of browsers2.

Configuration

To start monitoring a Dell Unity system, the storage administrator must configure
the storage array to communicate with Unisphere Central.

Unisphere Settings steps to enable system monitoring through Unisphere Central

2 The compatible web browsers are: Google Chrome v33 or later, Microsoft Edge, Mozilla Firefox v28 or later, and Apple Safari v6 or later.

Open the Unisphere Settings window, and then go to the Management section.
1. Select Unisphere Central.
2. Select Configure this storage system for Unisphere Central.
3. Enter the IP address of the Unisphere Central server.
− If the security policy on the Unisphere Central server was set to manual, no
further configuration is necessary.
4. Select the Use additional security information from Unisphere Central if the
security policy on the server is set to Automatic.
− Then retrieve the security information from the server.
5. Enter the security information configured in the Unisphere Central server, and click Apply to save the changes:

a. Type the Unisphere Central Certificate Hash.
b. Type and confirm the eight-character Challenge Phrase.

Deep Dive: For more information, read the latest white paper on
Unisphere Central available in the product support site.

Centralized Management - CloudIQ

Overview

CloudIQ theory of operations: the Dell Unity system collects metrics at predetermined intervals (alerts and performance every 5 minutes, configuration and capacity every hour, and data collects daily) and sends the data through the SRS client and infrastructure, across firewalls and the public Internet, to the Dell environment, where Dell Customer Support can access it. The transfer is authorized and the files are encrypted, and CloudIQ access supports a diversity of clients and browsers.

CloudIQ is a Cloud-based Software-as-a-Service (SaaS) solution used to monitor and service Dell Unity systems.
• CloudIQ is a Dell-hosted service that uses data collected by the Secure
Connect Gateway.
− Secure Connect Gateway, also known as Secure Remote Services (SRS), is a secure, bi-directional connection between the Dell products in user environments and the Dell Support infrastructure.
− Configured Unity storage systems collect several metrics at various
predetermined time intervals and send the information to the SRS
infrastructure.
• The CloudIQ functionality is embedded into the Dell Unity OE code and is free of cost, requiring no license.

Administrators can monitor supported systems and perform basic service actions.
• The feature enables access to near real-time analytics from anywhere at any time.
• The CloudIQ interface is accessible using a web browser from any location.

Interface Navigation

CloudIQ Interface Overview page

Navigation through the CloudIQ interface is done by selecting a menu option on the
left pane. The selected information is displayed on the right pane.

CloudIQ provides dashboard views of all connected systems, displaying key information such as performance and capacity trending and predictions.

The Overview page widgets provide storage administrators with a quick check of
the overall systems.
• Default widgets include system health scores, cybersecurity risks, system alerts,
capacity approaching full, reclaimable storage, performance impacts, and
systems needing updates.
• The System Health widget provides a summary of the health scores of all the
monitored systems.

System Health

Health scores are color-coded: RED status is POOR (0-69), AMBER status is FAIR (70-89), and GREEN status is GOOD (90-100). The score is assessed across the Component, Configuration, Capacity, Performance, and Data Protection categories.

System Health page with filtered view of the monitored Unity systems

The System Health page is broken into the Storage, Networking, HCI, Service, and Data Protection system categories.

The page uses the proactive health score feature to display the health of a single or
aggregated systems in the format of:
• A score that is shown as a number.
− Systems are given a score with 100 being the top score and 0 being the
lowest.
• A color that is associated with the score.

CloudIQ services running in the background collect data on each of the five
categories: components, configuration, capacity, performance, and data protection.
• A set of rules is used to determine the impact point of these core factors on the
system.
• The total score is based on the number of impact points across the five
categories.

Individual System

CloudIQ individual system summary view (category with impact points highlighted)

Selecting an individual system from the System Health view opens a summary
page with key system information divided into tabs: health, configuration, capacity,
and performance.

The summary landing page is the Health tab showing the system health score and
the total of issues that are found in each category.

From the Health tab, additional information about the cause can be retrieved by
selecting the affected category.

In this example, the Unity system Test_Dev is selected and the Health tab shows a
score of 60 (status = poor).
• The RED color is used to identify the system status and category with impact
points that are causing the condition.
• The problem is at the storage capacity level. There are three issues reported:
three storage pools are full.

Storage Pool

CloudIQ pool properties page (impact points highlighted)

To view details about the pools provisioned by the monitored system:
• Select the Pools option under the Capacity section.
• Select the pool from the list and the Properties page for the individual resource
is displayed.

The Properties page shows detailed information including the system health status
at the capacity level, with the score impact points and the number of issues.

The Capacity tab displays the used and free capacity in the pool, and the time to
reach a full state.

The Performance tab enables a storage administrator to view the top performing storage objects.

The STORAGE tab at the bottom of the page shows storage objects that are created from the pool.

The VIRTUAL MACHINES tab shows information about the VMs using the
provisioned storage resource.

The DRIVES tab shows the number of drives, drive types, and capacity that is used
to create the storage pool.

Basic System Settings

Unisphere Settings

From the Unisphere Settings window, a storage administrator can configure the
global settings and parameters for a Dell Unity system.

Unisphere Settings window

Open the Settings configuration window by selecting its icon from the top menu.
Supported operations include:
• Monitor installed licenses.
• Manage users and groups that can access the system.
• Configure the network environment.
• Enable centralized management.
• Enable logging of system events to a remote log server.
• Start and pause FAST suite feature operations.
• Register support credentials, and enable Secure Remote Services.
• Create IP routes.
• Enable CHAP authentication for the iSCSI operations.
• Configure email and SNMP alerts.

Configure Unisphere Basic Settings - Licenses

Unisphere Settings Licenses Information

The first screen of the Settings window is the License Information page.
1. Select a feature license from the list.
2. A description of the feature is displayed.
3. To obtain a product license, select Get License Online.
• Access the product page on the support site and download the license file.
• Transfer the license file to a computer with access to the storage system.
4. To unlock the Dell Unity features, select Install License.

• Review and accept the software license and management agreement.
• Locate and upload the product license file from the local computer with access to the storage system.
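
The license file can also be installed from the Unisphere CLI. A minimal sketch, assuming the -upload action and the license upload type (the file path is illustrative; verify the exact syntax in the Unisphere CLI User Guide):

   # Upload and install a product license file from the local computer.
   uemcli -d 192.168.1.230 -u Local/admin -p Password123# -upload -f /tmp/unity_license.lic license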

Configure Unisphere Basic Settings - System Time

Unisphere Settings System Time and NTP

The Dell Unity platform supports two methods for configuring the storage system
time:
• Manual setup
• Network Time Protocol (NTP) synchronization

To configure Unity to synchronize its time with NTP servers:
1. Select the System Time and NTP option under the Management section.
2. Select the Enable NTP synchronization radio button.
3. Then select Add to launch the Add NTP Server window.

4. Enter the IP address or the name of an NTP server and select Add on the
dialog box.
5. The NTP server is added to the list. Select Apply to save the changes.
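
The same configuration can be scripted from the Unisphere CLI. A minimal sketch, assuming the /net/ntp/server object path and -addr attribute (the server addresses are illustrative):

   # Add two NTP servers, then confirm the configuration.
   uemcli -d 192.168.1.230 -u Local/admin -p Password123# /net/ntp/server set -addr "192.168.1.10,192.168.1.11"
   uemcli -d 192.168.1.230 -u Local/admin -p Password123# /net/ntp/server show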

Configure Unisphere Basic Settings - Schedule Time Zone

The Dell Unity platform supports a time zone configuration for snapshot schedules
and asynchronous replication throttling.
• The schedule time zone applies to system-defined and user-created snapshot schedules.

Unisphere Settings Schedule Time Zone

To configure a local time zone:
1. Select Management > Schedule Time Zone.
2. Open the drop-down list and select the time zone that matches your location.

• The selected schedule time zone reflects Coordinated Universal Time (UTC)3 adjusted by an offset for the local time zone.
3. Select Apply. A disclaimer message is displayed with a warning about the
impact of the change.
4. Select Yes to confirm the time zone change. The new schedule time zone is set
for the system.

Important: Existing snapshot schedules are not updated to the same absolute time when the time zone is changed. After changing the Schedule Time Zone, you must check whether your snapshot schedules must be updated.

3 Unity systems use Coordinated Universal Time (UTC) for the time zone setting (operating system, logs, FAST VP, and so on).

Configure Unisphere Basic Settings - Domain Name Servers

Some Dell Unity features rely on network name resolution configuration to work.
For example, Unisphere alert settings.

Unisphere Settings DNS Server

To manually add the network address of DNS servers the storage system uses for
name resolution:
1. Select the DNS Server option under the Management section.
2. Select Configure DNS server address manually.
3. To open the Add DNS Server configuration window, select Add.
4. Enter the IP address of the DNS server and select Add.
5. The DNS server entry is added to the list. Select Apply to submit and save the
changes.
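
A CLI equivalent might look like the following sketch, assuming the /net/dns/config object path and -nameServer attribute (the addresses are illustrative; verify against the Unisphere CLI User Guide):

   # Manually configure two DNS name servers.
   uemcli -d 192.168.1.230 -u Local/admin -p Password123# /net/dns/config set -nameServer "192.168.1.20,192.168.1.21"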

If running Unity on a network which includes DHCP and DNS servers, the system
can automatically retrieve one or more IP addresses of DNS servers.
• Select Obtain DNS server addresses automatically on the Manage Domain Name Servers page.

Configure Unisphere Basic Settings - Management Port Network Address

Administrators can view and modify the hostname and the network addresses assigned to the Dell Unity storage system.

Unisphere Settings Unity network address

The storage system supports both IPv4 and IPv6 addresses. Each IP version has radio buttons to disable the configuration or to select a dynamic or static configuration.
• If running the storage system on a dynamic network, the management IP
address can be assigned automatically by selecting the proper radio button.
• If enabling ESRS support for the Unity system, then Dell Technologies
recommends that a static IP address is assigned to the storage system.

To view or modify the network configuration of the Unity system management port:
1. Expand the Management section and select the Unisphere IPs option.
2. To manually configure an IPv4 network address, select Use a static IPv4 address (the default option).
3. Enter or modify the network address configuration: IP address, Subnet Mask,
Gateway.
4. Select Apply to submit and save the changes.
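
A CLI equivalent might look like the following sketch, assuming the /net/if/mgmt object path and attribute names (the address values are illustrative; verify before use, because changing the management address can disconnect the current session):

   # Assign a static IPv4 address to the management interface.
   uemcli -d 192.168.1.230 -u Local/admin -p Password123# /net/if/mgmt set -ipv4 static -addr 192.168.1.230 -netmask 255.255.255.0 -gateway 192.168.1.1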


Configure Unisphere Basic Settings - Failback Policy

Unisphere Settings failback policy

In Dell EMC Unity XT systems with dual SPs, when one of them fails or is rebooting, the NAS servers hosted on that SP fail over to the peer SP.

Failback to the recovered SP can be automatic or manual, depending on the failback policy set on the storage system.

To view or modify the failback policy of the Unity system:
1. Select the Failback Policy option under the Management section.
2. Disable or enable the automatic failback policy (the default is enabled).

− When the option is disabled, you can manually fail back all NAS servers, by
selecting Failback Now.
3. Select Apply to submit and save the changes. The Failback Policy is set for the
storage system.

Support Configuration

Configure Support Settings - Proxy Server

Proxy server configuration enables the exchange of service information for the Dell
Unity systems that cannot connect to the Internet directly.

Unisphere Settings Proxy Server Configuration

To configure the Proxy Server settings, the user must open the Settings window
and perform the following:
1. Expand the Support Configuration section, and select Proxy Server.
2. Select the Connect through a proxy server checkbox.
3. Select the communication protocol: HTTP4 or SOCKS5.
4. Enter the IP address of the Proxy Server, and the credentials (username and
password) if the protocol requires user authentication.
• The SOCKS protocol requires user authentication.
5. Select Apply to save the changes.

Once configured, the storage administrator can perform the following service tasks through the proxy server connection:
• Configure and save support credentials.
• Configure Secure Connect Gateway.
• Display the support contract status for the storage system.
• Receive notifications about support contract expiration, technical advisories for
known issues, software and firmware upgrade availability, and the Language
pack update availability.

4 The HTTP (nonsecure) protocol supports all service tasks including upgrade
notifications. This option uses port 3128 by default.
5 The SOCKS (secure) protocol should be selected for IT environments where HTTP is not allowed. This option uses port 1080 by default and does not support the delivery of notifications for technical advisories, software, and firmware upgrades.

Configure Support Settings - Dell Support Credentials

Support credentials are used to retrieve the customer's current support contract information and keep it updated automatically.
• The data provides access to all the options to which the client is entitled on the
Unisphere Support page.

Unisphere Settings Dell support credentials configuration

To configure the support credentials, the user must open the Settings page and
perform the following:
1. Expand the Support Configuration section, and select the Dell EMC Support
Credentials option.

2. Then enter the username and password in the appropriate fields. The credentials must be associated with a support account.
3. Select Apply to commit the changes.

Support credentials are required to configure Secure Connect Gateway.


• The service provides Dell support with direct access to the storage system (through HTTPS or SSH).
• Dell Support personnel can perform troubleshooting on the storage system and
resolve issues more quickly.

Configure Support Settings - Contact Information

Up-to-date contact information ensures that Dell support has the most accurate
information for contacting the user in response to an issue.

Unisphere Settings contact information configuration

To configure the contact information, perform the following:


1. Expand the Support Configuration section, and select Contact Information.
2. Then type the contact information details on the proper fields.
3. Select Apply to commit the changes.

Tip: The user receives system alert reminders to update the contact
information every six months.

Secure Connect Gateway (SCG)

Overview

Secure Connect Gateway is a Dell Technologies Services solution which consolidates the capabilities of the SupportAssist Enterprise (SAE) and Secure Remote Services (SRS) connectivity platforms. The solution provides a secure, bi-directional connection between the Dell products in user environments and the Dell Support infrastructure.

Secure Connect Gateway architecture: Unity XT systems and UnityVSA in the user environment connect, either directly over HTTPS (the direct connect version) or through the Secure Connect Gateway Virtual Appliance Edition, across firewalls and the public Internet to the SCG infrastructure in the Dell environment, with two-way (outbound and inbound) communication to Dell Customer Support.

Benefits:
• Dell support can remotely monitor configured systems by receiving system-
generated alerts.
• Support personnel can connect into the customer environment for remote
diagnosis and repair activities.
• Provides a high-bandwidth connection for large file transfers.
• Enables proactive Service Request (SR) generation and usage license
reporting.
• The service operates on a 24x7 basis.

Secure Connect Gateway is implemented as a stand-alone virtual appliance and as a direct-connect (integrated) version for selected Dell hardware.

Deployment Options

Secure Connect Gateway options available with the Dell Unity platform include an
embedded version and the Secure Connect Gateway virtual appliance edition.

The embedded version provides direct connectivity integrated into the Dell Unity XT
storage system.
• The Secure Connect Gateway software is embedded into the Dell Unity XT
operating environment (OE)6 as a managed service.
• The embedded version uses an on-array Docker container which enables only
the physical system to communicate with Support Center.
• The storage administrator can configure one way (outbound) or two way
(outbound/inbound) communication.

The Secure Connect Gateway Virtual Appliance Edition is a centralized gateway version that is installed as an off-array virtual machine.
• Secure Connect Gateway virtual appliance servers can be configured in a
cluster for service resiliency.
• Dell Unity XT or Unity VSA systems are added to the Secure Connect Gateway
cluster.
• The storage administrator provides the IP address of the primary and secondary
Secure Connect Gateway servers.
• A single secure connection (two-way communication) is established between
the Support Center servers and the off-array Secure Connect Gateway.

6 The Dell EMC Unity OE is responsible for persisting the configuration and the certificates that are needed for Secure Connect Gateway to work.

Communication

There are two remote service connectivity options for the Integrated Secure
Connect Gateway version:
• Outbound/Inbound (default)
− This option enables remote service connectivity capabilities for remote transfer to and from the Support Center with the Dell Unity XT system.
− Ports 443 and 8443 are required for outbound connections.
− Two-way Secure Connect Gateway is the recommended configuration.
• Outbound only

− This option enables remote service connectivity capability for remote transfer to the Support Center from the Dell Unity XT system.
− Ports 443 and 8443 must be opened for outbound connections.
− One-Way Secure Connect Gateway is available for users who have security
concerns but still want to take advantage of CloudIQ.
For the Centralized version, the administrator must ensure that port 9443 is open
between the SCG virtual appliance server and the Unity system.
• For outbound network traffic, port 443 must be open.
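
Before configuring the service, a generic TCP probe from a management host can spot-check that the required ports are reachable. The hostnames below are placeholders, not actual Dell endpoints:

   # Probe the SCG appliance port and the outbound HTTPS port with netcat.
   nc -vz scg-appliance.example.com 9443
   nc -vz support-endpoint.example.com 443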

Comparison

The table shows a comparison of the two deployment options.

Name                            | Centralized (Virtual Appliance Edition) | Integrated (Embedded)
Feature set                     | Same                                    | Same
Number of devices               | Multiple                                | 1
Use of external VM is required  | Yes                                     | No
Management interface            | Native                                  | Unisphere
Internet connectivity required  | Gateway only                            | Every system
Ports used for inbound traffic  | 9443                                    | 80
Ports used for outbound traffic | 443                                     | 443 and 8443

Configure Support Settings - Secure Connect Gateway

Storage administrators can view the status and enable Secure Connect Gateway
from the Secure Remote Services page of the Unisphere settings.

Unisphere Settings Secure Connect Gateway configuration: the page offers options to configure Integrated or Centralized Secure Connect Gateway for the storage system, and to verify the system's network connectivity and support credentials.

To verify the Secure Connect Gateway configuration, expand the Support Configuration section, and select EMC Secure Remote Services.
• Run a readiness check to verify that the system is properly set up for configuring Secure Connect Gateway. Running this operation is optional, but highly recommended.
• Select Configure to launch the Configure Secure Connect Gateway wizard.

As discussed before, the remote service options available to send storage system information to the Support Center for remote troubleshooting are Integrated and Centralized Secure Connect Gateway.

For proper functionality:


• At least one DNS server must be configured on the storage system.
• The storage system must have unrestricted access to Support Center over the
Internet using HTTPS (for nonproxy environments).
• An Online Support full-access account is required.
  − User contact information and specific credentials must be associated with the site ID, which is associated with the system serial number.
− If there is a problem with the user Online Support account, support
personnel can help with the configuration using their RSA credentials.

Secure Connect Gateway - Readiness Check

Readiness Check

Unisphere Settings Secure Connect Gateway configuration

To verify if the storage system is ready for Secure Remote Services configuration,
select the Readiness Check option on the Secure Remote Services page.

Dell Technologies recommends that you perform a readiness check before configuring Secure Remote Services. The check verifies the system network connectivity and that the support credentials provided to configure Secure Remote Services are valid.

In the Secure Connect Gateway Readiness Check window, select the Secure Connect Gateway deployment option to configure.

Integrated

To verify if the Unity XT storage system is ready for an Integrated Secure Connect
Gateway deployment, select Integrated.
• Select the checkbox to configure two-way communication, or clear it for one-way communication, and advance to the next step.

ESRS Readiness Check window with integrated option selected

• Before the readiness check runs, the end user license agreement (EULA) must
be accepted. Select Accept license agreement, and advance to the next step.

ESRS Readiness Check window with license agreement step

• After the readiness check runs, a results page is displayed.

− If no errors are found, a successful message is displayed and you can select
Configure ESRS to close the check and advance to the configuration.
− However, if errors are displayed, a Check Again button is displayed and you must resolve the issues before running a new check.

ESRS Readiness Check window showing the results of the readiness check

Centralized

After selecting the centralized option, enter the network address of the primary and
secondary Secure Connect Gateway servers.

ESRS Readiness Check window with centralized option selected

• After the readiness check runs, a results page is displayed.

− If no errors are found, a successful message is displayed and you can select
Configure ESRS to close the check and advance to the configuration.
− However, if errors are displayed, a Check Again button is displayed and you must resolve the issues before running a new check.

ESRS Readiness Check window showing the results of the readiness check

Secure Connect Gateway Configuration - Integrated

Integrated SCG

To configure Integrated Secure Connect Gateway on the storage system, the user
must select Integrated on the Secure Remote Services page.
• Select the checkbox to configure two-way communication, or clear it for one-way communication, and advance to the next step.

Configure ESRS wizard window with integrated option selected

Network Check

If a proxy server has been configured for the storage system, the server information
is displayed on this page.
• To add a proxy server or modify the configuration, select the pencil icon beside the Connect Through a Proxy Server option.

Module 1 Course Introduction and System Administration

© Copyright 2022 Dell Inc. Page 73


Support Configuration

Configure ESRS wizard window showing the proxy configuration determined by the network check.

Contact Information

Verify the customer contact information and make any edits if required. Select NEXT to advance to the next step.

Configure ESRS wizard window showing the contact data information

Email Verification

In the email verification process, select Send access code to start a request for an access code.
• This option is unavailable if valid support credentials are not configured.

Configure ESRS wizard with Email verification

A message is sent to the contact email with a generated 8-digit PIN access code, which is valid for 30 minutes from the time it is generated.

This code must be entered in the Access code field.

Select Next to advance to the next step.

RSA Credentials

If there is a problem with the user Online Support account, Dell support can help with the configuration by selecting the Alternative for Support Personnel only option.

Then enter the RSA credentials and site ID in the appropriate fields to proceed with the Secure Connect Gateway configuration. Select Next to continue.

Configure ESRS wizard customer account validation using RSA credentials

The system starts initializing the Secure Connect Gateway. The Support Personnel
RSA credentials are requested once again to finish the configuration. A new token
code must be entered (only if the Alternative for Support personnel was invoked).

Results

The results page notifies that Secure Connect Gateway should be connected to the
Support Center in 15 minutes. The user can monitor the status of the Secure
Connect Gateway connectivity on the Service page, and configure Policy Manager
while waiting for Secure Connect Gateway to connect.

Configure ESRS wizard results page

SCG Configured

The Secure Remote Services page shows the status of the connection and which type of Secure Connect Gateway configuration is saved to the system.

Unisphere Settings Secure Connect Gateway configuration

Secure Connect Gateway Configuration - Centralized

Centralized SCG

Select the Centralized option in the Secure Remote Services page.
• If the Secure Connect Gateway End User License Agreement (EULA) has not yet been accepted, the license agreement is the next step.

Configure ESRS wizard window with Centralized option selected

Specify the Primary gateway network address of the Secure Connect Gateway
virtual appliance server that is used to connect to the Dell Enterprise.
• Ensure that port 9443 is open between the server and the storage system.

RSA credentials can be used for Primary Gateway configuration without a Customer Support account.
• This alternative enables the Secure Connect Gateway configuration while
support account credentials are being created and validated on the backend.

• If a Secondary Gateway network address was also entered in the configuration, then the RSA credentials are required one more time.
• The RSA credentials that were used for the primary gateway must also be provided to complete the configuration of the secondary gateway.

After you click Next, the system starts initializing Secure Connect Gateway.

Results

The results page notifies that Secure Connect Gateway should be connected to the Support Center in 15 minutes. The user can monitor the status of the Secure Connect Gateway connectivity on the Service page.

Configure ESRS wizard results page

SCG Configured

The Secure Remote Services page shows the status of the connection and which type of Secure Connect Gateway configuration is saved to the system.

Unisphere Settings Secure Connect Gateway configuration

Activity: Unisphere Tour

During this lab, you will:


• Explore the Unisphere UI dashboard,
preferences, and help options.
• View the system components and
check storage system health.
• View the system settings page, and
explore the sections.
• Create a user account with an
associated role.

Unisphere Alerts and Events Monitoring


Unisphere System Alerts

Alerts are usually events that require attention from the system administrator.
Some alerts indicate that there is a problem with the Dell Unity system. For
example, you might receive an alert telling you that a disk has faulted, or that the
storage system is running out of space.

Unisphere UI with dashboard view and the three methods used for alerts monitoring

Alerts are registered to the System Alerts page in Unisphere. Access the page using one of three methods:
• Select the link on the top menu bar.
• Select the option on the navigation pane.
• Select notification icons on the dashboard view block.

The view block on the dashboard shows an icon with the number of alerts for each
recorded severity category.
• The link on these icons opens the Alerts page, showing the records filtered by
the selected severity level.

System Alerts Severity Levels

System alerts with their severity levels are recorded on the System Alerts page.
Logging levels are not configurable.

The following list explains the alert severity levels, from least to most severe.

• Information: An event has occurred that does not impact system functions. No action is required.
• Notice: An event has occurred that does not impact system functions. No action is required.
• Warning: An error has occurred that the user should be aware of but does not have a significant impact on the system. For example, a component is working, but its performance may not be optimum.
• Error: An error has occurred that has a minor impact on the system and should be remedied, but does not need to be fixed immediately. For example, a component is failing and some or all of its functions may be degraded or not working.
• Critical: An error has occurred that has a significant impact on the system and should be remedied immediately. For example, a component is missing or has failed and recovery may not be possible.

Tip: Two of these severity levels are identified by the same icon and
refer to events that require no user intervention: Information and Notice.
Information alerts report the status of a system component or changes
to a storage resource condition. Notice alerts normally report the status
of a system process that is triggered by service commands.

System Alerts States

There are multiple ways to review the health of a Dell Unity system. In the
Unisphere UI, the storage administrator can review the System Health view block
on the dashboard, and the System View page. The user can also check the Alerts
page for resolved issues.

The Alerts page shows the event log with all alerts that have occurred in the
system.
• Alert states are used to help the user determine which records are current, and
which records are resolved.
• An alert state changes when the software OE is upgraded, the error condition is resolved, or the alert is repeating.

• Active_Manual: The alert is active and must be manually cleared. A user must deactivate the alert to mark it Inactive once the condition is resolved.
• Active_Auto: The alert is active but is automatically cleared when the issue is resolved. The alert is marked Inactive automatically once the condition is cleared.
• Inactive: The alert is no longer active because the alert condition has been resolved.
• Updating: The alert is transitioning between the other states: Active_Auto to Inactive, or Active_Manual to Inactive.

Manage Alerts

The System Alerts page in Unisphere is accessed by selecting Alerts on the navigation pane under the Events section.

Unisphere System Alerts page (Filter control highlighted)

In Unisphere, the Alerts page is automatically filtered by default to show only the
records in Active and Updating states. Records in an Inactive state are hidden.

In the example, the records were also filtered to show only the log entries already
acknowledged by the user.

Active_Manual alerts must be manually deactivated by an Administrator. To deactivate an alert in Unisphere:
1. Select an alert that is in an Active_Manual state.
2. Select the Deactivate button.
3. A Confirm Deactivate dialog box is shown. Select Deactivate to continue.

The dialog box is closed, and the record entry is marked as inactive. Because of
the page filtering, the record entry is not displayed in the list of entries.

View Alerts Details

To view detailed information about a system alert, select the alert from the list of
records of the Alerts Page.

Unisphere Alerts page with details of a selected alert

Details about the selected alert record are displayed in the right pane. The
information includes:
• Time the event was logged.
• Severity level
• Alert message
• Description of the event
• Acknowledgement flag
• Component affected by the event
• Status of the component

The example shows the details about the Alert_2721. Observe that the current
status of the alert is Degraded, and the state is Active_Auto. The alert will
transition to Inactive once the issue is resolved.

Unisphere can be configured to send the system administrator alert notifications. These notifications are sent through an email or through an SNMP message.

Configure Alert Notifications - Email

Email Address

A system administrator can configure Unisphere to send alert notifications in an email. Email alerts are used only for internal communication. No service requests are created based on the configured email alerts. Only the configuration of Secure Connect Gateway provides interaction with Dell support.

Unisphere Settings Alerts section Email and SMTP

In Unisphere, open the Settings configuration window and expand the Alerts
section:
1. Select Email and SMTP, under the Alerts section.
2. On the Specify Email Alerts and SMTP configuration, click the Add icon.

3. The Alert Email Configuration window opens. Enter the email address that receives the notification messages.
4. Select the severity level from the drop-down list, then select OK to save the
configuration.

• The dialog box closes, and the new email address is displayed in the list.
The example shows the configuration of the email address epratt@hmarine.test
as a recipient for notifications about issues with Notice and above severity levels.

SMTP Configuration

Unisphere Settings Alerts section Email and SMTP

On the SMTP Configuration section:


1. Type the IP address of the Simple Mail Transfer Protocol (SMTP) server that is
used to send email messages.
2. Optionally, bypass the global proxy server settings that are typically used for
SMTP email messages by checking the appropriate box.
3. Select the Encryption Level (SSL method) for the email server.
4. Specify the Authentication Type, and enter the authentication credentials.
5. Select Apply to commit the changes.

• Optionally select Send Test Email to verify that the SMTP server and
destination email addresses are valid.
• The Send Test Email button is only available after changes to the email
configuration.

Configure Alert Notifications - SNMP

SNMP Target

A system administrator can configure Unisphere to send alert notifications through a Simple Network Management Protocol (SNMP) message, known as a trap.

Unisphere Settings Alerts section SNMP

Configure the SNMP trap destination targets in Unisphere:
1. From the Settings window, select SNMP from the Alerts section.
2. On the Manage SNMP Alerts page, select the SNMP version.
− Dell Unity supports SNMP v2c and SNMP v3.0.

3. Select + (Add). The SNMP target window opens.
4. Enter the network name or IP address.
• For SNMP v2c, specify a community.

SNMP v3.0

Configuration of a v3 SNMP trap

For the configuration of version 3.0 SNMP traps:


1. Type the user name to authenticate to the SNMP manager.
2. Select the authentication protocol for the traps from the drop-down: MD5, SHA,
or none.
3. For the MD5 and SHA selections, type and confirm the password.
4. Select the privacy protocol (AES, DES, or none).
• You can only specify the privacy protocol that is used to encode trap
messages when you edit an existing destination.
5. If required, type and confirm the password.
6. Select OK to save the SNMP target. The new entry is displayed in the list.

Severity Level

Unisphere Settings Alerts section SNMP

Configure the severity level of the alert notifications:
1. Select from the drop-down list the severity level for the alert notifications.
2. Click Send Test SNMP Trap to verify that the SNMP configuration is valid.
3. Select Apply to commit the changes.

System Jobs Monitoring

Storage administrators can view information about all jobs, including the ones that
are active, complete, or failed. From the Jobs page the administrator can also
delete a completed or failed job, and cancel an active job (queued or running).

Unisphere Jobs page

To view and manage the active and completed jobs in Unisphere:


1. Select Jobs, under Events.
2. To view the properties of a job, select it from the list.
3. Select the Details icon.

Select the Jobs icon on the top menu to quickly view the jobs in progress.
• The Jobs icon also helps determine the number of active jobs: queued or
running.
• The system polls for active jobs every 10 seconds and updates the count.

When a job is complete, a notification similar to system alerts is displayed on the screen. The user can select the notification message to access the Jobs page.

Inactive jobs older than seven days are automatically deleted from the list. Only the
most recent 256 jobs are listed. Inactive jobs have a status of completed or failed.

System Logs Monitoring

Administrators can also view information about the Dell Unity system logged events
by selecting Logs, under Events.
• Unisphere immediately displays real time changes to the storage system.
• By default, the logged events are sorted by the time the event was posted, from
most recent to earlier.

Unisphere Logs page

The storage administrator can also customize the view and sort, filter, and export
the data. The event log list can be sorted by Date/Time: ascending or descending.

A link to the Remote Logging page in the Unisphere Settings window enables the
administrator to configure the logging of user/audit messages to a remote host.

Add Remote Logging Configuration

Remote Logging

Unisphere Settings Remote Logging Configuration

The Remote Logging setting enables a Dell Unity system to log user/audit
messages to a remote host. A remote host running syslog must be configured to
receive logging messages from the storage system before the user can enable this
feature in Unisphere.

To view and add a new remote logging configuration for another remote host,
perform the following steps:
1. Open the Unisphere Settings window, and select Remote Logging under the
Management section.
2. Select the Add icon. (Only a maximum of five remote logging configurations are
supported. If five configurations are already configured, the Add icon is
disabled.)

• The Add Remote Logging window opens.

Add Configuration

Add Remote Logging configuration window

1. Check the Enable logging to a remote host check box.


• Specify the network address of the new host that receives the log data
(include port 514 in the address).
2. Select the component that generates the log messages to record.
3. Select the severity level of the log entries that are sent to the remote host.
• The severity levels are displayed in descending order in the related drop-
down list.

4. Then select the protocol used to transfer log information (UDP or TCP).
5. Select OK to save the configuration.
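
The same configuration might be scripted from the Unisphere CLI. The following sketch assumes the /sys/rlog object path and attribute names, which should be verified against the Unisphere CLI User Guide before use (the host address is illustrative):

   # Enable remote logging of user/audit messages to a syslog host on UDP 514.
   uemcli -d 192.168.1.230 -u Local/admin -p Password123# /sys/rlog set -enabled yes -host 192.168.1.100:514 -protocol UDP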

Tip: In many scenarios, a root or administrator account on the receiving computer can configure the remote syslog server to receive log
information from the storage system. The configuration is set by editing
the syslog-ng.conf file on the remote computer. For more information
about setting up and running a remote syslog server, read the remote
computer operating system documentation.

Edit Remote Logging Configuration

Unisphere Settings Remote Logging Configuration

To view or modify a Remote Logging configuration, perform the following:


1. Select a remote logging configuration.
− You can edit the settings for remote logging to the first remote host (record
entry with no network address) or an existing configuration.
2. Select the Edit icon.
3. Make any necessary changes, then select OK to save the configuration:

− Unselect Enable logging to a remote host to disable remote logging.


− Change the network address of the host that receives the log data (include
port 514 in the address).

− Change the component that generates the log messages to be recorded.


o Kernel Messages - Messages that are generated by the operating
system kernel. These messages are specified with the facility code 0
(keyword kern).
o User-Level Messages - The default option. Messages that are generated by random user processes. These messages are specified with the facility code 1 (keyword user).
o Messages Generated Internally by syslogd - Messages that are
generated internally by the system logging utility—syslogd. These
messages are specified with the facility code 5 (keyword syslog).
− Change the severity level of the log entries sent to the remote host.
− Change the protocol used to transfer log information: UDP or TCP.

System Administration Key Points

1. User Interfaces and Access Control


a. Configuration and management of the Dell Unity family of storage systems is performed using three interfaces: Unisphere, UEMCLI, and REST API.
b. Access to Dell Unity XT systems is granted to defined and configured user
accounts (local or LDAP). The user accounts are role based.
c. Dell Unity XT systems can be monitored through the Unisphere Central and
CloudIQ applications.
2. Basic System Settings
a. Dell Unity XT system global settings and parameters, such as system time and DNS, can be configured from Unisphere Settings.
b. The Unisphere Settings window also provides options to configure the time
zone for snapshot schedules and asynchronous replication throttling.
c. The Dell Unity XT management port network address, and the failback
policy can be configured from the Unisphere Settings.
3. Support Configuration
a. A Proxy server can be configured to exchange service information for Dell
Unity XT systems that cannot connect to the internet directly.
b. Support credentials are used to retrieve the customer current support
contract information and keep it updated automatically.
c. Contact information ensures that Dell support has the most accurate
information for contacting the user in response to an issue.
d. Storage administrators can view the status and enable the Secure Connect
Gateway (Secure Remote Services) feature from the Support Configuration
section of Unisphere settings.
• There are two Secure Connect Gateway deployment options available
for the Dell Unity family of storage systems: centralized and integrated.
• There are two remote service connectivity options for Integrated Secure Connect Gateway: Outbound/Inbound or Outbound only.
4. Unisphere Alerts and Events Monitoring




a. Unisphere alerts are usually events that require attention from the system
administrator.
b. The alerts severity levels are categorized as Information, Notice, Warning,
Error, and Critical.
c. There are four states for alerts: Active_Manual, Active_Auto, Inactive, and
Updating.
d. Alert details provide time of the event, severity level, alert message,
description of the event, acknowledge flag, component affected by the
event, and status of the component.
e. Unisphere can be configured to send the system administrator alert notifications via email or through an SNMP trap.
f. Users can monitor when jobs are active, complete, or failed. The jobs page
shows the number of active jobs: queued or running. The system polls for
active jobs every 10 seconds and updates the active jobs count.
g. Unisphere can be configured to send log message entries of a determined
severity level to a remote server.

For more information, see the Dell Unity: Unisphere Overview and Secure Remote Support (SRS) Requirements and Configuration documents on the Dell Technologies Support site.



Storage Resources



Dell Unity XT Platform Supported Storage Resources

The Dell Unity XT platform provides storage resources that are suited for the needs
of specific applications, host operating systems, and user requirements.

Dell Unity XT platform supported storage resources and hosts/NAS clients: Exchange mail servers, database servers, Linux/UNIX hosts, network file servers, Windows users, and VMware hosts connect over iSCSI/FC, the storage management network, and the NAS network to block, file, and VMware datastore resources provisioned from a storage pool.

These storage resources are categorized as:


• Storage Pools: Dynamic or Traditional
• Block storage: LUNs, Consistency Groups, and Thin Clones
• File storage: NAS Servers, file systems, and shares
• VMware datastores: VMFS, NFS, and the vVol datastores

LUNs and Consistency Groups provide generic block-level storage to hosts and
applications.

• Hosts and applications use the Fibre Channel (FC) or the iSCSI protocol to
access storage in the form of virtual disks.



File systems and shares provide network access to NAS clients in Windows and
Linux/UNIX environments.
• Windows environments use the SMB protocol for file sharing, Microsoft Active
Directory for authentication, and the Windows directory access for folder
permissions.
• Linux/UNIX environments use the NFS protocol for file sharing and the POSIX
access control lists for folder permissions.

VMware datastores provide storage for VMware virtual machines.


• These datastores are accessed using the FC or iSCSI protocols (VMFS datastores) and the NFS protocol (NFS datastores).

vVol (Block) and vVol (File) datastores are another modality of supported VMware datastores. These storage containers store the virtual volumes, or vVols.

• vVol (File) datastores use NAS protocol endpoints, and vVol (Block) datastores use SCSI protocol endpoints, for I/O communication from the host to the storage system.



Unified Storage Pools

All storage resources on the Dell Unity XT platform are provisioned from unified
storage pools7.

Unified storage pool with shared storage resources: a single pool backs the NAS server root and configuration file systems, user and snapshot file systems, NFS and VMFS datastores, vVol (Block) and vVol (File) datastores, and LUNs.

7A storage pool is a collection of drives that are arranged into an aggregate group,
with some form of RAID protection applied.



Storage pools are dedicated to creating storage resource objects.
• The storage pools provide optimized storage for a particular set of applications
or conditions.
• Pools are created using the SAS Flash drives, SAS drives, and NL-SAS drives
which are available in the storage system.

− A pool can contain a few disks or hundreds of disks.


The Dell Unity family of storage systems shares pools with all the resource types.
• File systems, LUNs, and the VMware datastores can be provisioned out of the
same pools.
• There is no need to create separate pools per resource type: Block or File.

A storage administrator can modify the pool configuration to improve efficiency and performance using the management interfaces. The administrator can also monitor a pool's capacity usage, expand it, or delete a pool that is no longer in use.



Homogeneous and Heterogeneous Pools

Dell Unity XT HFA systems have multiple tiers of drives to select when building a
storage pool. Dell Unity XT AFA systems have a single SSD tier.

Tiers are a set of drives of similar performance. Each tier supports a single RAID
type and only certain drive types. The Dell Unity XT defined tiers are:
• Extreme Performance tier: SAS Flash drives
• Performance tier: SAS drives
• Capacity tier: NL-SAS drives

Depending on the tier selection, a storage pool can be heterogeneous (multitiered) or homogeneous (single-tiered):
• Homogeneous pools are composed of one type of drive. Only one disk type is
selected during pool creation: SAS Flash, SAS, or NL-SAS drives.
• Heterogeneous pools are made up of more than one type of drive. If the FAST VP8 license is installed, multiple tiers are selected for the storage pool, and each tier can be associated with a different RAID type.

8 Fully Automated Storage Tiering for Virtual Pools or FAST VP is a feature that
relocates data to the most appropriate disk type depending on activity level. The
feature improves performance while reducing cost.



Homogeneous pools (Extreme Performance pool: Tier 0 SAS Flash drives; Performance pool: Tier 1 SAS drives; Capacity pool: Tier 2 NL-SAS drives) compared with a heterogeneous pool that combines tiers, each pool serving LUNs.

Storage pools are also classified as dynamic or traditional, depending on which type the system supports.

Warning: SAS Flash 4 drives cannot be part of a heterogeneous pool. These drives can only be part of a homogeneous All-Flash pool.



Storage Pools Management

To manage a storage pool in Unisphere, the storage administrator must select Pools from the Storage section.

Unisphere Pools page with the available storage pools

The Pools page shows the list of created pools with their allocated capacity, utilization details, and free space.

Details about a pool are displayed on the right-pane whenever a pool is selected.

The Pools page enables an administrator to:


• Create a storage pool.
• View the properties of a selected pool.
• Modify some of the pool settings.
• Expand existing pools.
• Delete a storage pool.
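
The same operations are available from UEMCLI. A brief sketch, assuming the /stor/config/pool commands from the Unisphere CLI User Guide; the system address, credentials, and pool ID are illustrative:

    # List the configured pools with their capacity details:
    uemcli -d 10.0.0.1 -u Local/admin -p MyPassword /stor/config/pool show -detail
    # Delete a pool that is no longer in use (pool_1 is an example ID):
    uemcli -d 10.0.0.1 -u Local/admin -p MyPassword /stor/config/pool -id pool_1 delete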



Dynamic Pools



Dynamic Pools Overview

Dynamic pools are storage pools whose tiers are composed of Dynamic Pool
private RAID Groups.

Dynamic Pools are supported on Dell Unity XT physical hardware only.


• All-Flash arrays (AFA): Dell Unity XT 380F, 480F, 680F, and 880F models.
• Hybrid Flash arrays (HFA): Dell Unity XT 380, 480, 680, and 880 models.

All storage pools that are created on a Dell Unity XT physical system using
Unisphere are dynamic pools by default.

All Dell Unity storage resources and software features are supported on dynamic
pools.

Pool management operations in Unisphere, Unisphere CLI, and REST API are the
same for both dynamic and traditional pools.
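
For example, a pool query through the REST API might look like the following sketch; the management address and credentials are placeholders, and the X-EMC-REST-CLIENT header is required by the Unity REST API:

    # Retrieve name and capacity attributes for all pools (GET with basic auth):
    curl -k -u admin:MyPassword -H "X-EMC-REST-CLIENT: true" \
      "https://10.0.0.1/api/types/pool/instances?fields=name,sizeTotal,sizeFree"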

Important: The Dell UnityVSA supports only traditional storage pools.



Dynamic Pool Benefits

Dynamic pools are not limited by traditional RAID technology, and they provide improved storage pool planning and provisioning, delivering a better cost per GB.
• Users can provision pools to a specific capacity without having to add drives in specific multiples (for example, expansion by 4+1 or 8+1).
• Users can expand pools by a specific capacity, generally by a single drive, unless crossing a drive partnership group boundary.
• Different drive sizes can be mixed in dynamic pools.

Another major benefit is the time that it takes to rebuild a failed drive.
• Dynamic pools reduce rebuild times by having more drives engaged in the
rebuild process.
− Data is spread out to engage more drives.
• Multiple regions of a drive can be rebuilt in parallel.

− The process increases rebuild performance with increased drive counts.
− Reduces exposure to a second drive failure.



Provisioning Dynamic Pools

Dynamic Pools can be created using any of the supported management interfaces:
Unisphere, UEMCLI, and REST API.

The user selects the RAID type for the dynamic pool while creating it.
• Dynamic pools support RAID 5, RAID 6, and RAID 1/0.

The RAID width is set when the pool is created.


• With Unisphere, the system automatically defines the RAID width to use based
on the number of drives selected.
− For example, if the storage administrator selects 11 drives with RAID 5 protection for a pool, the RAID width for that pool is 8+1.
• Any expansion of the pool uses the underlying RAID width no matter how many
drives are added to the pool.

The user may set the RAID width only when creating the pool with the UEMCLI or
REST API interfaces.
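
A minimal UEMCLI sketch of a dynamic pool creation follows; the disk group ID and drive count are examples (list the available disk groups first), and the option that sets a nondefault RAID width should be verified in the Unisphere CLI User Guide:

    # List the available disk groups to obtain their IDs:
    uemcli -d 10.0.0.1 -u Local/admin -p MyPassword /stor/config/dg show
    # Create a dynamic pool from six drives of disk group dg_15 (example ID):
    uemcli -d 10.0.0.1 -u Local/admin -p MyPassword /stor/config/pool create -name "Pool 1" -descr "Dynamic pool" -diskGroup dg_15 -drivesNumber 6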

Dynamic pools also support the use of drives of the same type but different
capacities in the same drive partnership group.
• The rule applies to storage pool creation, storage pool expansion, and the grabbing of unused drives if one of the pool drives fails.
• If the number of larger capacity drives is not greater than the RAID width, the larger drives' entire capacity is not reflected in the "usable capacity."

Tip: Check the table with RAID width defined by the drive count
selected for each RAID level.



Creating Dynamic Pools

Dynamic pools creation process: drive extents (DE) on the drives of a drive partnership group form a drive extent pool, with some extents reserved as spare space (SPARE); RAID extents (RE) are grouped into private RAID Groups 1 and 2, and each RAID Group is exported as a private LUN partitioned into 256 MB slices.

The storage administrator must select the RAID type when creating a dynamic
pool.

The system automatically populates the RAID width based on the number of drives selected.
1. The example shows a RAID 5 (4+1) configuration in a drive partnership group.
2. At the physical disk level, the system splits the whole disk region into identical portions of the drive called drive extents.
a. Drive extents hold a position of a RAID extent or are held in reserve as spare space.
b. The drive extents are grouped into a drive extent pool.
3. The drive extent pool is used to create a series of RAID extents. RAID extents are then grouped into one or more RAID Groups.
4. The process creates a single private LUN for each created RAID Group by concatenating pieces of all the RAID extents and striping them across the LUN.
a. The LUN is partitioned into 256 MB slices. The system distributes the slices across many drives in the pool.

b. The 256 MB slices are the granularity at which the slice manager operates and at which storage resources are allocated.



Creating Dynamic Pools in Unisphere

Name and Description

To create a dynamic pool using the Unisphere interface, the user must select
Pools under the Storage section on the navigation pane.

To launch the Create Pool wizard, select the + icon in the Pools page. Then the
user must follow the steps in the wizard window.

Enter the pool name and, optionally, a pool description, then advance to the next wizard step.

Launching the Unisphere Create Pool wizard

Tiers

Select the tiers to build the storage pool.


• In Dell Unity XT hybrid flash arrays, the tiers that do not have enough drives to
build a pool are grayed out.
• In Dell Unity XT All-Flash systems, the wizard displays by default only the
Extreme Performance tier (if there are available drives).

A storage administrator can change the RAID protection level for the selected tiers
and reserve up to two drives per 32 drives of hot spare capacity.



Hot spare capacity in dynamic pools is used as additional spare space necessary
to rebuild faulted drives.

Selecting the storage tiers to build the pool

Drives

Select the number of drives from the selected tier to add to the pool (the step is
displayed here).

In the example, a minimum of 7 drives was selected to comply with the RAID width
(4+1) plus the two drives of hot spare capacity.



Setting the pool storage capacity

Capability Profile

If the pool is used for the provisioning of vVol storage containers, the user can
create and associate a Capability Profile to the pool.

A Capability Profile is a set of storage capabilities for a vVol datastore. The feature
is discussed in VMware Datastores Provisioning.



Enabling a Capability Profile for VMware storage containers

Summary

The pool configuration can be reviewed from the Summary page.

The user can go back and change any of the selections that are made for the pool,
or click Finish to start the creation job.



Wizard Summary page

Results

The Results page shows the status of the job.

A green check mark with a 100% status indicates successful completion of the
task.



Wizard results page



View Dynamic Pool Properties

In Unisphere, a dynamic pool properties page can be invoked by double-clicking a selected dynamic pool or by clicking the edit icon.

The properties page for both pool types includes the General, Drives, Usage, and Snapshot Settings tabs. The properties page for a dynamic pool also includes the RAID tab.

Dynamic pool properties

The General tab shows the type of pool, and the user can only change the pool
name and description.

The Drives tab displays the characteristics of the disks in the pool.

The Usage tab shows information about storage pool allocation. The information
includes the space storage resources use, and the free space. The Usage tab also
shows an alert threshold for notifications about the space remaining in the pool,
and a chart with the pool used capacity history.
• The space all snapshots and thin clones use in the pool is reported in the Non-
base Space field.
• The tab also displays the data reduction savings (in GB, percentage and
savings ratio) achieved by the deduplication and compression of supported
storage resources.



On the Snapshot Settings tab, the user can review and change the properties for
snapshot automatic deletion.

On the RAID tab (shown here), the user can view the drive types and number of
drives per drive type within the pool. The user can also check:
• The RAID protection level (RAID 5, RAID 6, or RAID 1/0)
• The stripe width of each drive type
• The hot spare capacity reserved based on the tier selection



Expanding Dynamic Pools

A dynamic pool can expand up to the system limits by one or more drives under
most circumstances.
• Expansion is not allowed if the number of drives being added is more than
enough to fill a drive partnership group, but at the same time not enough to also
fulfill the minimum drive requirements to start another drive partnership group.
− The maximum size of a drive partnership group is 64 drives.
− The minimum number of drives to start a new partnership group is the RAID
width+1 requirement for spare space.
• When a new drive type is added, the pool must be expanded with a minimum
number of drives.

− The minimum of drives must satisfy the RAID width+1 requirement for spare
space.
When expanding with a drive count that is equal to the stripe width or less, the process is divided into two phases:
1. The dynamic pool is expanded by a single drive, and the free space is made available to the user.
a. This process enables some of the additional capacity to be added to the pool.
b. However, this happens only if the single-drive expansion does not increase the amount of spare space required.
c. If the pool is running out of space, the new free space helps delay the pool from becoming full.
d. The new free space is made available to the user if the expansion does not cause an increase in the spare space that the pool requires.
e. When extra drives increase the spare space requirement, a portion of the space being added is reserved equal to the size of one drive.
f. This space reservation can occur when the spare space requirement for a drive type (one spare per 31 drives) is crossed.
2. The dynamic pool is expanded by the remaining drive count from the original expansion request. Once this process concludes, the expansion job is complete.

The system automatically creates the private RAID Groups depending on the
number of drives added.
• Space becomes available in the pool after the new RAID Group is ready.
• Expanding by the RAID width+1 enables space to be available quickly.

For more information about expanding dynamic pools, refer to the Dell EMC Unity: Dynamic Pools white paper.
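
From the CLI, a single-drive expansion might look like the following sketch; the pool and disk group IDs are examples, and the extend action syntax should be verified in the Unisphere CLI User Guide:

    # Expand pool_1 by one drive from disk group dg_15 (example IDs):
    uemcli -d 10.0.0.1 -u Local/admin -p MyPassword /stor/config/pool -id pool_1 extend -diskGroup dg_15 -drivesNumber 1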

Warning: Be aware that a single drive expansion takes time to complete as RAID extents are rebalanced and space is created.



Expand Dynamic Pool by Set of Drives

Adding Drives

When the user expands a dynamic pool, the number of added drives determines
the time in which the new space is made available. The reason is that the drive
extents are rebalanced across multiple drives.

If the number of drives that are added to the pool is equivalent to the existing drive
count (stripe width plus an extra drive):
• The drive count exceeds the Stripe width.
• The time for the space to be available matches the time that it takes to expand
a traditional pool.

Adding the same number of drives to a RAID 5 (4+1) + extra drive configuration (DE = drive extent, Spare = spare space, D1-12 = disks, RE = RAID extent). In this example, a RAID 5 (4+1) configuration with a RAID width of 5 is shown. The user then adds the same number of drives to the current pool.

When extra drives increase the spare space requirement, a portion of the space being added is reserved equal to the size of one drive. This space reservation can occur when the spare space requirement for a drive type (one spare per 31 drives) is crossed.

Expansion Process

This expansion process creates extra drive extents. From the drive extents, the
system creates RAID extents and RAID Groups and makes the space available to
the pool as user space.



In the example, since the storage administrator is expanding the Dynamic Pool by
the same number and type of drives, the process concludes relatively fast.
• The user and spare extents are all contained on the original six disks.
• The number of drives in the Pool has not reached the 32 drive boundary so
there is no requirement to increase the spare space.

Private RAID Group 1 and private RAID Group X in the RAID 5 (4+1) + extra drive configuration. The system runs a background process to rebalance the space across all the drives.

Rebalancing Drive Extents

Adding capacity within a Drive Partnership Group causes the drive extents to
rebalance across the new space. This process includes rebalancing new, used,
and spare space extents across all drives.

The process runs in parallel with other processes and in the background.
• Balancing extents across multiple drives distributes workloads and wear across
multiple resources.
• Rebalancing optimizes resource use, maximizes throughput, and minimizes response time.

Rebalancing of extents across the private RAID Groups in the RAID 5 (4+1) + extra drive configuration. Observe that this graphic is an example, and the actual algorithm is design-dependent.



Expand Dynamic Pool with Single Drive

Adding One Drive

Single drive expansion process: adding one drive to a RAID 5 (4+1) + extra drive configuration.

When adding a single drive, or fewer drives than the RAID width, the space is available in about the same time that a proactive copy (PACO) operation to the drive takes.

If adding a single drive when the spare space boundary is crossed, none of that drive's capacity is added to the pool's usable capacity.

The example shows the expansion of a dynamic pool with a single drive of the
same type. The process is the same as adding multiple drives.

With the traditional method, if 12 TB drives are used in a RAID 5 (4+1) configuration, the expansion would require a minimum of 60 TB.
• The reason is that a new RAID Group consisting of five drives must be added.
• This method is not cost effective since the storage administrator must purchase
5 x 12 TB drives.

With dynamic pools, the pool can be expanded based on a single drive capacity.
This method is more cost effective since pools can be expanded based on capacity
needs without the additional cost of drives. Example: 1 x 12 TB drive.



Expansion Process

Single drive expansion process: extents are moved to the new drive.

In the example, the system first identifies the extents that must be moved off drives
to the new drive as part of the rebalance process. As the extents are moved, their
original space is freed up.

Rebalancing Drive Extents

Single drive expansion process: extents are rebalanced across all drives.

The expansion process continues to rebalance the extents to free space on the
new drive. The background process also creates free space within the pool.



Mixing Drive Sizes within Dynamic Pools

Adding Drive with Different Capacity

Adding a different capacity drive to a dynamic pool (D1-6 = drives, DE = drive extent). In this RAID 5 (4+1) configuration, the extra capacity of the larger drive is not available until the drive partnership group contains at least the same number of 800 GB drives as the RAID width + 1.

Although not a recommended best practice:


• Drives of the same type but different capacities can be mixed within a dynamic
pool.
• Drives can be placed within the same drive partnership group.

This rule applies for storage pool creation and expansion, and the use of spare space. However, different drive types, including SAS Flash drives with different writes per day, cannot be in the same RAID Group.

The example displays a RAID 5 (4+1) configuration using mixed drive sizes.
Although Unisphere displays only 400 GB drives, users can select 800 GB drives to
add to the pool using the user interface.

In this configuration, only 400 GB of space is available on the 800 GB drive. The
remaining space is unavailable until the drive partnership group contains at least
the same number of 800 GB drives as the RAID width+1.



Adding More Drives to Pool

Adding a different capacity drive to a dynamic pool: RAID 5 (4+1) with [5 x 400 GB] + [1 x 800 GB] (D1-6 = drives, DE = drive extent).

Depending on the number of drives of each capacity, dynamic pools may or may not use the entire capacity of the larger drives. All the space within the drives is available only when the number of drives within a drive partnership group meets the RAID width+1 requirement.

The example shows the expansion of the original RAID 5 (4+1) mixed drive
configuration by five drives. The operation reclaims the unused space within the
800 GB drive.

Reclaiming Available Space

RAID 5 (4+1) with [5 x 400 GB] + [1 x 800 GB] (D1-6 = drives, DE = drive extent): space becomes available after adding drives to satisfy the RAID width, [4+1] + 1 = 6.

After adding the correct number of drives to satisfy the RAID width of (4+1) + 1, all
the space becomes available.



The same scenario applies to a pool that is being created with mixed drive
capacities.

Observe that although these examples are possible scenarios, best practices for
building pools with the same drive sizes and types should be followed whenever
possible.



Expanding Dynamic Pools in Unisphere

In Unisphere, the Expand Pool wizard is launched by selecting the pool to expand
and clicking the Expand Pool button.

The wizard displays the tier that is used to build the pool and the tiers with drives available to the pool.
• For Dynamic Pools, the available tiers are Extreme Performance tiers with
different SAS-Flash drives.
• The next step enables the user to select the number of drives from the tier with
available drives, to add to the Dynamic pool.
• The user can add a single drive or all the drives available in the tier.

In the example, one new disk is added to the pool, increasing the usable capacity by 280 GB.



Spare Space

Dynamic pools use spare space to rebuild failed drives within the pool.

Spare space consists of drive extents that are not associated with a RAID Group,
used to rebuild a failed drive in the drive extent pool.
• Each drive extent pool reserves a specific percentage of extents on each disk
as the spare space.
• The percentage of reserved capacity varies based on drive type and the RAID
type that is applied to this drive type.
• If a drive within a dynamic pool fails, spare space within the pool is used.

Spare space is handled automatically when a pool is created or expanded, and is automatically balanced across all the drives within the pool.
• The minimum drive count includes spare space allocation.
• For every 32 drives of the same drive type within a dynamic pool, enough spare space is allocated to rebuild the largest drive in the pool. For example, a drive partnership group with 64 drives of one type reserves spare space equivalent to two drives.

Spare space is counted as part of the pool overhead, as with RAID overhead, and therefore is not reported to the user.
• Spare space is also not part of the usable capacity within the pool for user data.



Drive Rebuild

Drive Fail

When a drive fails, spare space is identified.

Drive failure scenario

When a pool drive fails, the spare space within the same Drive Partnership Group
as the failed drive is used to rebuild the failed drive.

A spare extent must be from a drive that is not already in the RAID extent that is
being rebuilt.

Rebuild Process

Pool rebuild process: drive extents are rebuilt using the spare space, and spare drive space is consumed if available.



The drive extents are rebuilt using the spare space. Multiple RAID extents can be
rebuilt simultaneously.

The idea is to spread the extents across several drives and do so in parallel, so rebuild operations complete more quickly.
• RAID extents are composed of drive extents from different drives. For that
reason, the drive extents being rebuilt target different drives which in turn,
engages more drives for the rebuild.
• The code ensures that multiple drive extents from the same RAID extents do
not end up on the same drive. This condition would cause a single point of
failure for the RAID extent.

Spare space must be replenished after a rebuild completes if there is insufficient spare space within the Drive Partnership Group.
• After rebuilding the drive, if there is an appropriate “unused” drive, the pool
consumes the free drive within the system which is a valid spare. The unused
drive automatically replaces the failed drive thus replenishing the consumed
spare space.
• If no free drives exist which match the requirement, an alert is logged to indicate
that there is not enough spare space. After a new drive is added, it is pulled into
the pool. The resulting operation moves drive extents in the background to
rebalance the data.



Demonstration - Dynamic Pools

This demo covers how to create and manage a Dynamic Pool on a Dell Unity XT
storage system.

Movie:

The web version of this content contains a movie.



Traditional Pools



Traditional Pools Overview

Traditional pools are storage pools whose tiers are composed of Traditional RAID
Groups.
• Traditional RAID Groups are based on Traditional RAID with a single
associated RAID type and RAID width.
• Traditional RAID Groups are limited to 16 drives.

Dell UnityVSA supports the deployment of ONLY traditional storage pools.

The configuration of a traditional storage pool involves defining the types and
capacities of the disks in the pool.

A storage administrator can define the RAID configuration (RAID types and stripe
widths) when selecting a tier to build a traditional storage pool.
• Each tier supports drives of a certain type and a single RAID level.

During the storage pool provisioning process, a storage administrator has the
option to select:
• More than one drive type to build a multitiered pool.
• Just one drive type to create a single-tiered pool.

Administrators can also identify if the pool must use the FAST Cache feature, and
associate a Capability Profile for provisioning vVol datastores.

Important: The Dell Unity XT platform uses dynamic pools by default, but supports the configuration of traditional pools using UEMCLI or REST API.



Provisioning Traditional Pools

Traditional storage pools can be created using the following interfaces:


• Unisphere (only on UnityVSA systems)
• Unisphere CLI
• REST API

Provisioning Traditional Pool process steps:


• Provide a name and a description for the pool.
• Select the storage tiers to build the pool from. More than one tier can be used to
create a multitiered pool or one drive type to create a single-tiered pool.
• Define the RAID type and change RAID width to accommodate the number of
drives that best fit the user needs.
• Define if the pool is using FAST Cache.
• Select the number of drives from the selected storage tiers matching the
defined RAID configuration.
• Associate a Capability Profile with the pool for provisioning vVol datastores.
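
As a sketch of the CLI path, assuming the /stor/config/profile and /stor/config/pool commands; the profile and disk group IDs are examples, and options such as FAST Cache enablement should be verified in the Unisphere CLI User Guide:

    # List the storage profiles (RAID type and stripe width combinations):
    uemcli -d 10.0.0.1 -u Local/admin -p MyPassword /stor/config/profile show
    # Create a traditional pool using an example profile and disk group:
    uemcli -d 10.0.0.1 -u Local/admin -p MyPassword /stor/config/pool create -name "Perf Pool" -diskGroup dg_8 -drivesNumber 5 -storProfile profile_19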

The FAST VP and Capability Profile features are discussed in more detail in the Scalability and Performance section and the VMware Datastores section.

Tip: The Dell Unity XT platform uses dynamic pools by default, but supports the configuration of traditional pools using UEMCLI or REST API. UnityVSA only supports the configuration of traditional pools using the Unisphere interface.



Creating Traditional Pools - Process

Traditional pools consist of one or more traditional RAID Groups, which are built from drives of a certain type.
• These RAID Groups are based on traditional RAID with a single associated
RAID type, RAID width, and are limited to 16 drives.
• These pools use dedicated hot spares and are only expanded by adding RAID
Groups to the pool.

Expanded view of the process for creating traditional pools: a RAID 1/0 (2+2) group and a RAID 6 (4+2) group, each with its own private LUN.

1. A heterogeneous pool is created from a RAID 1/0 (2+2) group that is built from the SAS Flash drives.
• The heterogeneous pool also includes a RAID 6 (4+2) group that is built with HDDs.
2. A RAID Group Private LUN is created for each RAID Group.
3. These Private LUNs are split into continuous array slices that are 256 MB. Slices hold user data and metadata. (FAST VP moves slices to the various tiers in the pool using this granularity level.)
4. After the Private LUNs are partitioned out in 256 MB slices, they are consolidated into a single pool that is known as a slice pool.



Creating Traditional Pools

To create a new traditional storage pool using the Unisphere interface, select
Pools under the STORAGE section on the navigation pane.

Wizard Summary page with the configuration details for a homogeneous storage pool called
Performance Pool.

To launch the Create Pool wizard, select the add (+) icon in the Pools page.
Follow the steps in the wizard window:
• Enter the pool Name and the pool Description.
• The wizard displays the available storage tiers. The user can select the tier and
change the RAID configuration for the selected tier.
• Select whether the pool is supposed to use FAST Cache.
• Select the number of drives from the tier to add to the pool.
• If the pool is used for the provisioning of vVol storage containers, the user can
create and associate a Capability Profile * to the pool. A Capability Profile is a set of storage capabilities for a vVol datastore.



• The pool configuration can be reviewed from the Summary page (displayed
here). The user can go back and change any of the selections that are made for
the pool, or click Finish to start the creation job.
• The results page shows the status of the job. A green check mark with a 100%
status indicates successful completion of the task.

* The Capability Profile feature is described in the VMware Datastores Provisioning section.



Viewing Traditional Pool Properties

In Unisphere, the pool Properties page can be invoked by double-clicking a selected storage pool or by clicking the edit icon.

The properties for both a traditional or a dynamic pool include the General, Drives,
Usage, and the Snapshot Settings tabs. The properties page for traditional pools in
hybrid systems includes a tab for FAST VP.

Traditional pool Properties page with callouts 1 through 6.

1: The General tab shows the type of pool (traditional or dynamic), and the user can only change the pool name and description.

2: The Drives tab displays the characteristics of the disks in the pool.

3: On the FAST VP tab, it is possible to view the data relocation and tier
information. The FAST VP tab is displayed only for traditional pools on hybrid and
the virtual storage systems.



4: The Capacity option on the Usage tab shows information about storage pool allocation and use, including:

• The total amount of space that is allocated to existing storage resources and
metadata. This value does not include the space that is used for snapshots.
• The Used field displays the pool total space that is reserved by its associated
storage resources. This value includes the space the thin clones and snapshots
use. This value does not include preallocated space.
• The Non-base Space field displays the space that is used by all snapshots and
thin clones in the pool.
• The Preallocated Space field displays the amount of remaining space in the
pool that is reserved for, but not actively being used by, a storage resource.
• The Free field shows the amount of unallocated space that is available for storage resource consumption, measured in TB. The percentage of free capacity is also displayed by hovering over the pool graphic of the current pool capacity.
• Alert threshold, which is the percentage of storage allocation at which
Unisphere generates notifications about the amount of space remaining in the
pool. You can set the value between 50% and 84%.
• The Data Reduction Savings shows the amount of space that is saved when the
feature is enabled on the pool. The savings is displayed as a size, percentage,
and a proportion ratio. The value includes savings from compression and
deduplication.
• The Pool used capacity history is a chart graphic with pool consumption over
time. The user can verify the used capacity at a certain point in time by hovering
over different parts of the graphic.

5: The Storage Resources option on the Usage tab provides a list of the storage
resources in the pool, along with the applicable pool utilization metrics.

6: On the Snapshot Settings tab, the user can review and change the properties
for snapshot automatic deletion.



Expanding Traditional Pools

A storage pool that needs extra capacity can be expanded by adding more disks to
the storage tiers of the pool.

In the example, five new disks from a performance tier are added to the pool. Their additional 1.7 TB of capacity increases the pool's total usable capacity to 13.9 TB.

In Unisphere, select the pool to expand at the Pools page, and then select Expand
Pool. Follow the wizard steps:
• On the Storage Tiers step, select the tiers for the drives you want to add to the
pool.
− If you are adding another tier to the storage pool, you can select a different
RAID configuration for the disks in the tier.
• On the Drives step, select the number of drives to add to each tier selected on
the previous step.
• Review the pool configuration on the Summary page (the page is displayed
here). Click Finish to start the expansion job.
• The results page shows the status of the job. A green check mark with a 100%
status indicates successful completion of the task.



Activity: Creating Storage Pools

Virtual lab for facilitated sessions:


• Manually assign storage tier levels to virtual
disks presented to the Dell EMC UnityVSA
system.
• Create two multitiered (heterogeneous)
storage pools.
• Create three single-tier (homogeneous)
storage pools.



Provision Block Storage



Block Storage Resources

With Dell Unity XT systems, a storage administrator can manage addressable partitions of block storage resources so that host systems can use these resources.

Supported block storage resources: LUNs, consistency groups, and thin clones.

Block storage resources provide hosts with access to general-purpose block-level storage through iSCSI or Fibre Channel (FC) connections. After a host connects to the block storage resource, it can use it as a local storage drive.

Block storage resources that are supported by the Dell Unity XT platform include
LUNs, consistency groups, and clones.
• A LUN or logical unit represents a quantity of block storage that is allocated for
a host. You can allocate a LUN to more than one host if you coordinate the
access through a set of clustered hosts.
• A Consistency Group is an addressable instance of LUN storage that can
contain one or more LUNs (up to 50). Consistency Groups are associated with



one or more FC or iSCSI hosts. Snapshots that are taken of a Consistency
Group apply to all LUNs associated with the group.
• A Thin Clone is a read/write copy of thin block storage resources that shares
blocks with the parent resource. Supported block storage resources are LUN,
Consistency Group, or VMFS datastore. Thin Clones are discussed in more detail in the Efficiency Features module.

Important: Although not listed here, VMFS datastores and vVol (Block) datastores are storage objects that are provisioned to the ESXi host through block storage protocols. Both objects are treated as VMware storage resources supported by the storage system with a specific set of features, which are discussed in the VMware Datastores Provisioning section.



LUN Provisioning

A storage administrator can provision LUNs to SAN hosts using any of the
supported management interfaces.

LUNs page showing details of selected LUN

In Unisphere, select Block under the Storage section to provision and manage
LUNs.

The LUNs page opens by default.

The page shows the list of created LUNs with their size in GB, allocated capacity, and the pool each LUN was built from.

To see the details about a LUN, select it from the list. The LUN details are
displayed on the right pane.

From the LUNs page, you can create a LUN, view the LUN properties, modify
some settings, and delete an existing LUN.
• Before creating a LUN, at least one pool must exist in the storage system.
• The deletion of a LUN that has host access that is configured is not allowed.
• The administrator must manually remove host access before deleting the LUN.



Create LUNs

Launch Wizard

To create LUNs select the add (+) icon from the LUNs page to launch the wizard.
Then follow the Create LUNs wizard steps to complete the configuration.

Opening the Create LUNs wizard from the LUNs page

The wizard steps include:


• Configure LUN properties
• Configure Host Access
• Configure Snapshot schedule
• Provide Replication Mode and RPO
• Configuration Summary
• Process Results



Configure

Define the characteristics of the LUN (or multiple LUNs) to create in the storage
system.

Create LUNs wizard: Configure LUN step

1. Type or select the number of LUNs to create.
2. Provide a name and description for the LUNs.
3. Select the pool to use for building one or more LUNs.
4. If using a multi-tiered pool to build the LUN, define the tiering policy for data relocation.
5. Define the storage size to allocate for the LUNs.
6. Enable or disable thin provisioning.
• Thin provisioning is enabled by default.
• Thin provisioning can only be disabled at the moment the LUN is created.
• A thin storage resource cannot be changed to a thick storage resource later.
7. Enable data reduction if the selected pool has a significant percentage of SAS Flash drives.
• The data reduction feature includes deduplication and compression capabilities.
• Advanced deduplication is also available in case data reduction is selected for the storage object.
8. Host I/O limit policies can also be associated with one or more LUNs to prioritize I/O operations for the assigned hosts.
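
The equivalent UEMCLI operation is sketched below; the pool ID, size, and option names such as -thin and -dataReduction are examples to verify against the Unisphere CLI User Guide:

    # Create a 100 GB thin LUN with data reduction in pool_1 (example values):
    uemcli -d 10.0.0.1 -u Local/admin -p MyPassword /stor/prov/luns/lun create -name "LUN01" -pool pool_1 -size 100G -thin yes -dataReduction yes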

Access

The storage administrator can associate the LUNs with an existing host or host
group configuration.

Create LUNs wizard: Configure Access step



The host or host configuration is previously created with a defined connectivity
protocol and access level.

To associate a host configuration, select the add (+) icon and then choose the host profile.

Select OK to save the configuration or Cancel to dismiss the selection.

Snapshot

Local data protection can be configured for one or more LUNs at the time of the
creation.

Create LUNs wizard: Configure Snapshot Schedule step

Snapshot Schedule is disabled by default.

To configure local data protection select Enable Automatic Snapshot Creation.


Then select an existing snapshot schedule or create a new one.



Snapshots are covered in more detail in the Data Protection with Snapshots section.

Replication

A storage administrator can also configure remote data protection on a LUN at the
time of its creation or later.

Create LUNs wizard: Provide a Replication Mode and RPO step

Replication is disabled by default.

To configure remote data protection, select Enable Replication. Then define a Recovery Point Objective and the destination of the replica.

The administrator can optionally enable the replication of the snapshot schedule to
the destination, and overwrite destination resource.

Replication is covered in more detail in the Data Protection with Replication section.



Summary

The Summary page shows all the selections made for the new LUN (or multiple
LUNs) for a quick review.

Create LUNs wizard: Summary step

The storage administrator can go back to make changes or select Finish to start
the creation job.

In the example, a snapshot schedule and replication were not enabled for the single LUN being created, and no host configuration was assigned for LUN access.

Results

The Results of the process are displayed on the last page. Select OK to close the
wizard.



Create LUNs wizard: Results step



View LUN Properties

List of LUNs

To modify the properties of a LUN, double-click the storage resource, or select it from the list and select the pencil icon.

LUNs page showing the list of LUNs with LUN-1 selected

LUN Properties

The General tab of the LUN properties shows both the storage object capacity
utilization details and the free space.



Properties of LUN-1 showing the storage capacity and data reduction savings on the General tab

1. A storage administrator can expand the LUN size.
2. For thin LUNs built from a pool with enough flash capacity, Data Reduction can be enabled or disabled.
• If the feature is enabled, the tab also displays the data reduction savings in GB (or TB), ratio, and percentage.
• If the LUN is built from a dynamic pool, the Advanced Deduplication feature can also be enabled.
• Data reduction and advanced deduplication are discussed in more detail in the Storage Efficiency Features section.
3. Optionally change the LUN SP ownership to the peer SP.



Other tabs of the LUN properties enable the administrator to:
• View or associate host configurations
• Configure and manage local and remote protection
• Set Host I/O limits
• Change tiering policy and check data relocation status

− For LUNs built from a multi-tiered pool, the properties window also includes
a FAST VP tab.



Consistency Groups Provisioning

A storage administrator can provision Consistency Groups with new or existing LUNs using any of the supported management interfaces.

Consistency Groups page showing details of a selected group

To manage Consistency Groups in Unisphere, select Block from the navigation pane and Consistency Groups from the top menu.

The Consistency Groups page shows the list of created groups with their size in
GB. The page also shows the number of pools that are used to build each group
and the allocated capacity.

To see the details about a Consistency Group, select it from the list and its details
are displayed on the right-pane.

From this page, create a group, view its properties, modify some settings, or delete
an existing Consistency Group.
• The deletion of a Consistency Group that has configured host access is not
allowed.
• The storage administrator must manually remove host access before deleting
the Consistency Group.



Create Consistency Groups

Launch Wizard

To create a Consistency Group, select the add (+) icon from the Consistency Groups page to launch the wizard. Then follow the Create a Consistency Group wizard steps.

Launching the Create a Consistency Group wizard from the Consistency Groups page

Enter a name and provide an optional description for the consistency group.

Follow the other wizard steps to complete the configuration:


• Add existing LUNs, or create new LUNs
• Configure Host Access
• Configure Snapshot schedule
• Provide Replication Mode and RPO
• Review Configuration Summary
• Check Process Results
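
For reference, a hedged UEMCLI sketch of the same creation step; the /stor/prov/luns/group path and the attribute names are assumptions to confirm in the Unisphere CLI User Guide:

    # Create an empty consistency group; LUNs are then created in, or added to, it:
    uemcli -d 10.0.0.1 -u Local/admin -p MyPassword /stor/prov/luns/group create -name "CG01" -descr "App consistency group"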



Storage

Select the add (+) icon to configure the Consistency Group member LUNs. There
are two options: selecting existing LUNs or creating new ones.

When adding existing LUNs, the group members can be from different pools.
The selected LUNs are included in the list and marked with the Add action.

LUNs set to be added to the Consistency Group



LUNs set to be created with the Consistency Group

The Create new LUNs option launches the Configure LUN wizard. Define the characteristics of the member LUNs: name, size, and the pool to use.
• The LUN identity is a combination of the name and a sequenced number.



Access

The storage administrator can associate the Consistency Group with an existing
host or host group configuration.

Create a Consistency Group wizard: Configure Access step

The host or host configuration is previously created with a defined connectivity protocol and access level.
• The administrator must define the access level of the hosts to the LUN
members of the Consistency Group.

To associate a host configuration, select the add (+) icon and then choose the host profile.

Select OK to save the configuration or Cancel to dismiss the selection.



Snapshot

Local data protection can be configured for the Consistency Group at the time of
the creation.

Create a Consistency Group wizard: Configure Snapshot Schedule step

Snapshot Schedule is disabled by default.

To configure local data protection select Enable Automatic Snapshot Creation.


Then select an existing snapshot schedule or create a new one.

Snapshots are covered in more detail in the Data Protection with Snapshots section.

Replication

A storage administrator can also configure remote data protection for the
Consistency Group at the time of its creation or later.



Create a Consistency Group wizard: Provide a Replication Mode and RPO step

Replication is disabled by default.

To configure remote data protection, select Enable Replication. Then define a Recovery Point Objective and the destination of the replica.

The administrator also has the options to:


• Enable the replication of the snapshot schedule to the destination
• Reuse destination resource
• Overwrite destination resource

Replication is covered in more detail in the Data Protection with Replication section.

Summary

The Summary page shows all the selections made for the new Consistency Group
for a quick review.



The storage administrator can go back to make changes or select Finish to start
the creation job.

Existing LUNs are added once the group is created; new LUNs are created and added once the group is created. (Left: LUNs set to be added to the Consistency Group. Right: LUNs set to be created with the Consistency Group.)

In the example, a snapshot schedule and replication were not enabled for the Consistency Group, and no host configuration was assigned for access.

Results

The Results of the process are displayed on the last page. Select OK to close the
wizard.



Create a Consistency Group wizard: Results step



View Consistency Group Properties

To view and modify the properties of a Consistency Group, select the edit (pencil)
icon.

Consistency Groups properties: LUNs

The General tab of the Consistency Group properties page depicts its utilization
details and free space.

The LUNs tab shows the LUNs that are part of the Consistency Group. The
properties of each LUN can be viewed and a LUN can be removed from the group
or moved to another pool. LUNs can also be added to a Consistency Group. Host
access to the Consistency Group member LUN can also be removed to enable the
deletion of the storage resource.

The other tabs of the Consistency Group properties window enable the user to
configure and manage local and remote protection, and advanced storage features.



Activity: Create Block Storage LUNs and Consistency Groups

Virtual lab for facilitated sessions:


• Create two LUNs.
• Create a multi-LUN Consistency Group.
• Create three single-tier (homogeneous)
storage pools.

Note: This lab covers the provisioning of block-level storage resources from the heterogeneous pool that is created in the Create Storage Pools lab.



Block Storage Access Overview

Host access to Dell Unity XT block storage and VMware datastores requires host
connectivity to the storage system with supported protocols.

Host access to Dell Unity XT block storage resources: hosts register initiators and discover storage over iSCSI or Fibre Channel; the supported block resources include LUNs, consistency groups (CG), VMFS datastores, and vVol (Block) datastores.

Configuration operations to achieve host access to the provisioned storage span the host, the connectivity, and the storage system.
• Connectivity to the storage system uses storage networking with a combination
of switches, physical cabling, and logical networking for the specific block
protocol.
• Connected hosts must have an initiator that is registered on the Unity system. Host initiators are either an iSCSI initiator IQN or an FC WWN.

Storage resources must be provisioned on the Dell Unity XT storage system for the
host.



The host must then discover the newly presented block storage within its disk
subsystem. The discovery and preparation of the storage for access depend on the
operating system being used. In an iSCSI environment, discovery can be
accomplished using a SCSI bus rescan.

Preparing the storage is accomplished by creating disk partitions and formatting the
partition.

Tip: Hosts can be directly cabled to the Dell Unity XT storage systems. However, the key benefits of switch-based block storage connectivity are realized in the logical networking. Hosts can share the Dell Unity XT front-end ports; the number of connected hosts can be greater than the number of Dell Unity XT front-end ports. Redundant connectivity is also created by networking multiple switches, enhancing storage availability.



Host Access Requirements

Some requirements must be met before you configure hosts to access the storage
system.

To connect hosts to the storage system, ensure that these requirements are
fulfilled:
• Configure the storage system for block storage provisioning.
− Prepare supported front-end interfaces for host connectivity.
− Configure LUNs or consistency groups using any of the supported
administrative interfaces.
• The host must have an adapter to communicate over the storage protocol.
− In the Fibre Channel environments, a host has a host bus adapter (HBA).
− For iSCSI, a standard NIC can be used.
• Multipathing software is recommended to manage paths to the storage system.
− If one of the paths fails, it provides access to the storage.
• In a Fibre Channel environment, users must configure zoning on the FC
switches.
− In iSCSI environments, initiator and target relationships must be established.
• To achieve the best performance:
− In iSCSI environments, the host should be on a local subnet with each iSCSI
interface that provides storage for it.
− To achieve maximum throughput, the iSCSI interface and the hosts should
have their own private network.
− In a multipath environment, the SP physical interfaces must have two IP
addresses (one IP address assigned to the same port on each SP). The
interfaces should be on separate subnets.
• After connectivity has been configured, the hosts must be registered with the
Dell Unity XT storage array.

− Registration must be performed manually on the Windows and Linux hosts, while the ESXi host is registered automatically.



With connectivity between the host and the array complete, you can provision block storage volumes to the host.



Host-to-Storage Connectivity

Host connections to a Dell Unity XT system are SAN-attached or directly connected.

Diagram with high availability options for host-to-storage connectivity: Servers A through D, each with NIC or FC/iSCSI HBA initiators, connect through redundant subnets (192.124.1.100 and 192.124.2.100) and FC switches to target ports 0-3 on SP A and SP B of the Dell Unity XT system.

Directly attaching a host to a Dell Unity XT system is supported if the host connects
to both SPs and has the supported multipath software.

HBAs are initiators that are used to discover and connect to the storage system
target ports.

Depending on the type of HBA being used on the host (Emulex, QLogic, or
Brocade), users can install HBA utilities to view the parameters. Utilities are
downloaded from the respective vendor support pages and are used to verify
connectivity between HBAs and the arrays they are attached to.

Fibre Channel HBAs should be attached to dual fabrics for high availability.

iSCSI connections should be attached using different subnets for HA.

For the iSCSI connectivity, a software or hardware iSCSI initiator must be used.



Deep Dive: Check the E-Lab navigator (https://elabnavigator.emc.com/eln/hostConnectivity) for more details on host connectivity options.



Host-to-Storage Connectivity Rules

Certain rules should be followed as a best practice when configuring host-to-storage connectivity.

Some guidelines to follow are:


• Any single host should connect to any single array with one protocol only.
• A host may connect to different arrays with different protocols.
• iSCSI should use all HBA or all NIC connections, with no mixing in a host.
• Arrays may see connections from hosts with NICs or HBAs.
• Hosts with CNAs use either FC or iSCSI to connect to a single array.



Front-End Connectivity Options

Dell Unity XT 380/380F Models

Dell Unity XT 380/380F rear view (image): CNA ports, Ethernet ports, and expansion I/O modules on each SP.

The Dell Unity XT 380/380F models support two embedded CNA ports per SP for connectivity over Fibre Channel, plus Fibre Channel expansion I/O modules.
• The FC embedded ports enable connectivity at 4 Gb/s, 8 Gb/s, or 16 Gb/s.
• The 4-port, 16 Gb FC expansion I/O module enables connectivity at 4 Gb/s, 8 Gb/s, or 16 Gb/s.
• The 4-port, 32 Gb FC expansion I/O module enables connectivity at 8 Gb/s, 16 Gb/s, or 32 Gb/s.

− For best performance, the 32 Gb/s FC module is recommended.


iSCSI connections are supported for multiple port options.
• If the CNA port is configured for iSCSI, the port supports 10 Gb/s optical SFPs or Active Twinax cables.
• iSCSI expansion I/O module options include a 4-port, 1 Gb/s or 10 Gb/s, BASE-T RJ45 module using Cat 5/6 copper cables.
• Also supported is the 2- or 4-port, 10 Gb/s IP/iSCSI module with SFP+ or Active Twinax copper.

− The 2-port I/O module includes the iSCSI offload engine.
− iSCSI TCP offload devices eliminate processor bottlenecks by processing TCP and iSCSI workloads independently of processor cycles.
− iSCSI and TCP offload devices support 1 Gb/s and 10 Gb/s speeds.



Tip: To improve processor-bound applications, consider implementing these solutions. If possible, configure jumbo frames (MTU 9000) on all ports in the end-to-end network to provide the best performance.
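One quick way to confirm that jumbo frames work end to end is to send a non-fragmentable payload sized for a 9000-byte MTU from a Linux host; a minimal sketch, where the target address is a placeholder:

    # 9000-byte MTU minus 28 bytes of IP/ICMP headers = 8972-byte payload
    # -M do sets the don't-fragment flag, so oversized frames fail visibly
    ping -M do -s 8972 -c 3 192.168.64.10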

Dell Unity XT 480/480F and Higher Models

Dell Unity XT 480/480F and higher rear view (image): Mezz cards and expansion I/O modules on each SP.

The Dell Unity XT 480/480F and higher models support block storage connectivity
with Mezz Card configurations and expansion I/O modules.

Mezz Cards are only available on the Unity XT 480/480F, 680/680F, and 880/880F
storage arrays, with these configurations.
• The 4-port SFP Mezz Card serves Ethernet traffic and iSCSI block protocol.
− The card provides an SFP+ connection to a host or switch port with connectivity speeds of 1 Gb/s, 10 Gb/s, or 25 Gb/s.
• The 4-port 10 Gb BASE-T Mezz Card serves Ethernet traffic and the iSCSI block protocol.
− The card employs a 1 Gb/10 Gb BASE-T (RJ45) connection to a host or to a switch port.
Expansion I/O modules that provide Fibre Channel or iSCSI block protocol support
are available with the following configurations:
• 4-Port 16 Gb Fibre Channel SLIC
• 4-Port 32 Gb Fibre Channel SLIC



• 4-Port 25 GbE SFP-based SLIC
• 4-Port 10 GbE BaseT SLIC



Front-End Fibre Channel Interfaces Management

Fibre Channel interfaces are created automatically on the Dell Unity XT storage
systems.

Information about the SP A and SP B Fibre Channel I/O modules and a particular
Fibre Channel port can be verified using Unisphere or uemcli commands.

Fibre Channel interfaces management in Unisphere Settings

In Unisphere, verify the Fibre Channel interfaces by selecting the Fibre Channel
option under the Access section of the Settings configuration window.

The Fibre Channel Ports page shows details about I/O modules and ports. The
World Wide Names (WWNs) for the Dell Unity XT system Fibre Channel ports are
unique.

To display information about a particular FC port, select it from the list and select
the edit link.



The user can change the speed at which the I/O module port operates for optimization purposes.
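The same information is available from the Unisphere CLI. A minimal sketch, assuming a management address of 192.168.1.100 and local admin credentials (both placeholders):

    # List the Fibre Channel ports and their current settings
    uemcli -d 192.168.1.100 -u Local/admin -p MyPassword123! /net/port/fc show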



Host Fibre Channel Initiators Registration

Initiators are endpoints from which FC and iSCSI sessions originate. Each initiator
has a unique worldwide name (WWN) or iSCSI qualified name (IQN).

Any host bus adapter (HBA) can have one or more initiators that are registered on
it.
• Some HBAs or CNAs have multiple ports. Each HBA or CNA port that is zoned
to an SP port is one path to that SP and the storage system containing that SP.
• Each path consumes one initiator record. An HBA or CNA port can be zoned
through different switch ports to the same SP port or to different SP ports. The result
provides multiple paths between the HBA or CNA port and an SP.

Dell Unity XT storage systems support both Fibre Channel and iSCSI host
identifiers registration.
• A Unity XT system registers all paths that are associated with an initiator in place
of managing individual paths.
• After registration, all paths from each registered initiator automatically have
access to any storage provisioned for the host.
• Multiple paths ensure a highly available connection between the host and array.
• You can manually register one or more initiators before you connect the host to
the storage system.

The maximum number of connections between servers and a storage system is limited. Each Unity XT model supports a model-dependent maximum number of initiator records per storage-system SP.

Failover software running on the server may limit the number of paths.

Access from a server to an SP in a storage system can be:


• Single path: A single physical path (port/HBA) between the host system and the
array



• Multipath: More than one physical path between the host system and the array
over multiple HBAs, HBA ports, and switches
• Alternate path: Provides an alternate path to the storage array when a primary
path fails



Host Fibre Channel Initiators Management - Unisphere

Registration

In Unisphere, verify the host initiators by selecting Initiators under the ACCESS
section. The Initiators page is displayed.

A storage administrator can manually register one or more initiators before connecting the host to the storage system using the Fibre Channel protocol.
• Initiators that are registered and associated with a host display a green and white check icon.
• All paths from each registered initiator are automatically granted access to provisioned storage for the host, ensuring high availability.

Initiators page in Unisphere

In the example, two FC host initiators display a green check with a blue icon, indicating that they are logged in but not associated with a host. This icon is displayed when the connected FC ports are used for replication purposes. The other initiators are iSCSI, as shown by the IQN identifier.

Initiator Paths

The link between a host initiator and a target port on the storage system is called
the initiator path.



Each initiator can be associated with multiple initiator paths.
• Storage administrators can control operations at the initiator level.
• The storage system manages the initiator paths automatically.

In Unisphere, verify the host initiator paths by selecting the Initiator Paths tab from
the Initiators page.

Initiator Paths page in Unisphere

The example displays two FC initiators. Each FC initiator is connected to a single SP target port.



Front-End iSCSI Interfaces Management - Unisphere

iSCSI Interfaces

iSCSI interfaces in the Dell Unity XT and UnityVSA systems enable hosts to access
the system block storage resources using the iSCSI protocol.
• A storage administrator can associate an iSCSI interface with one or both SPs.
• Multiple iSCSI interfaces can coexist on each Storage Processor (SP).

− These iSCSI interfaces become the available paths that hosts with the proper privileges can use to access the relevant storage resources.
To view and manage the iSCSI interfaces, select Block under the STORAGE section in Unisphere, and open the iSCSI Interfaces tab.
• The iSCSI interfaces page shows the list of interfaces, the SP, and Ethernet
ports where they were created, and the network settings.
• The page also shows the IQN (iSCSI Qualified Name) of each interface in the last column.

iSCSI Interfaces page in Unisphere

From this page, it is possible to create, view and modify, and delete an iSCSI
interface.

The example shows iSCSI interfaces that are created for Ethernet Port 2 on SPA
and SPB and Ethernet Port 3 on SPA and SPB.

Creating Interface

To add iSCSI interfaces to a Unity XT system, select Block under STORAGE in Unisphere. Then select add (+) from the iSCSI Interfaces page.



Add iSCSI Network Interface settings window

• Select the Ethernet port for the iSCSI interface.
− The interfaces can only be created on an Ethernet port that is not participating in a link aggregation.
• Enter the network address for the interfaces.
− Specify an IPv4 or IPv6-based address. Although both IPv4 and IPv6 addresses are supported, the same type of address must be used for both SPs.
− Enter the subnet mask or prefix length that identifies the subnet where the iSCSI target resides.
− Enter the gateway IP address associated with the iSCSI network interface.
• The system automatically generates the IQN.
• If applicable, associate a VLAN with the interface to isolate network traffic.
− VLAN IDs should be configured only if the network switch port was
configured to support VLAN tagging of multiple VLAN IDs.
• Select OK to create the interfaces.
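The equivalent operation is available from the Unisphere CLI. A minimal sketch; the management address, credentials, network addresses, and the port ID (shown here as spa_eth2, which varies by system) are placeholders:

    # Create an iSCSI interface on Ethernet port 2 of SP A, tagged with VLAN 100
    uemcli -d 192.168.1.100 -u Local/admin -p MyPassword123! /net/if create -type iscsi -port spa_eth2 -addr 192.168.64.10 -netmask 255.255.255.0 -gateway 192.168.64.1 -vlanId 100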



View/Modify

To view and modify the properties of an iSCSI interface, select the interface and
the edit icon. The Edit iSCSI Network interface window is launched.

A storage administrator can change the network settings for the interface and
assign a VLAN.

To assign a VLAN, click the Edit link to enter a value (1 through 4094) to be
associated with the iSCSI interface.

Best Practices: For continuous storage system access in the event of a failover, Dell recommends setting up iSCSI interfaces on the same port of each SP. The use of host-based multipathing software to manage multiple connections to the storage system is also recommended. Multipathing ensures data access if one of the SPs becomes unavailable because of a system software upgrade or component failure.



iSCSI CHAP Security Settings

Unisphere Settings CHAP configuration

On a Dell Unity XT system, you can require all hosts to use CHAP authentication
on one or more iSCSI interfaces.

To require CHAP authentication from all initiators that attempt access to the iSCSI
interface, open Settings and go to the Access section.
• Select Enable CHAP Setting.
− The system denies access to storage resources of this iSCSI interface from
all initiators that do not have CHAP configured.
• To implement the global CHAP authentication, select Use Global CHAP and
specify a Username and Global CHAP Secret.



− Optionally set a global forward CHAP secret that all initiators can use to
access the storage system.
• To implement Mutual CHAP authentication, enable Use Mutual CHAP, and
specify a Username and mutual CHAP Secret.

− Mutual CHAP authentication occurs when the hosts on a network verify the
identity of the iSCSI interface by verifying the iSCSI interface mutual CHAP
secret.
− Any iSCSI initiator can be used to specify the "reverse" CHAP secret to
authenticate to the storage system.
− When mutual CHAP Secret is configured on the storage system, the CHAP
secrets are shared by all iSCSI interfaces that run on the system.
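On a Linux host using open-iscsi, the matching initiator-side settings live in the driver configuration file. A minimal sketch with placeholder usernames and secrets; the values must match what was configured in Unisphere:

    # /etc/iscsi/iscsid.conf (open-iscsi session authentication settings)
    node.session.auth.authmethod = CHAP
    # Forward CHAP: the initiator authenticates to the storage system
    node.session.auth.username = chapuser
    node.session.auth.password = ForwardChapSecret1
    # Mutual CHAP: the storage system authenticates back to the initiator
    node.session.auth.username_in = mutualuser
    node.session.auth.password_in = MutualChapSecret1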



Host iSCSI Initiator Options

Different host iSCSI initiator options are supported for connectivity with Dell Unity
XT storage systems.

Host iSCSI initiator options (diagram): with an iSCSI software initiator, the full SCSI, iSCSI, TCP, and IP stack runs on server resources; with a software initiator and a TOE NIC, the TCP/IP processing moves to NIC/HBA resources; with an iSCSI HBA (hardware initiator), the entire iSCSI and TCP/IP stack is offloaded to the HBA.

With a software iSCSI initiator, the only server hardware that is needed is a Gigabit
networking card. All processing for iSCSI communication is performed using server
resources, such as processor, and to a lesser extent, memory. The processor
handles the iSCSI traffic consuming resources that could be used for other
applications. Windows has an iSCSI initiator that is built into the operating system.

On the other hand, a hardware iSCSI initiator is an HBA that is displayed to the
operating system as a storage device. The iSCSI HBA handles the processing
instead of server resources minimizing resource use on the server hardware.

Hardware iSCSI HBAs also enable users to boot a server from the iSCSI storage,
something a software iSCSI initiator cannot do. The downside is that iSCSI HBAs
typically cost 10 times what a Gigabit NIC would cost, so you have a cost vs.
functionality and performance trade-off.



Most production environments with high loads will opt for hardware iSCSI HBA over
software iSCSI, especially when other features such as encryption are considered.

There is a middle ground, though. Some network cards offer TCP/IP Offload
Engines (TOE) that perform most of the IP processing that the server would
normally perform. TOEs lessen the resource overhead associated with software
iSCSI because the server only processes the iSCSI protocol workload.



Host iSCSI Initiator Registration Process – Windows Host

Microsoft iSCSI Initiator software enables connectivity from a host computer running a supported Windows operating system. SANs allow for the iSCSI target functionality without investing in more hardware. See the Dell Support Matrix for the latest information about boot device support.

Microsoft iSCSI initiator is loaded on the Windows host.


• Discovers targets and establishes connections to an external iSCSI-based
storage array through an Ethernet network interface controller.
• Users can use the Microsoft iSCSI Initiator in existing network infrastructure to
enable block-based storage area networks or SANs.
• MS iSCSI initiator does not support booting the iSCSI host from the iSCSI
storage system.

Within iSCSI, a node is defined as a single initiator or target. iSCSI nodes identify
themselves by an iSCSI name. iSCSI names are assigned to all nodes and are
independent of the associated address. An iSCSI node name is also the SCSI
device name. These definitions map to the traditional SCSI target/initiator model.

iSCSI addresses can be one of two types:


• iSCSI Qualified Name (IQN)
• IEEE naming convention, Extended Unique Identifier (EUI)



Host iSCSI Initiator Registration Process – Linux Host

The iSCSI driver open-iscsi comes with the Linux kernel.


* See the Dell Unity Family Configuring Hosts to Access Fibre
Channel or iSCSI Storage guide for the driver parameters.

The Linux operating system includes the iSCSI initiator software.


• Configure the open-iscsi driver with the network parameters for each initiator
that connects to the iSCSI storage system.
• Dell Technologies recommends changing some driver parameters. The configuration file is /etc/iscsi/iscsid.conf *

− Each host connected to an iSCSI storage system must have a unique iSCSI
initiator name for its initiators.
− Multiple hosts connected to the iSCSI interface must not share the same iSCSI initiator name.
The initiator connects to an external iSCSI-based storage array through an Ethernet network.
Example:
• Host: iqn.1994-05.com.linux1:2246335fe96e
• Target Port: iqn.1992-04.com.emc:cx.virt1619dzgnh7.a0

To view the iSCSI initiator name:

• Use the cat command to check /etc/iscsi/initiatorname.iscsi

To view and discover the target array:

• Command: iscsiadm -m discovery -t sendtargets -p <target ip port addr>



In the example, the iscsiadm command was used to discover target port
192.168.32.91. The discovery shows all connected ports on the array
(apm00172445908), each entry represents an initiator path to the storage system.
Each target port displays the array port and SP at the end of the entry. For
example, a0, a1, b0, and b1.
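After discovery, the host logs in to a discovered target before the LUNs become visible. A minimal sketch using the target IQN and portal from the example above; 3260 is the default iSCSI port:

    # Log in to one discovered target portal
    iscsiadm -m node -T iqn.1992-04.com.emc:cx.virt1619dzgnh7.a0 -p 192.168.32.91:3260 --login

    # Confirm the active session
    iscsiadm -m session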

Note: The Linux iSCSI driver gives the same name to all NICs in a
host. This name identifies the host, not the individual NICs. When
multiple NICs from the same host are connected to an iSCSI interface
on the same subnet, only one NIC is used. The other NICs are in
standby mode. The host uses one of the other NICs only if the first
NIC fails.



Host iSCSI Initiators Management – Unisphere

Registration

A storage administrator can associate an initiator when adding or editing a host configuration. To view existing host initiators registered with the storage system, go to ACCESS > Initiators.

For iSCSI initiators, an iSCSI target port can have both a physical port ID and a
VLAN ID. In this case, the initiator path is between the host initiator and the virtual
port.

Unisphere Initiators page with the host iSCSI interfaces

The example shows two iSCSI initiators (a Windows host and an ESXi host).
• As with FC, any host with a green and white check icon is registered and associated with an initiator.
• The yellow triangle indicates that the initiator has no logged-in initiator paths, and the connections should be verified.



Initiator Paths

A storage administrator can manually register one or more initiators before connecting the host to the storage system using the iSCSI protocol.
• Once the initiators are registered and associated with a host, all paths from
each registered initiator are automatically granted access to storage provisioned
for the host.
• Having multiple initiator paths ensures a highly available connection between the host and storage system.

In Unisphere, verify the host initiator paths by selecting the Initiator Paths tab from
the Initiators page.

Initiator Paths page in Unisphere

The example displays the initiator paths for host Win12b. The initiator shows two
paths, one to each SP. The Target Ports are the hardware Ethernet ports on the
array.



Host Configuration

Host configurations are logical connections through which hosts or applications can
access storage resources. They provide storage systems with profiles of the hosts
that access storage resources using the Fibre Channel or iSCSI protocols (block).

Before a host can access block storage, you must define a configuration for the
host and associate it with the provisioned storage resource.

Host configurations define how hosts connect to the Unity system over the FC or iSCSI protocols (diagram).

The relationship between a host and its initiators enables block storage resources on the Dell Unity XT to be assigned to the correct machine.

Tip: Host profiles are also used to identify NAS clients that access
storage resources using the NFS protocol (file). The configuration of
these profiles is discussed in the File Storage Provisioning section.



Hosts Configuration Management - Unisphere

Management

To manage the host configuration profiles, select Hosts from the Access section
of Unisphere.

From the Hosts page, users can create, view, modify, and delete a host
configuration.

To see the details about a host configuration, select it from the list; the details about the host profile are displayed in the right pane.

Create Host Configuration

To create a host configuration, click the + icon, and select the Host as the profile to
create.



Summary page of the Add a Host configuration wizard

Follow the Add a Host wizard steps:


• Select the name for the host profile and the host operating system.
• Provide the Host IP address. This step is mostly required when creating a NAS
client.
• A SAN host requires users to select one of the automatically discovered
initiators. When initiators are not automatically discovered and logged into the
system, a storage administrator can manually add the initiators. To manually
add initiators, select the add (+) on the Manually Added Initiators section.
− Then select Create iSCSI Initiator, input the SAN host IQN and the CHAP
credentials on the Add iSCSI Initiator window, and select Add.
− Or select Create Fibre Channel Initiator, input the SAN host HBA WWN on
the Add Fibre Channel Initiator window, and select Add.
• On the Summary page, review the host configuration and select Finish to
accept the changed configuration.

The initiators are registered, and the host is added to the list. In the example, the Windows server WIN16B was added with two Fibre Channel initiators registered.

View/Modify Host Configuration

The properties of a host configuration can be invoked by selecting the host and the
edit icon.



Initiator Paths tab of the host configuration properties window

• The General tab of the Host properties window enables changes to the host
profile.
• The LUNs tab displays the LUNs provisioned to the host.
• The Network Addresses tab shows the configured connection interfaces.
• The Initiators tab shows the registered host initiators.
• The Initiator Paths tab shows all the paths that are automatically created for
the host to access the storage.



Host Access to Provisioned Block Storage

Unisphere

Host access to provisioned block storage is specified individually for each storage
resource.

To configure host access, open the storage resource properties window and go to
the Host Access tab.
• The tab displays the host configurations that are currently associated with the storage resource.

Association of a LUN with three Windows hosts.

To add host configurations, select the add (+) icon.


• The window shows a list of available host configurations.
− Select one or more hosts to associate with the storage resource. Available
protocols for the connection are shown for each host.
− If there are no available host configurations, launch the Add a Host or Add
ESXi host wizard from the More Actions option.



• Host LUN Identifiers (HLUs) are assigned automatically by the system but an
administrator can change them by selecting Assign Host LUN IDs.
− One of the benefits of manually setting the HLUs is that you can have a
consistent mapping view of LUNs for each host in a cluster.
− It also allows the configuration of a bootable LUN that enables a host to boot
from the Storage Area Network (SAN).
− The Host LUN ID valid range is from 0 to 16381, however, some operating
systems do not recognize HLUs higher than 255.
• Select OK to save the changes.

Host

To access a block storage resource from the host, the storage resource must be
made available to the host.

Verify that each host is associated with an initiator IQN or WWN record by viewing
the Access > Initiators page.

Perform a scan of the bus from the host to discover the resource. Different utilities
are used depending on the operating system.
• With Linux systems, run a SCSI scan function or reboot the system to force the discovery of new devices.
• Use a Windows or Linux/UNIX disk management tool such as Microsoft Disk
Manager to initialize and set the drive letter or mount point for the host.

Format the LUN with an appropriate file system type: For example, FAT or NTFS.
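A minimal Linux sketch of the preparation steps above; the device name /dev/sdb and the mount point are placeholders for the newly discovered LUN:

    # Partition the newly discovered LUN with a GPT label
    sudo parted /dev/sdb mklabel gpt
    sudo parted /dev/sdb mkpart primary 0% 100%

    # Format the partition and mount it
    sudo mkfs.ext4 /dev/sdb1
    sudo mkdir -p /mnt/lun01
    sudo mount /dev/sdb1 /mnt/lun01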

Alternatively, configure the applications on the host to use the LUN directly as a storage drive.



Host Groups Overview

Resource shared by the member hosts of a host group (diagram): Hosts 1 through 4 in a host group all access LUN 1.

A host group is a logical container that groups multiple hosts and block storage resources.

• Hosts in a host group have access to the same LUNs or VMFS datastores. The
feature provides each host in the group the same type of access to the selected
block resources.
• A host can only be part of a single host group at a time.
• There are two types of host groups set at creation.
− General: Used to provide LUN access to multiple hosts.
− ESX: Used to provide VMFS datastore access to multiple ESXi hosts.



• A host group displays all resources that are connected to hosts within the
group. If a resource is only connected to some but not all hosts within the group,
the resource is displayed within the properties of the host group.
• A storage administrator can create and add new resources to one or more hosts
that are contained within a host group.
• Host group management is done using Unisphere, Unisphere CLI, and REST
API.



Host Group Configurations

When creating a host group, a storage administrator has the option to choose
between the types of hosts to group: General and ESX.

Existing host configurations can be added to the host group, or new hosts created
within the wizard.
• Each host has a unique HLU for each LUN it accesses, whether or not that host
is part of a host group.



All resources within the host group connect to all hosts within the host
group.

Example (diagram): LUNs 1 through 4 within the host group, assigned HLUs 0 through 3, are accessible by all member hosts (Hosts 1 through 4).

Block storage resources added to a host group become accessible to all the hosts within the group.
When adding existing hosts as members:
• Option of merging existing host access into the host group.
• Merging ensures all hosts in the group have the same level of
access.



Example (diagram): LUNs 1 through 5 with HLUs 0 through 4; host access to LUN 5 is granted only to Host 4, and Hosts 1 through 3 have no access to it.
A host group does not restrict a resource from being attached to a subset of
hosts contained within a host group.
When adding the host access of a host group member host to a storage
resource:
• Access is granted only to the individual member host added.
• The other host group members have no access to the storage resource.



Host Groups Management - Unisphere

Management

To manage the host groups, select Hosts from the Access section of Unisphere.

Host Groups page in Unisphere

Select the Host Groups tab.

From the Host Groups page, users can create, view, modify, and delete a host
group.

To see the details about a host group, select it from the list; the details about the host group are displayed in the right pane.

Create Host Group

To create a host group, select the Add (+) icon from the page menu.



Summary page of the Create a Host Group wizard

The Create a Host Group wizard is launched. Follow the wizard steps:
• Type a name for the host group, and select the type of host group: General or
ESX.
• On the Configure Hosts section, select the + icon:
− The Add Hosts to Host Group window is launched with the list of existing
host configuration profiles.
− Select the checkbox of the hosts to add or select More Actions and Create
New Host to create a new host configuration profile.
− Click OK to associate the host configuration profiles with the host group. The
Add Hosts to Host Group window is closed and the hosts are listed.
• Select the Merge existing LUNs of selected hosts into host group checkbox
if adding existing host configuration profiles. Then advance to the next wizard step.
• The list of merged LUNs or VMFS datastores that are now accessible to all the members of the host group is displayed.
− You can also select + to create new storage resources and automatically
make them available to all the hosts in the group.



− Optionally you can change the Host LUN IDs.
• On the Summary page, review the host configuration and then click Finish to
save the new configuration.

− In the example, a host group (type ESX) was created to aggregate ESXi
hosts esxi-1.hmarine.test and esxi-2.hmarine.test.
− The VMFS datastore datastore05 that was originally provisioned to esxi-1.hmarine.test was merged into the host group.
− The VMFS datastore is accessible to both ESXi hosts.

View/Modify Hosts

The properties of a host group can be invoked by selecting the host group and
clicking the edit icon.

Host group Hosts tab showing the members of the group

The General tab of the host group properties window shows the status and the type
of host group.
• The tab also enables the change of the host group name and description.

To view/modify the host members of a host group, select the Hosts tab.

• The page shows the list of hosts added to the host group.
• The list also shows the number of LUNs that each host has access to.



• The user can select a host and remove it from the group.
• The user can also click the + icon to add new hosts to the host group.

View/Modify Storage Resources

To view/modify the storage resources that are accessible by the members of the host group, select the LUNs (shown here) or the Datastores tab.

Host Group LUNs tab showing the LUNs associated with the group

• The page shows the list of storage resources the hosts within the host group have access to.
• The list also shows the level of access:
− All Access means that all the hosts in the group were granted access to the
storage resource.
− Some Access means that only specific hosts within the group were granted
access to the storage resource.
• The user can select a storage resource and remove all access that is assigned
to the host group.



• The user can change the Host LUN ID associated with the storage resource.
• The user can also click the + icon to add new storage resources to the host
group.



Activity: Windows Host Access to Block Storage

Virtual lab for facilitated sessions:


• Manually set the initiator configuration in
a Windows host and discover the
storage array iSCSI targets.
• Configure the Windows host access
profile that is associated with the host
initiators using Unisphere.
• Present the provisioned LUN to the
Windows host.



Activity: Linux Host Access to Block Storage

Virtual lab for facilitated sessions:


• Manually set the host initiator
configuration in a Linux host, and
discover the storage system iSCSI
targets.
• Configure a host profile for the Linux
host and associate it with the host iSCSI
initiators.
• Present the provisioned LUN to the
Linux host.



Provision File Storage



File Storage

Supported file storage resources (diagram): NAS server, file system, and file shares.

File storage in the Dell Unity platform is a set of storage resources that provide file-
level storage over an IP network.

SMB and NFS shares are created on the storage system and provided to Windows,
Linux, and UNIX clients as a file-based storage resource. Shares within the file
system draw from the total storage that is allocated to the file system.

File storage support includes NDMP backup, virus protection, event log publishing,
and file archiving to cloud storage using CTA as the policy engine.

The components in the Dell Unity platform that work together to provision file-level
storage include:
• NAS Server
− Virtual file server that provides file resources on the IP network (to which
NAS clients connect).



− Configured with IP interfaces and other settings that are used to export
shared directories on various file systems.
• File system
− Manageable file-based storage resource associated with a specific quantity
of storage, a particular file access protocol, and one or more shares.
Network clients can access shared files or folders.
• Shares

− Exportable access pointer to file system storage that network clients can use
for file-based storage.
− The file-based storage is accessed through the SMB/CIFS or NFS file
access protocols.



NAS Servers

• NAS servers are software components that provide file data transfer and
connection ports for users, clients, and applications that access the storage
system file systems.
− Communication is performed through TCP/IP using the storage system
Ethernet ports.
− The Ethernet ports can be configured with multiple network interfaces.
• Before provisioning a file system over SMB or NFS, an NFS datastore, or a File (vVol) datastore, a NAS server must be running on the system.
− The NAS server must be appropriate for managing the storage type.
− NAS servers can provide multiprotocol access for both UNIX/Linux and
Windows clients simultaneously.
• NAS servers retrieve data from available disks over the SAS backend, and
make it available over the network using the SMB or NFS protocols.

− SMB users and applications from the Microsoft Windows environments
− NFS users and applications from UNIX/Linux
− VMware NFS and File (vVol) datastores



NAS Servers Management

To manage NAS servers in Unisphere, select Storage > File > NAS Servers.

The NAS Servers page shows the list of created NAS servers, the SP providing
the Ethernet port for communication, and the Replication type, if any are
configured.


From the NAS Servers page, a storage administrator can create a NAS server,
view its properties, modify some settings, and delete an existing NAS server.

To see the details about a NAS server, select it from the list; its details are displayed in the right pane.



Create NAS Servers

To create a NAS server, select Add (+) from the NAS Servers page, and follow the
Create a NAS Server wizard steps.

Summary page of the Create a NAS Server wizard.

Enter a name for the NAS server. Select the storage pool to supply file storage.
Select the Storage Processor (SP) where you want the server to run. It is also
possible to select a Tenant to associate with the NAS Server. IP multitenancy is a
feature that is supported by Dell Unity systems.

In the next step, configure the IP interfaces used to access the NAS server. Select
the SP Ethernet port that you want to use and specify the IP address, subnet
mask, and gateway. If applicable, select a VLAN ID to associate with the NAS
server. VLAN ID should be configured only if the switch port supports VLAN
tagging. If you associate a tenant with the NAS server, you must select a VLAN ID.

In the Configure Sharing Protocols page, select the protocols the NAS server must support:
• Windows shares: If you configure the NAS server to support the Windows shares (SMB, CIFS), specify an SMB hostname and a Windows domain. You must also provide the username and password of a Windows domain account with privileges to register the SMB system name in the domain.
• Linux/UNIX shares: If you configure the NAS server to support UNIX/Linux shares (NFS), the NFSv4 protocol and support for File vVols can be enabled.
• Multiprotocol: SMB and NFS shares on the same file system.

The UNIX Directory Service page is only available if UNIX/Linux shares or multiprotocol are selected. A UNIX Directory Service must be used: NIS or LDAP.

The NAS server DNS can be enabled on the next page. For Windows shares, enable the DNS service, add at least one DNS server for the domain, and enter its suffix.

The configuration of remote replication for the NAS Server is also available from
the wizard.

Review the configuration from the Summary page and click Finish to start the
creation job.
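The same operation can be scripted with the Unisphere CLI. A minimal sketch, assuming pool ID pool_1 and SP A; the IDs and credentials are placeholders and should be verified against the Unisphere CLI guide:

    # Create a NAS server named nas01 on SP A, backed by pool_1
    uemcli -d 192.168.1.100 -u Local/admin -p MyPassword123! /net/nas/server create -name nas01 -sp spa -pool pool_1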



View NAS Server Properties

To view and modify the properties of a NAS Server, select the server and the edit
icon.

From the General tab of the properties window, you can view the associated pool,
SP, and supported protocols. It is also possible to change the name of the NAS
Server and change the SP ownership. Selecting which NAS servers run on each
SP balances the performance load on the Storage Processors.

NAS Server properties window

The General tab also displays the configured interfaces and their associated roles.
Possible roles are Production or Backup and DR Test.

From the Network tab, you can view and modify the properties of the associated
network interfaces. New interfaces can be added, and roles defined. Existing
interfaces can be deleted. From this tab, it is also possible to change the preferred
interface, view and define network routes, and enable advanced networking
features such as Packet Reflect.

If you have configured multiple interfaces for a NAS server, the system
automatically selects the interface the default route uses for outgoing
communication. This interface is identified as the preferred interface. The NAS

Module 1 Course Introduction and System Administration


server uses preferred interfaces when the application does not specify the source
interface, or the destination is on a remote subnet.

When a NAS server starts outbound traffic to an external service, it compiles a list
of all the available network interfaces on the proper subnet. It then performs one of the following actions if a preferred interface of the appropriate type (IPv4 or IPv6) is in the compiled list:
• If the preferred production interface is active, the system uses the preferred
production interface.
• If the preferred production interface is not active, and there is a preferred active
backup interface, the system uses the preferred backup interface.
• If the preferred production interface is not active (NAS server failover), and
there is no preferred backup interface, the system does nothing.

The Naming Services tab enables the user to define the Naming services to be
used: DNS, LDAP, and/or NIS.

The Sharing Protocols tab enables the user to manage settings for file system
storage access. For Windows shares (SMB, CIFS) it provides the Active Directory
or Standalone options. For Linux/UNIX shares, it provides the NFS v3 and/or NFS
v4 options. The user can also enable the support for File Transfer Protocol and
Secure File Transfer Protocol. If a UNIX Directory Service is enabled in the Naming
Services tab, multiprotocol access to the file system may also be provided.

The other tabs of the NAS Server properties window enable the user to enable
NDMP Backup, DHSM support, and Event Publishing. Extra configurable features include antivirus protection, Kerberos authentication, and remote protection.



Ethernet Ports

You can verify the configuration of the network ports the NAS Server interfaces use
in the Settings configuration window.


From the Settings window, select the Ethernet option under the Access section.

From the Ethernet Ports page, settings such as link transmission can be verified
and changed.

To display information about a particular Ethernet port, select it from the list and
click the edit link.

The properties window shows details about the port, including the speed and MTU
size. The user can change both these fields if required.

The port speed can be set to 100 Mbps or 1 Gbps. The user can also set the port
to Auto Negotiate with the switch it is connected to.

The MTU for the NAS Server, Replication, and Import interfaces can be set to any
value (1280 to 9216). The MTU has a default value of 1500 bytes. If you change
the value, you must also change all components of the network path, including switch ports and host. If you want to support jumbo frames, set the MTU size field to 9000
bytes. This setting is only appropriate in network environments where all
components support jumbo frames end-to-end. In virtualized environments, jumbo
frames should be configured within the virtual system, as well.



Activity: Create NAS Servers

Virtual lab for facilitated sessions:


• Create one NAS server for SMB file
access.
• Create one NAS server for NFS file
access.



File Systems Management

To manage a file system, select File from the Storage section.

Managing a file system.

The File Systems page shows the list of created file systems with their sizes in
GB. Information includes the allocated capacity, the NAS server that is used to
share each one, and the pool it was built from.

From the File Systems page, you can create a file system, view its properties,
modify some of its settings, and delete it.

To see the details about a file system, select it from the list. The details about the
file system are displayed on the right pane.



Create File System

To create a file system, click the “add” link from the File Systems page to launch
the Create a File System wizard.

Summary page of the Create a File System wizard

To set the parameters for creating the file system, follow the steps of the wizard:
• Provisioning a file system involves selecting the NAS Server to associate with it.
So, before provisioning the file system, a NAS server must be created. The
protocols the file system supports depend on the selected NAS server.
• On the next step of the wizard, you must enter a name and optionally enter a
description for the file system.
• The wizard enables the configuration of file-level retention for the file system.
This feature is covered in more detail in the Scalability, Performance, and
Compliance Features module.
• The storage administrator also defines the size of the file system and the pool to
build it from. The capacity of the file system can be expanded after its creation.
− If not defined otherwise, a file system is thin provisioned by default. The only
time a user can define a file system as thick provisioned is at the moment it
is created. This setting cannot be changed later.



− Data Reduction can be enabled at the moment of the file system creation.
The file system must be thin-provisioned to support the feature. Data
Reduction is discussed in more detail in the Storage Efficiency module.
− If a multitiered pool was selected to build the file system, the user can define
the tiering policy for it.
• The user can also define how the file system should be shared at the moment
of its creation or later.
− For Windows shares, it is possible to configure extra SMB settings.
− For Linux/UNIX shares, it is possible to associate it with a host profile and
set the access level: read-only; read/write; or read/write, allow root. The access level is discussed in more detail in the Storage Resources Access module.
• Local and remote data protection features are also supported and can be
configured on the file system at the time of its creation or later.

The Review section of the wizard shows the configuration, and the user can click
Finish to start the creation job.

The Results of the process are displayed on the last page, and the user can click
OK to close the wizard.
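For scripted provisioning, the Unisphere CLI offers an equivalent. A minimal sketch, assuming NAS server ID nas_1 and pool ID pool_1; the IDs, credentials, and optional flags are placeholders and vary by system:

    # Create a 100 GB file system named fs01 on NAS server nas_1
    uemcli -d 192.168.1.100 -u Local/admin -p MyPassword123! /stor/prov/fs create -name fs01 -server nas_1 -pool pool_1 -size 100G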



View File System Properties

To view and modify the properties of a file system, click the edit icon.

Viewing File System Properties

The General tab of the properties window depicts the details about file system
utilization and free space. Also, the size of the file system can be expanded and
shrunk from this tab. The Capacity Alarm Setting link enables you to change the
settings for info, warning, and error alerts when a threshold for used space is
exceeded.

The other tabs of the file system properties window enable the user to:
• Configure and manage local and remote protection.
• For file systems built from traditional pools, the FAST VP tab is available. If a multitiered pool was used to build the file system, then the tiering policy can be changed.
• Configure file system quotas.



• If File-Level Retention was configured at the moment of the file system creation,
an FLR tab enables the user to make changes to the feature settings.
• Enable the event notifications for the file systems to be monitored by the Event
Publishing service.

You can also enable and disable the Data Reduction feature for the storage
resource. When the feature is enabled, the tab displays the achieved Data
Reduction savings that are measured in GB, percentage, and ratio. The Data
Reduction feature is discussed in the Storage Efficiency section.

The File System Quotas feature is discussed in the Storage Efficiency Features
section. The File Level Retention feature is discussed in the Scalability,
Performance, and Compliance Features section.



Activity: Create File Systems

Virtual lab for facilitated sessions:


• Create one file system for SMB file
data.
• Create one file system for NFS file
data.



File Storage Access Overview

Access to file storage on the Dell Unity platform requires a NAS client with connectivity to a NAS server on the storage system.

NAS client access to Unity XT file storage resources (diagram): NAS clients connect over SMB/CIFS and NFS to NAS servers, file systems, and file shares on the Dell Unity XT storage system; ESXi hosts access VMware NFS and vVol (File) datastores.

Connectivity between the NAS clients and NAS servers is over the IP network
using a combination of switches, physical cabling, and logical networking. The Dell
Unity XT front-end ports can be shared, and redundant connectivity can also be
created by networking multiple switches together.

Storage must be provisioned on the storage system for the NAS client.
• NAS servers and file systems are created from storage pools.
• File system shares are configured based on supported sharing protocols
configured on the NAS Server.



• NAS client access to the shared storage resources is over SMB/CIFS and NFS
storage protocols.
• The Dell EMC Unity platform can also provision NFS datastores and vVol (File) datastores for the ESXi hosts.

The NAS client must then mount the shared file system. The mounting of file
systems is completed differently depending on the operating system.

NFS clients have a host configuration profile on the storage system with the
network address and operating systems defined. An NFS share can be created and
associated with the host configuration. The shared file system can be mounted in the
Linux/UNIX system.

SMB clients do not need a host configuration to access the file system share. The
shared file system can be mounted to the Windows system.

ESXi hosts must be configured on the storage system by adding the vCenter
Server and selecting the discovered ESXi host. The host configuration can then be
associated with a VMware datastore in the Host Access tab of the datastore
properties. VAAI enables the volume to be mounted automatically to the ESXi host
after it is presented.



Create Host Configurations for NFS Clients

For Linux/UNIX NFS file system access, the user can configure host access using
a registered host (NAS client), or a list of NAS clients without registration. To create
a configuration profile and register a host, click the + icon and select the profile
type: Host, Subnet, or Netgroup.

Summary page of the Add a Host wizard

Selecting the profile launches the wizard and steps the user through the process to
configure the profile.
• Users must enter the hostname at a minimum.
• While the host operating system information is not needed, providing the information enables a more specific setup.
• To customize access to NFS shares, the Network Address is required. The
Network Address is a name or IP address. No port information is allowed.
• Tenant information is not needed. Tenants are configured at the file system
level.

Other profile options include:


• Subnet access: IP address and subnet mask that defines a range of network
addresses that can access shares.
• Netgroup access: Name of a netgroup that defines a subset of hosts, users, or
domains that can access shares.



The configuration can be reviewed from the Summary page. Select Finish to
complete the configuration.



NFS Shares Management

To manage file system shares created for Linux/UNIX hosts access, select File
from the Storage section. From the NFS Shares page, it is possible to create a
share, view the share properties, modify settings, or delete an existing NFS share.

To manage NFS shares, select Storage > File > NFS Shares

In the example, the vol/fs01 share is selected and the details are shown on the
right.
• The share is on the NAS Server nas01 with a file system fs01.
• The local path to the share is /fs01/ and the exported path to access the share
is: 192.168.64.182:/vol/fs01
• The share name can be a virtual name that is different from the real pathname.
• Access to the share is set to No Access which is the default.

Other options are Read-Only; Read/Write; Read/Write, allow Root; and Read-Only, allow Root.



Creating NFS Shares

To create an NFS share for a file system in Unisphere, go to the File option under
the Storage section and select the NFS Shares tab.

Click the Add (+) icon to create an NFS share.

Select the Add (+) icon from the NFS Shares page. To create a share, follow the
Create an NFS Share wizard steps:
• Select the source file system for the new share.
• Provide an NFS share name and path. The user can also customize the anon
UID (and have it mapped to the uid of a user that has admin rights).
• Configure access to an existing host.
• Review the Summary page selections and Finish the configuration.

The example shows the wizard summary for a file system named fs02. The share
is the virtual name vol/fs02 with the local path of /fs02/. The default host access is
No Access and no customized host access is configured. The export path that is
used is the IP address of the NAS Server followed by the share.
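A comparable share can be created from the Unisphere CLI. A minimal sketch, assuming file system ID fs_2; the IDs, share name, and path are placeholders and should be verified against the Unisphere CLI guide:

    # Create an NFS share named vol/fs02 at the root of file system fs_2
    uemcli -d 192.168.1.100 -u Local/admin -p MyPassword123! /stor/prov/fs/nfs create -name vol/fs02 -fs fs_2 -path /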



Viewing NFS Share Properties

To view and modify the properties of an NFS share, select it from the File > NFS
Shares page and click the pencil icon.

Adding NAS client access to a file system

Two tabs are available for selection.


• The General tab of the properties window is shown on the left. The window provides details about the Share name, NAS Server, file system, local path, and the export path.
• The Host Access tab enables users to configure host access to file-based
storage resources. Host access to file-based resources is configured initially
when created or, later, from the relevant Properties screen.

Based on the type of storage resource or share you set up, you may choose to
configure default access for all hosts. Users can also customize access to
individual hosts.



You can classify host access for file storage under the following categories:
• Default Access – Access permissions for all hosts with network connectivity to
the storage resource.
• Customized Access – Overrides the default access and enables you to set a
specific type of access for individual hosts.



Select Host Access

Granting access to a list of NAS clients

Unregistered hosts (NAS clients) can also be associated with an NFS share.

To associate the NFS share, perform the following operation:


• From the NFS share properties window, select the Host Access tab.
• Select the + icon from the Share properties window.



• Then define the access level for the hosts to be added.
• There are two methods for adding the hosts.

− Select the first option to enter a comma-separated list of unregistered hosts to add.
− Select the second option if you want to pick each one of the NAS clients.
The graphic displays the selection of Host access using a list of unregistered NAS
clients.



Setting Host Access Levels

The default access permissions that are set for the share apply to all hosts with connectivity, unless customized access overrides them for individual hosts.

The following shows the permissions that can be granted to a host when accessing a file system shared over NFS.

• Read-only: Hosts have permission to view the contents of the share, but not to write to it.
• Read/Write: Permission to view and write to the file system, but not to set permission for it.
• Read/Write, allow Root: Permission to read and write to the file system, and to grant and revoke access permissions. For example, enables permission to read, modify, and execute specific files and directories for other login accounts that access the file system.
• Read-only, allow Root: Hosts have permission to view the contents of the share, but not write to it. The root of the NFS client has root access to the share.
• No Access: No access is permitted to the storage resource.



Connecting Host to Shared NFS File System

• When mounting the share, specify the network address of the NAS server and
the export path to the target share.
− Share address is a combination of NAS server network address and the
export path to the target share.
• To connect the shared NFS file system to the Linux/UNIX host, use the mount
command.
− Linux command: mount -t nfs NAS_server:/<share_name> <directory>
• After mounting the share to the host, set the directory and file structure of the
share.

− Set the user and group permissions on the share's directories and files.
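For example, a minimal sketch of mounting and preparing the fs02 share from the earlier example; the NAS server address 192.168.1.50, the mount point /mnt/fs02, and the user and group names are illustrative assumptions:

    # Create a local mount point and mount the NFS export (illustrative names)
    mkdir -p /mnt/fs02
    mount -t nfs 192.168.1.50:/fs02 /mnt/fs02

    # Create a directory structure and set ownership and permissions on it
    mkdir -p /mnt/fs02/projects
    chown user1:group1 /mnt/fs02/projects
    chmod 775 /mnt/fs02/projects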



Activity: NFS File Storage Access

Virtual lab for facilitated sessions:


• Configure an NFS share on a Dell EMC
Unity file system.
• Create a top-level share of the file
system for administrator access.
• Create a lower-level subfolder share,
configure permissions to a specified
user community, and test file storage
access.



SMB Shares Management

To manage file systems shares created for Windows host access, select File from
the STORAGE section in Unisphere.

Manage SMB Shares

Select the SMB Shares tab. The SMB Shares page shows the list of created
shares, with the NAS server, its file system, and its local path.
• From the SMB Shares page, create a share, view its properties, modify some
settings, or delete an existing SMB share.
• To view the details about a share, select the share from the list. The details are
shown on the right.



Create SMB Shares

New SMB shares for a file system can be created from the SMB Shares page.

Summary page of the Create an SMB Share wizard

To launch the Create an SMB Share wizard, select the Add (+) icon.

Follow the wizard steps:


• Select the supported file system.
• Input a name and description for the share.
• Configure optional advanced SMB properties.

− These features are optional and are explained on the next page.
Access level permissions for the SMB shares are controlled by the network access
controls set on the shares. No host configuration is necessary.



View SMB Share Properties

View and modify the properties of an SMB share by selecting the share and the
edit icon.

Viewing SMB Shares.

The General tab of the properties window provides details about the Share name
and location of the share: NAS Server, file system, Local Path, and the Export path.

The Advanced tab enables the configuration of advanced SMB share properties:
• Continuous availability gives host applications transparent, continuous access
to a share following a failover of the NAS server.
• Protocol encryption enables SMB encryption of the network traffic through the
share.
• Access-Based Enumeration filters the list of available files on the share to
include only the ones to which the requesting user has read access.
• Branch Cache Enabled copies content from the share and caches it at branch offices, enabling client systems at branch offices to access the content locally rather than over the WAN.



• Distributed file system (DFS) enables the user to group files on different
shares by transparently connecting them to one or more DFS namespaces.
• Offline Availability configures the client-side caching of offline files.



Connect Host to Shared SMB File System

Dell CIFS Management snap-in software consists of a set of Microsoft Management Console (MMC) snap-ins.

• Map the share using the host user interface or CLI commands.
− Specify the full Universal Naming Convention (UNC) path of the SMB share
on a NAS server.
− In Windows Explorer, select Tools > Map Network Drive.
o Specify a drive letter and the UNC path to the file system share on the NAS server.
− Command prompt
o net use [device]: \\NAS_server\share_export_path
• Authentication and authorization settings are maintained on the Active Directory
server for NAS servers in the Windows domain.
− Settings are applied to files and folders on the SMB file systems
• Dell recommends the installation of the Dell CIFS Management snap-in on a
Windows system.

− The snap-ins are used to manage home directories, security settings, and
virus-checking on a NAS Server.
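As an illustration, a minimal sketch of mapping and removing a share from a Windows command prompt; the NAS server name nas01 and the share name share01 are illustrative assumptions:

    rem Map drive Z: to the SMB share on the NAS server (illustrative names)
    net use Z: \\nas01\share01

    rem Remove the mapping when it is no longer needed
    net use Z: /delete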



Activity: SMB File Storage Access

Virtual lab for facilitated sessions:


• Configure an SMB share on a Dell EMC
UnityVSA file system.
• Create a hidden share to the top level of
the file system for administrator access.
• Create a lower-level subfolder share,
configure permissions to a specified
user community, and test file storage
access.
• Create a lower-level subfolder share to
a Dell EMC UnityVSA file system using
the Windows computer Management
utility and test file storage access.



Provision VMware Datastores



VMware Storage

VMware datastore types: VMFS, NFS, and vVol

Specialized VMware storage resources that are called datastores are provisioned from the Dell Unity XT platform. Unisphere supports the configuration of host profiles that discover ESXi hosts that are managed by a vCenter server.

A VMware datastore is a storage resource that provides storage for one or more
VMware vSphere ESXi hosts.
• The datastore represents a specific quantity of storage capacity made available
from a particular Dell Unity XT LUN (Block) or NAS file system (File).
• The storage system supports storage APIs that discover and mount datastores
that are assigned to ESXi hosts within the VMware environment.

The provisioning of the VMware datastores using the GUI or CLI interfaces involves
defining the datastore type:
• VMFS (Block)
• NFS (File)



• vVol (Block)
• vVol (File)

A storage pool must be associated with a Capability Profile to enable VMware vVols-based storage provisioning. Capability profiles describe defined storage characteristics so that a user-selected policy can be mapped to a set of compatible vVol datastores.



VMware Datastores Management

To manage a VMware datastore, select VMware from the Storage section.

Unisphere Datastores page

From the Datastores tab of the VMware page, the storage administrator can create a datastore, view its properties, modify some of its settings, and delete it. The Datastores tab shows the list of created VMware datastores with their size in GB, the allocated and used capacity, and the type of datastore. The page also shows the storage pool that is associated with each datastore, and the NAS server used for NFS and vVol (File) datastores.

Details about a datastore are displayed on the right pane whenever a datastore is selected from the list.
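The same resources can be listed from the command line. A minimal sketch using UEMCLI; the management address and credentials are illustrative, and the /stor/prov/vmware object paths are an assumption based on Unisphere CLI conventions (check the Unisphere CLI User Guide for the exact syntax):

    # List the provisioned VMFS (Block) datastores (object path assumed)
    uemcli -d 192.168.1.100 -u Local/admin -p MyPassword! /stor/prov/vmware/vmfs show

    # List the provisioned NFS (File) datastores (object path assumed)
    uemcli -d 192.168.1.100 -u Local/admin -p MyPassword! /stor/prov/vmware/nfs show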



Provision VMFS Datastores

To create a datastore in Unisphere, you must click the “add” link from the
Datastores page to launch the Create VMware Datastore wizard.

Tiering policies: Start High then Auto-Tier (default), Auto-Tier, Highest Available Tier, Lowest Available Tier

Summary page of the Create VMware Datastore wizard

Follow the steps of the wizard for creating the datastore.


• To provision a VMFS (Block) datastore, select the Block option on the Type
section of the wizard. Then provide a name and a description for the new
datastore.
• Select the storage pool to create the datastore, and the total capacity to allocate
to the storage object.
− The datastore capacity can be expanded later if there is sufficient primary
storage available.
• If a multitiered pool is used to build the datastore, select the tiering policy. The
policy is used for datastore data relocation.
• If not defined otherwise, a datastore is thin provisioned by default. The only time
an administrator can define the storage object as thick provisioned is at the
moment it is created by clearing the Thin parameter setting. This setting cannot
be changed later.
• Enable Data Reduction if the selected pool includes enough SAS Flash drives,
and if thin provisioning is set for the datastore. The feature can be enabled on
the storage object for new writes.



• A storage administrator can also associate Host I/O limit policies with the VMFS datastore to optimize ESXi host access.
• In the Configure Access window, specify the hosts to access the datastore.
ESXi hosts can be granted access at the time the datastore is provisioned or
later.
• The storage administrator can also enable local or remote protection for the
datastore.

Review the configuration settings for the datastore on the Summary step and select
Finish to start the job creation.



Provision NFS Datastores

In a similar way, when creating an NFS datastore in Unisphere, the user must launch the Create VMware Datastore wizard.

Tiering policies: Start High then Auto-Tier (default), Auto-Tier, Highest Available Tier, Lowest Available Tier

Summary page of the Create VMware Datastore wizard

Follow the steps of the wizard for creating the datastore.


• Before provisioning a File datastore, the storage administrator must have previously created at least one NAS server with support for the NFS protocol in the system. The storage administrator must associate the NAS server with the new datastore, select the storage pool, and set the total capacity to allocate.
• If a multitiered pool is used to build the datastore, a storage administrator can
select the tiering policy for the datastore data relocation.
− The data relocation is covered in the FAST VP lesson of the Storage
Efficiency module.
• If not defined otherwise, a datastore is thin provisioned by default. To set thick
provisioning, the Thin check box must be cleared.
• Enable Data Reduction if the selected pool has enough Flash drives and thin provisioning is set.
− The Data Reduction feature is discussed in more detail in the Storage Efficiency section.



• The Host IO Size parameter can be used to match storage block size with the
I/O size of the application.
− This configuration maximizes the performance of the VMware NFS
datastores.
• In the Configure Access window, the user can specify the ESXi hosts that can
access the datastore.
• The storage administrator can also enable local or remote protection for the
datastore.

Review the configuration settings for the datastore on the Summary step and select
Finish to start the job creation.

The Results page shows the conclusion of the process with a green check mark for
a successful operation.



View VMware Datastore Properties

To view and modify the properties of a VMware datastore, select the View/Edit (pencil) icon.

Datastore properties window

The General tab of the properties window depicts the details about the datastore,
including its capacity utilization and free space. You can expand the size of the
datastore from this page.

For an NFS datastore, the General tab also shows the Host I/O size, the file
system format, the NAS server used, and the NFS export path.

For a VMFS or vVol (Block) datastore, the user can modify the datastore name and
change the SP ownership to balance the workload between SPs.



VMware Datastores Access

Before an ESXi host can access the provisioned storage, some pre-configuration
must be done to establish host connectivity to the storage system.

The ESXi host must have an adapter to communicate over the storage protocol.
• In Fibre Channel environments, use a host bus adapter (HBA).
• For the iSCSI and NFS protocols, a standard NIC can be used.

For VMFS and vVol (Block) datastore provisioning:

• Use multipathing software to manage paths to the storage system.
• In a Fibre Channel environment, users must configure zoning on the FC switches.
• In iSCSI environments, initiator and target relationships must be established.

For NFS and vVol (File) datastore provisioning:

• At least one NAS server with support for the NFS protocol must be created in the storage system.
• The ESXi host network interface must be able to connect to the NAS server IP interface.

Having completed the connectivity between the host and the array, you are in a position to provision datastores to the ESXi host.

A host configuration must be created and associated with the provisioned datastores with a defined level of access.
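As an illustration, a minimal sketch of preparing an ESXi host for iSCSI connectivity from the ESXi shell; the adapter name vmhba64 and the target address are illustrative assumptions:

    # Enable the ESXi software iSCSI initiator
    esxcli iscsi software set --enabled=true

    # Add the Unity iSCSI interface as a dynamic discovery (send targets) address
    esxcli iscsi adapter discovery sendtarget add -A vmhba64 -a 192.168.10.10:3260

    # Rescan the adapter so that newly presented devices are discovered
    esxcli storage core adapter rescan -A vmhba64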



ESXi Host Configuration Profile

Unisphere provides VMware discovery capabilities to collect virtual machine and datastore storage details from vSphere and display them in the context of the storage system. This capability automates the iSCSI target discovery for ESXi hosts to access the storage.

Adding an ESXi host configuration

In Unisphere, a storage administrator can configure host access to storage provisioned for a VMware datastore. The system automatically connects the ESXi host and configures the relevant datastore access. The storage system automatically updates the ESXi host with any changes or removal of the datastore in Unisphere.

To create an ESXi host configuration profile in Unisphere, follow these steps:


1. Under Access, select VMware and go to the vCenters page.
2. Select the Add (+) icon to launch the Add vCenter wizard.
3. On the Find ESXi Hosts step of the wizard, enter the vCenter server authentication credentials, and click Find.



• The wizard automatically discovers the ESXi hosts when a vCenter Server is
added.
• The automation tasks fail if manual host configurations are created for the
ESXi hosts.
• From the list of discovered entries, select the relevant ESXi hosts, and click
Next.
4. To register the Dell Unity system as a VASA provider with the vCenter, select Register VASA Provider and enter the storage system Unisphere login credentials.

The Summary page enables the storage administrator to review the ESXi host profiles and conclude the configuration.



VMware Host Access to Provisioned Datastore

Host access to the VMware datastores over FC, iSCSI, or NFS protocol is defined
when selecting the host configuration to associate with the provisioned datastore.
In Unisphere, this operation can be accomplished when creating the datastore, or
later from the storage resource properties or the ESXi host properties window.

Grant ESXi host access to a provisioned datastore

From the datastore properties window in Unisphere, go to the Host Access tab, and follow these steps:
1. Select the Add icon to open the Select Host Access window.
2. Select one or more wanted ESXi hosts from the filtered list of host configuration profiles and select OK.
3. The newly added ESXi host is displayed in the list of hosts with granted access to the datastore. For VMFS datastores, the host is given an automatic Host LUN ID (HLU).
4. Optionally change the HLU assigned to the VMFS datastore.



When creating an NFS datastore, specify the version of the NFS protocol. Mount
the storage resource to the ESXi host using NFSv3 or NFSv4.

To configure customized host access for the NFS datastore, set one of these
permission levels:
• No access: No access is permitted to the storage resource.
• Read-only: Permission to view the contents of the storage resource or share,
but not to write to it.
• Read/write: Permission to read and write to the NFS datastore or share. Only
hosts with "Read/Write" access are allowed to mount the NFS datastore using
NFSv4 with Kerberos NFS owner authentication.
• Read/write, enable Root: Permission to read and write to the file system, and
grant and revoke access permissions. For example, permission to read, modify
and execute specific files and directories for other login accounts that access
the file system. Only hosts with "Read/Write, enable Root" access are allowed
to mount the NFS datastore, using NFSv4 when NFS owner is set to root
authentication.



Discovered Storage Device in vSphere

After a VMware datastore is created and associated with an ESXi host profile,
check to see if the block storage resource is discovered in the vSphere server.

Discovered Unity storage device

Open a vSphere Web Client session to the vCenter Server.


• Select the ESXi server from the list of hosts.
• Open the Configure tab.
• Expand the Storage section, and select Storage Devices.

New storage devices are displayed on the list as attached to the host.

The device Details section of the page displays the device properties and all the
created paths for the provisioned block storage.
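The same details can be pulled from the ESXi shell; a minimal sketch (device identifiers vary per system):

    # List the block storage devices attached to the host
    esxcli storage core device list

    # List the paths to each device to verify multipathing
    esxcli storage core path list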



Automatically Created Datastores

From the vSphere Web Client, select the Datastores option under the Configure
section or the Datastores tab.

Verify that the datastores provisioned in Unisphere were automatically created and
presented to the ESXi host.

Unity provisioned datastores automatically mounted in vSphere



Activity: VMware Datastore Access

Virtual lab for facilitated sessions:


• Add a vCenter server to Unisphere
and verify the managed ESXi hosts
are discovered.
• Create VMware vStorage VMFS and
NFS datastores in Unisphere.
• Verify that the newly created
datastores are available to the ESXi
host for use.



VMware Virtual Volumes (vVols)

VMware vVols architecture: the vCenter Server with Storage Policy Based Management, VMware vSphere, protocol endpoints, the VASA provider, the vVols data path, and a storage container holding vVols

VMware Virtual Volumes (vVols) are storage objects that are provisioned
automatically by a VMware framework to store Virtual Machine (VM) data.

The Dell Unity XT platform supports the creation of storage containers for both
Block and File VMware vVol datastores deployments.

Virtual volume support enables storage policy-based management in the vSphere environment.
• Storage profiles that are aligned with published capabilities are used to provision virtual machines.
• The VMware administrator can build storage profiles using service levels, usage tags, and storage properties.



A VM-granular integration facilitates the offloading of data services with support to
snapshots, fast clones, full clones, and reporting on existing vVols and affected
virtual machines.

Compatible arrays such as the Dell Unity XT storage systems can communicate with the ESXi server through VASA APIs based on the VASA 2.0 protocol. The communication is established through the management network using HTTPS.

Protocol Endpoints provide access to the Unity XT provisioned storage containers (vVol datastores) using the iSCSI and FC (Block) and NFS (File) protocols.



What Is Stored in vVol Datastores?

Virtual volumes are storage objects that are provisioned automatically on a vVol
datastore and store VM data.

These objects are different from LUNs and file systems and are subject to their own set of limits.

Virtual Volume (vVol) – Description

• Data – Stores data such as VMDKs, snapshots, clones, fast-clones, and so on. At least one Data vVol must be created per VM to store its hard disk.

• Config – Stores standard VM-level configuration data such as .vmx files, logs, NVRAM, and so on. At least one Config vVol must be created per VM to store its .vmx configuration file.

• Swap – Stores a copy of the memory pages of a VM when the VM is powered on. Swap vVols are automatically created and deleted when VMs are powered on and off.

• Memory – Stores a complete copy of the memory on disk of a VM when suspended, or for a with-memory snapshot.



Provisioning vVols Workflow - Storage

Provisioning vVols workflow: the storage administrator creates storage pools, configures capability profiles, and creates storage containers; the VM administrator adds the vendor provider, creates vVol datastores, creates storage policies, and provisions VMs to storage policies. Compliant policies complete VM provisioning, while noncompliant policies alert the administrator.

Provisioning Virtual Volumes (vVols) involves tasks that are performed on the storage system and others that are performed in the vSphere environment.

First, the storage administrator must create the storage pools to associate with the
VMware Capability Profiles.

Capability Profile definitions include:


• Thin or thick space efficiency
• User-defined set of strings
• Storage properties (drive type and RAID level)
• Service level

Then the storage containers can be created by selecting the storage pool and
associated Capability Profile.



Provisioning vVols Workflow - Storage (Associate Capability
Profiles with Storage Pools)

Workflow Step

Capability profiles can be created at the time of pool creation (recommended), or can be added to an existing pool later.
• Capability profiles must be created before you can create a vVol datastore.

Provisioning vVols workflow: a capability profile captures the characteristics of a pool (drive type, RAID, FAST Cache, FAST VP, space efficiency) and service levels (Gold, Silver, Bronze), which a storage policy (capacity, performance, availability, data protection, security) is matched against when a vVol datastore is provisioned.

A Capability Profile is a set of storage capabilities for a vVol datastore. These capabilities are derived based on the underlying pools for the vVol datastore.

Capability profiles define storage properties such as drive type, RAID level, FAST
Cache, FAST VP, and space efficiency (thin, thick). Also, service levels are
associated with the profile depending on the storage pool characteristics. The user
can add tags to identify how the vVol datastores that are associated with the
Capability Profile should be used.

Capability Profiles Management

To manage a capability profile, select VMware from the Storage section, and then
select Capability Profiles from the top submenu.


From the Capability Profiles page, it is possible to create a capability profile, view
its properties, modify some settings, and delete an existing capability profile.

The Capability Profile page shows the list of created VMware Capability Profiles and the pools they are associated with.

To see details about a capability profile, select it from the list and its details are
displayed on the right-pane.

Creating Capability Profiles

To create a capability profile, click the “add” link from the Capability Profiles page to
launch the Create VMware Capability Profile wizard.


Follow the steps of the wizard to set the parameters for creating the capability
profile:
• Enter the capability profile name and description.
• Select the storage pool to associate the capability profile with.
• Enter any Usage Tags to use to identify how the associated vVol datastore
should be used.
• Then review the capability profile configuration, and click Finish to start the
operation. The results of the process are displayed.

Only after a capability profile is associated with a storage pool can you create a vVol datastore.

View/Modify Properties

To view and modify the properties of a capability profile, select it from the list, and
click the edit icon.


The Details tab of the properties window enables you to change the name of the
capability profile. Also, the Universally Unique Identifier (UUID) associated with the
VMware object is displayed here for reference.

The Constraints tab shows the space efficiency, service level, and storage
properties that are associated with the profile. The user can add and remove user
tags.



Provisioning vVols Workflow - Storage (Create Storage
Containers)

Workflow Step

Virtual volumes reside in the vVol datastores, also known as storage containers.
A vVol datastore is associated with one or more capability profiles.

Provisioning vVols workflow: create the storage container (vVol datastore) from a storage pool and its associated capability profile.

There are two types of vVol datastores: vVol (File) and vVol (Block).
• vVol (File) are virtual volume datastores that use NAS protocol endpoints for
I/O communication from the host to the storage system. Communications are
established using the NFS protocol.
• vVol (Block) are virtual volume datastores that use SCSI protocol endpoints for
I/O communication from the host to the storage system. Communications are
established using either the iSCSI or the FC protocols.

The vVol datastore is displayed as compatible storage in vCenter or the vSphere Web Client if the associated capability profiles meet the VMware storage policy requirements.



Creating vVol Datastores

To create a vVol datastore, the user must launch the Create VMware Datastore
wizard, and follow the steps.


The user must define the type of vVol datastore to create: vVol (File) or vVol
(Block). If provisioning a vVol (File), a NAS server must have been created in the
system. The NAS server must be configured to support the NFS protocol and
vVols. Also, a capability profile must have been created and associated with a pool.

A name must be set for the datastore, and a description to identify it.

The user can then define the capability profiles to use for the vVol datastore.
The user can also determine how much space to consume from each of the pools
associated with the capability profiles. The capacity can be defined from the
Datastore Size (GB) column.

The datastore can also be associated with an ESXi host on another step of the
wizard.

The Summary page enables the user to review the configuration of the vVol
datastore before clicking Finish to start the creation job. The results of the process
are displayed on the final page of the wizard.



Protocol Endpoints

Protocol endpoints in the vVols architecture: storage policy-based management (capacity, performance, availability, data protection, security), the VASA provider, and virtual volumes (VM = Virtual Machine, PE = Protocol Endpoint)

Protocol Endpoints or PEs establish a data path between the ESXi hosts and the
respective vVol datastores. The I/O from Virtual Machines is communicated
through the PE to the vVol datastore on the storage system.

A single protocol endpoint can multiplex I/O requests from many virtual machine
clients to their virtual volumes.

The Protocol Endpoints are automatically created when a host is granted access to
a vVol datastore.
• NAS protocol endpoints are created and managed on the storage system and
correspond to a specific NFS-based NAS server.



− A File vVol is bound to the associated NAS PE every time that virtual
machine is powered on. When the VM is powered off, the vVol is unbound
from the PE.
• SCSI protocol endpoints use any iSCSI interface or Fibre Channel connection for I/O. Two SCSI PEs are created for every ESXi host and vVol datastore (storage container) pair.

− The block vVol is bound to the associated SCSI PE every time that the VM
is powered on. When the VM is powered off, the PE is unbound.
− SCSI protocol endpoints simulate LUN mount points that enable I/O access
to vVols from the ESXi host to the storage system.
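A minimal sketch of verifying protocol endpoints from the ESXi shell, assuming an ESXi release that includes the vvol namespace:

    # List the protocol endpoints visible to the host
    esxcli storage vvol protocolendpoint list

    # List the VASA providers known to the host
    esxcli storage vvol vasaprovider list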



Provisioning vVols Workflow – vSphere Environment

Provisioning vVols workflow: storage administrator tasks (create storage pools, configure capability profiles, create storage containers) and VM administrator tasks (add the vendor provider, create vVol datastores, create storage policies, provision VMs to storage policies).

The Dell Unity system must be registered as a storage provider on the vCenter
Server to use the vVol datastores. The VM administrator performs this task using
the IP address or FQDN of the VASA provider.

The VM administrator can then create storage policies in vSphere. VM storage policies define which vVol datastores are compatible based on the capability profiles that are associated with them. The administrator can provision the VM and select the storage policy and the wanted vVol datastore.

After the virtual machines are created using the storage policies, users can view
the volumes that are presented on the Virtual Volumes page in Unisphere.



Provisioning vVols Workflow - vSphere Environment (Add
Storage Provider)

Workflow Step

The Dell Unity XT storage system or the Dell UnityVSA must be added as a
storage provider to the vSphere environment. The storage provider enables the
access to the vVols provisioned storage for the creation of storage policy-based
virtual machines.

Provisioning vVols workflow: add the storage provider so that storage policy-based management can reach the VASA provider.

URL Format - https://<Unity MGMT port IP address>:8443/vasa/version.xml

Register VASA Provider

The vSphere administrator must launch a vSphere Web Client session to the
vCenter Server and open the Hosts and Clusters view.

Select the vCenter Server on the left pane, and from the top menu select the
Configure option and the Storage Providers option from the More submenu.



URL Format - https://<Dell EMC Unity MGMT port IP address>:8443/vasa/version.xml


To add the Unity XT or UnityVSA system as a VASA vendor, open the New
Storage Provider window by clicking the Add sign.
• Enter a name to identify the entity.
• Type the IP address or FQDN of the VASA provider (the Dell Unity system) in the URL field. The URL is a combination of the Dell Unity XT or UnityVSA management port IP address, the network port, and the VASA version XML path. Ensure that you use the full URL format shown above.
• Next, type the credentials to log in to the storage system.

The first time the array is registered, a warning message may be displayed for the certificate. Click Yes to proceed and validate the certificate.
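Before registering, it can help to confirm that the VASA URL is reachable from a management host. A minimal sketch; the management IP address is an illustrative assumption, and -k skips validation of the self-signed certificate:

    # Fetch the VASA version document from the Unity management interface
    curl -k https://192.168.1.100:8443/vasa/version.xml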



Provisioning vVols Workflow - vSphere Environment (Add vVol
Datastores)

Workflow Step

Next step is the creation of datastores in the vSphere environment using the vVol
datastores that were created in the storage system.

Provisioning vVols workflow: add the vVol datastores that were created on the storage system to the vSphere environment.

Adding vVol Datastores

When vVol datastores (containers) are associated with an ESXi host in Unisphere,
they are seamlessly attached and mounted in the vSphere environment.


vVol datastores that are created but not associated with an ESXi host still show as available for use in the vSphere environment. The VMware administrator must manually mount these storage containers as datastores, as explained here.

Open the Hosts and Clusters view, and select the ESXi host from the left pane from
the vSphere Web Client.

The Datastores page is available by selecting the Datastores tab. The same page
can also be opened by selecting the Configure tab and the Datastores option on
the Storage section.



Open the New Datastore wizard using the Add sign link.
• Besides the VMFS and NFS types, the wizard now has a VVol option that can
be selected.
• Enter a name for the datastore, and select one of the available VVol datastores
from the list.

The new vSphere datastore is created.



Provisioning vVols Workflow - vSphere Environment (Create
Storage Policies)

Workflow Step

Create the storage policies for virtual machines. These policies map to the
capability profiles associated with the pool that was used for the vVol datastores
creation.

Provisioning vVols workflow: create VM storage policies that map to the capability profiles associated with the pools used for the vVol datastores.



Create VM Storage Policy


Launch the Create New VM Storage Policy wizard from the VM Storage Policies
page.
• Enter a name for the policy.
• Select the EMC.UNITY.VVOL data services rule type.
• Add the rules that a datastore must comply with, such as usage tags, service levels, and storage properties. In the example, only the usage tag was used.
• The next step shows all the available mounted datastores that are categorized
as compatible and incompatible. The administrator must select the datastore
that complies with the rules that were selected on the previous step.

After the wizard is complete, the new policy is added to the list.



Provisioning vVols Workflow - vSphere Environment
(Provision VMs to Storage Policies)

Workflow Step

After the storage policies are created, the vSphere administrator can create new virtual machines using these policies.

Provisioning vVols workflow: provision virtual machines to the storage policies.

Create VM from Storage Policy

To create a Virtual Machine from the storage policies, open the Hosts and Clusters
tab. Then from the vSphere Web Client session, select the ESXi host from the left
pane.


From the Actions drop-down menu, select New Virtual Machine, and then the New Virtual Machine... option.

The wizard is launched, and the administrator can select to create a new virtual
machine.
• Enter a name, select the folder, select the ESXi host on which the virtual
machine is created.
• Then on the storage section of the wizard, the administrator must select the VM
Storage Policy that was previously created from the drop-down list.
• The available datastores are presented as compatible and incompatible. The
administrator must select a compatible datastore to continue.
• The rest of the wizard steps instruct administrators to select the following parameters:

− The minimum vSphere version compatibility
− The guest operating system for the virtual machine
− The option to customize the hardware configuration.

After completion, the wizard displays the new virtual machine.



VM Virtual Volumes in Unisphere

Virtual Volumes Management

To manage virtual machine vVols, select VMware from the Storage section, and
then select Virtual Volumes from the top submenu.

From the Virtual Volumes page, it is possible to view the whole list of vVols stored
in the Dell Unity XT storage containers. The list shows the virtual machine each
vVol relates to, the storage container (datastore) used for VM provisioning, and the
capability profile used for storage policy driven provisioning.

The Dell Unity XT OE algorithm evenly balances the number of vVols across the SPs during virtual machine provisioning. When new virtual volumes are created, the number of resources on each SP is analyzed, and the new vVols are placed to balance the counts. To view this information on the list, you must customize the query to include the SP owner column (shown in the example with a yellow box).

The Virtual Volumes page also allows the user to view the properties of, or delete, an individual virtual volume.

Virtual Volume Details

To see details about a virtual volume, select it from the list and its details are
displayed on the right-pane.



Virtual Volume Properties

To view the properties of a virtual volume, select it from the list, and click the edit icon.



The General tab of the properties window is common for all types of virtual volumes. The tab shows the type of VMware object, the Universally Unique Identifier (UUID) associated with the VMware object, and the capacity utilization. The tab also displays the datastore where the volume is stored, the capability profile associated with it, and the storage policy used for the VM provisioning. Virtual machine reference information is also shared on the page.

The Binding Details tab is available on the properties page of all types of virtual
volumes. The tab displays information about the VMware protocol endpoints that
are associated with the access to the provisioned storage.

The Snapshots tab is displayed only on the properties page of a data virtual volume. Native snapshots of individual VMDK (data) vVols are supported in Dell EMC Unity XT storage systems. The Snapshots tab displays the list of vVol snapshots that are created either in vSphere or Unisphere. The user can create manual snapshots of the VMDK vVol and restore them.

The Host I/O Limit tab is displayed only on the properties page of a data virtual
volume. The tab collects information about the bandwidth and throughput that is
consumed by the ESXi host to storage object access.



Demonstration - vVol Datastores

This demo covers how to provision vVol (File) and vVol (Block) datastores on a Dell Unity XT system or a Dell UnityVSA.

The video also demonstrates how to check details of the datastore properties and how to perform an expansion of the datastore.

The demo includes setting the storage system as a VASA provider to use the
provisioned storage container for storage policy-based provisioning of virtual
machines.



Storage Provisioning Key Points

5. Storage Resources
a. Dell Unity XT storage resources are categorized in storage pools, block storage, file storage, and VMware datastores.
b. Dell Unity XT systems share pools with all the resource types: file systems, LUNs, and the VMware datastores.
c. Storage pools are created using the SAS Flash, SAS, and NL-SAS drives.
6. Dynamic Pools
a. Dynamic Pools are supported on Dell Unity XT physical hardware only.
b. All pools that are created with Unisphere on Dell Unity XT AFA and HFA systems are dynamic pools by default.
c. Dynamic pools reduce rebuild times by having more drives engaged in the rebuild process.
- Data is spread out through the drive extent pool.
- The system uses spare space within the pool to rebuild failed drives.
d. A dynamic pool can be expanded by one or more drives up to the system limits.
7. Traditional Pools
a. The Dell UnityVSA uses traditional storage pools by default.
b. Traditional pools can be created on Dell Unity XT systems using the UEMCLI or REST API interfaces.
c. Traditional pools can be homogeneous (built from a single tier) or heterogeneous (multi-tiered).
8. Block Storage Provisioning
a. Block storage resources that are supported by the Dell Unity XT platform include LUNs, Consistency Groups, and Thin Clones.
b. LUNs created from traditional heterogeneous pools use FAST VP tiering policies for data relocation: Start High then Auto-Tier, Auto-Tier, Highest Available Tier, and Lowest Available Tier.


c. Dell Unity XT front-end interfaces must be prepared for supported host connectivity protocols.
d. Hosts must have an adapter to communicate over the storage protocol: HBA (Fibre Channel) or NIC (iSCSI).
e. Connected hosts must have an initiator (iSCSI IQN or FC WWN) that is
registered on the Dell EMC Unity XT storage system.
- Host configurations are profiles of the hosts that access storage
resources using the Fibre Channel or iSCSI protocols.
- Before a host can access block storage, you must define a configuration
for the host and associate it with the provisioned storage resource.
- A host group is a logical container which groups multiple hosts and block
storage resources.
9. File Storage Provisioning
a. File storage in the Dell Unity XT platform is a set of storage resources that provide file-level storage over an IP network.
b. A NAS Server provides file data transfer and connection ports for users, clients, and applications that access file systems.
c. SMB and NFS shares are created for the file systems and provided to Windows, Linux, and UNIX clients.
- NFS shares use host configurations to grant access to Linux and UNIX NAS clients.
- NFS clients have a host configuration profile on the storage system with the network address and operating systems defined.
- SMB clients do not need a host configuration to access the file system share.
10. VMware Datastores Provisioning
a. The Dell Unity family of storage systems supports the provisioning of specialized VMware storage resources called datastores.
b. The supported datastore types that can be provisioned in Unisphere or through CLI interfaces are VMFS (Block), NFS (File), vVol (Block), and vVol (File).


c. Capability profiles must be associated with storage pools used for vVol
datastores, in order to enable the storage policy-based provisioning of
Virtual Machines.
d. The ESXi host must have an adapter to communicate over the storage protocol: HBA (Fibre Channel) or standard NIC (iSCSI or NFS).
e. Host configurations for ESXi hosts are configured by adding the vCenter
Server and selecting the discovered hypervisor. The host configuration can
then be associated with a VMware datastore.
f. Protocol Endpoints establish a data path between the ESXi hosts and the
respective vVol datastores.

For more information, see the Dell EMC Unity Family Configuring
Hosts to Access Fibre Channel (FC) or iSCSI Storage, Dell EMC
Unity Family Configuring SMB File Sharing, Dell EMC Unity
Family Configuring NFS File Sharing, Dell EMC Unity Family
Configuring vVols, and Dell EMC Unity Family Configuring
Hosts to Access VMware Datastores on the Dell Technologies
Support site.



FAST Cache


FAST Cache Overview

FAST Cache on a Dell Unity XT Hybrid system, using SAS Flash 2 drives in a RAID 1 pair.

• FAST Cache is a performance feature for Hybrid Unity XT systems that extends
the existing caching capacity.
• FAST Cache can scale up to a larger capacity than the maximum DRAM Cache
capacity.
• FAST Cache consists of one or more RAID 1 pairs [1+1] of SAS Flash 2 drives.

− Provides both read and write caching.


o For reads, the FAST Cache driver copies data off the disks being
accessed into FAST Cache.
o For writes, FAST Cache effectively buffers the data waiting to be written
to disk.
• At a system level, FAST Cache reduces the load on back-end hard drives by
identifying when a chunk of data on a LUN is accessed frequently.
• The system copies the frequently accessed data temporarily to FAST Cache.
• The storage system then services any subsequent requests for this data faster
from the Flash disks that make up FAST Cache.


− FAST Cache reduces the load on the disks that form the LUN, which ultimately contain the data.
− The data is flushed out of cache when it is no longer accessed as frequently
as other data.
− Subsets of the storage capacity are copied to FAST Cache in chunks with 64 KB granularity.
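As a rough sense of scale (a back-of-the-envelope calculation assuming binary units), an 800 GB FAST Cache tracked in 64 KB pages corresponds to 800 × 1024 × 1024 / 64 = 13,107,200 pages in the Memory Map.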


FAST Cache Components

FAST Cache components: host I/O flows through the Policy Engine and the Memory Map in Multicore Cache to the FAST Cache SSDs and the back-end HDDs.

Policy Engine - The FAST Cache Policy Engine is the software which monitors
and manages the I/O flow through FAST Cache. The Policy Engine keeps
statistical information about blocks on the system and determines what data is a
candidate for promotion. A chunk is marked for promotion when an eligible block is
accessed from spinning drives three times within a short amount of time. The block
is then copied to FAST Cache, and the Memory Map is updated. The policies that are defined in the Policy Engine are system-defined and cannot be modified by the user.

Memory Map - The FAST Cache Memory Map contains information of all 64 KB
blocks of data currently residing in FAST Cache. Each time a promotion occurs, or
a block is replaced in FAST Cache, the Memory Map is updated. The Memory Map
resides in DRAM memory and on the system drives to maintain high availability.
When FAST Cache is enabled, SP memory is dynamically allocated to the FAST
Cache Memory Map. When an I/O reaches FAST Cache to be completed, the
Memory Map is checked. The I/O is either redirected to a location in FAST Cache
or to the pool to be serviced.


FAST Cache Operations

• Host read/write operation


− During FAST Cache operations, the application gets the acknowledgment
for an I/O operation after it is serviced by FAST Cache. FAST Cache
algorithms are designed such that the workload is spread evenly across all
the Flash drives that have been used for creating the FAST Cache.
• FAST Cache promotion
− During normal operation, a promotion to FAST Cache is initiated after the
Policy Engine determines that 64 KB block of data is being accessed
frequently. For consideration, the 64 KB block of data must have been
accessed by reads and/or writes multiple times within a short amount of
time.
• FAST Cache flush
− A FAST Cache Flush is the process in which a FAST Cache page is copied
to the HDDs and the page is freed for use. The Least Recently Used [LRU]
algorithm determines which data blocks to flush to make room for the new
promotions.
• FAST Cache cleaning

− FAST Cache performs a cleaning process which proactively copies dirty pages to the underlying physical devices during times of minimal back-end activity.


Supported Drives and Configurations

FAST Cache is only supported on the Dell Unity XT hybrid models. This is because
the data is already on flash drives on the All-Flash models. Dell Unity hybrid
models support 200 GB, 400 GB, or 800 GB SAS Flash 2 drives in FAST Cache,
dependent on the model. The Dell Unity XT hybrid models support 400 GB SAS
Flash 2 drives only. See the Dell Unity Drive Support Matrix documentation for
more information.

The table shows each Unity XT hybrid model, the SAS Flash 2 drives supported for
that model, the maximum FAST Cache capacities and the total Cache.

Hybrid System Model | System Memory (Cache) per Array | Supported SAS Flash 2 Drives | Maximum FAST Cache Capacity | Total Cache

Dell Unity XT 380 | 128 GB | 400 GB SAS Flash 2 only | 800 GB | 928 GB
Dell Unity XT 480 | 192 GB | 400 GB SAS Flash 2 only | 1.2 TB | 1.39 TB
Dell Unity XT 680 | 384 GB | 400 GB SAS Flash 2 only | 3.2 TB | 3.58 TB
Dell Unity XT 880 | 768 GB | 400 GB SAS Flash 2 only | 6.0 TB | 6.76 TB

Total Cache is the sum of the system memory and the maximum FAST Cache capacity; for example, the Dell Unity XT 380 provides 128 GB + 800 GB = 928 GB of total cache.

FAST Cache specifications


Create FAST Cache

FAST Cache can only be created on physical Dell Unity XT hybrid systems with available SAS Flash 2 drives. In Unisphere, FAST Cache is created from the Initial Configuration Wizard, or from the system Settings page. In this example, there is no existing FAST Cache configuration on the system, and it is being created from the system Settings page in the Storage Configuration section.

From the FAST Cache page, the Create button is selected. The Create FAST Cache wizard is launched to configure FAST Cache. The system has 400 GB SAS Flash 2 drives available for creating FAST Cache. The drop-down list shows the total number of eligible drives for the FAST Cache configuration. In this example, two drives are selected for the FAST Cache configuration.

The Enable FAST Cache for existing pools option is checked in this example. Thus, FAST Cache is enabled on all existing pools on the system. Leave the option unchecked if you want to customize which pools have FAST Cache enabled. The wizard continues the FAST Cache creation process, creating the RAID group for the FAST Cache configuration, and then enables FAST Cache on the existing storage pools. The status of the used disks can be seen from the FAST Cache Drives page.

Create FAST Cache via Unisphere

Settings > Storage Configuration > FAST Cache > Create
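The resulting configuration can also be checked from the command line. A minimal sketch using UEMCLI; the address and credentials are illustrative, and the /stor/config/fastcache object path is an assumption based on Unisphere CLI conventions:

    # Display the current FAST Cache configuration (object path assumed)
    uemcli -d 192.168.1.100 -u Local/admin -p MyPassword! /stor/config/fastcache show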


Enable FAST Cache

Pool Creation Wizard

Although FAST Cache is a global resource, it is enabled on a per pool basis. You
can enable a pool to use FAST Cache during pool creation. The Create Pool
wizard Tiers step has a checkbox option Use FAST Cache to enable FAST Cache
on the pool being created. The option is disabled if FAST Cache is not created on
the system. If FAST Cache is created on the system, the Use FAST Cache option
is checked by default.

Pool Properties

If FAST Cache was created on the system without the Enable FAST Cache on
existing pools option checked, it can be selectively enabled on a per-pool basis.
Select a specific pool to enable FAST Cache on and go to its Properties page.
From the General tab, check the Use FAST Cache option checkbox to enable
FAST Cache on the pool.


Expand FAST Cache

Expand FAST Cache Overview

FAST Cache can be expanded online with the Dell Unity XT system. The
expansion is used to increase the configured size of FAST Cache online, without
impacting FAST Cache operations on the system. The online expansion provides
an element of system scalability, enabling a minimal FAST Cache configuration to
service initial demands. FAST Cache can later be expanded online, growing the
configuration as demands on the system are increased. Each RAID 1 pair is
considered a FAST Cache object. In the example shown, the system is configured
with a single RAID 1 pair providing the FAST Cache configuration.

Start of a FAST Cache expansion: the existing configuration is a single RAID 1 pair of drives holding empty, dirty, and clean pages.

To expand FAST Cache, free drives of the same size and type currently used in
FAST Cache must exist within the system. FAST Cache is expanded in pairs of
drives and can be expanded up to the system maximum. In the example shown, an
extra pair of SSD drives is being added to the existing FAST Cache configuration.


When a FAST Cache expansion occurs, a background operation is started to add the new drives into FAST Cache. This operation first configures a pair of drives into
a RAID 1 mirrored set. The capacity from this set is then added to FAST Cache
and is available for future promotions. These operations are repeated for all
remaining drives being added to FAST Cache. During these operations, all FAST
Cache reads, writes, and promotions occur without impact from the expansion. The
amount of time the expand operation takes to complete depends on the size of
drives used in FAST Cache. The number of drives being added to the configuration
also impact the expansion time.

Fast Cache expansion completed: the configuration now contains two RAID 1 pairs.

The example shows the completion of the FAST Cache expansion. The reconfiguration provides the new space to FAST Cache, where it is available for FAST Cache operations.


Expand FAST Cache Management

When FAST Cache is enabled on the Dell Unity XT system, FAST Cache can be
expanded up to the system maximum. To expand FAST Cache from Unisphere, go
to the FAST Cache page found under Storage Configuration in the Settings
window. From this window, select Expand to start the Expand FAST Cache wizard.
Only free drives of the same size and type currently configured in FAST Cache are
used to expand FAST Cache. In this example, only 400 GB SAS Flash 2 drives are
available to be selected because FAST Cache is currently configured with those
drives. From the drop-down list, you can select pairs of drives to expand the
capacity of FAST Cache up to the system maximum. In this example, two drives
are being added to the current two drive FAST Cache configuration. After the
expansion, FAST Cache is configured with four drives arranged in two RAID 1 drive
pairs.

Expand FAST Cache via Unisphere


Settings > Storage Configuration > FAST Cache > Expand


Shrink FAST Cache

FAST Cache Shrink Overview

FAST Cache can be shrunk online with the Dell Unity XT system. Shrinking FAST
Cache is performed by removing drives from the FAST Cache configuration and
can be performed while FAST Cache is servicing I/O. In the following series of
examples, FAST Cache is shrunk by removing an existing pair of drives from the
FAST Cache configuration.

A FAST Cache shrink operation can be initiated at any time and is issued in pairs
of drives. A shrink operation allows the removal of all but two drives from FAST
Cache. Removing drives from FAST Cache can be a lengthy operation and can
impact system performance.

[Figure: Start FAST Cache Shrink. One of two RAID 1 pairs is selected for removal.]

When a FAST Cache shrink occurs, a background operation is started to remove
drives from the current FAST Cache configuration. After a shrink operation starts,
new promotions are blocked to each pair of drives selected for removal from FAST
Cache. Next, the FAST Cache dirty pages within the drives being removed are
cleaned. The dirty page cleaning ensures that data is flushed to the LUN back-end
disks.

[Figure: FAST Cache shrink in progress. Promotions are blocked to the pair being removed.]

FAST Cache Shrink Completed

After all dirty pages are cleaned within a set of drives, the capacity of the set is
removed from the FAST Cache configuration. For this example, the FAST Cache
configuration has been shrunk from two drive pairs down to a single drive pair.
Data which existed on FAST Cache drives that were removed may be promoted to
FAST Cache again through the normal promotion mechanism.
[Figure: FAST Cache shrink completed. The configuration is reduced to a single RAID 1 pair.]

Shrink FAST Cache Management

FAST Cache supports online shrink by removing drives from its configuration. It is
possible to remove all but one RAID 1 pair – each RAID 1 pair is considered a
FAST Cache object.

1: To shrink the FAST Cache, select the system Settings option in Unisphere and
navigate to the Storage Configuration section.

Select the Shrink option and the Shrink FAST Cache window opens.

2: In the drop-down list, select the number of drives to remove from the
configuration. In this example, the current FAST Cache configuration includes four
drives and two drives are being removed.

3: A message is displayed stating that removing the drives from FAST Cache
requires the flushing of dirty data from each set being removed to disk.

Click Yes to confirm the shrink operation.


Delete FAST Cache

To remove all drives from FAST Cache, the Delete operation is used. FAST Cache
delete is often used when drives must be repurposed to a pool for expanded
capacity. The delete operation is similar to a shrink operation in that any existing
dirty pages must be flushed from FAST Cache to back-end disks. Then the disks
are removed from FAST Cache. The delete operation can consume a significant
amount of time, and system performance is impacted.

1: To Delete FAST Cache, select the system Settings option in Unisphere and go
to the Storage Configuration section. Select the Delete option and the Delete
message window opens.

2: The message states that deleting FAST Cache requires flushing all data from
the FAST Cache drives. Click Yes to confirm the delete operation.


Demonstration

This demonstration covers FAST Cache management. It begins by creating FAST
Cache on a Dell Unity XT hybrid system. Then the system's FAST Cache capacity
is increased by performing an expand operation. Next, a FAST Cache shrink is
performed to reduce its capacity. Finally, FAST Cache is removed from the system
by performing a delete operation.

Movie:

The web version of this content contains a movie.



Host I/O Limits


Host I/O Limits Overview

Dell Unity XT Host I/O Limits, also referred to as Quality of Service [QoS], is a
feature that limits I/O to storage resources: LUNs, attached snapshots, VMFS, and
vVol [Block] datastores. Host I/O Limits can be configured on physical or virtual
deployments of Dell Unity XT systems. Limiting I/O throughput and bandwidth
provides more predictable performance in system workloads between hosts,
applications, and storage resources.

Host I/O Limits are active when the global feature is enabled and policies are
created and assigned to a storage resource. Host I/O Limits provides pause and
resume controls, both system-wide and for a specific policy. Limits can be set by
throughput, in I/Os per second [IOPS], by bandwidth, in Kilobytes or Megabytes per
second [KBPS or MBPS], or by a combination of both types of limits. If both
thresholds are set, the system limits traffic according to the threshold that is
reached first.

Only one I/O limit policy can be applied to a storage resource. For example, an I/O
limit policy can be applied to an individual LUN or to a group of LUNs. When an I/O
limit policy is applied to a group of LUNs, it can also be shared. When a policy is
shared, the limit applies to the combined activity from all LUNs in the group. When
a policy is not shared, the same limit applies to each LUN in the group.
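As a thought experiment, the shared versus non-shared behavior and the two
threshold types can be modeled in a few lines of Python. This sketch is purely
illustrative; the names and structure are invented for this guide and are not part of
any Dell Unity API.

    # Hypothetical model of Host I/O Limit policy evaluation (not a Dell API).
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Policy:
        max_iops: Optional[int] = None   # throughput ceiling [IOPS]
        max_kbps: Optional[int] = None   # bandwidth ceiling [KBPS]
        shared: bool = False             # one combined limit vs. a per-LUN limit

    def is_throttled(policy: Policy, lun_iops: List[int], lun_kbps: List[int]) -> bool:
        """True when current activity reaches either configured threshold."""
        # Shared: the limit applies to the combined activity of all LUNs.
        # Not shared: the same limit applies to each LUN individually.
        iops = [sum(lun_iops)] if policy.shared else lun_iops
        kbps = [sum(lun_kbps)] if policy.shared else lun_kbps
        hit_iops = policy.max_iops is not None and any(v >= policy.max_iops for v in iops)
        hit_kbps = policy.max_kbps is not None and any(v >= policy.max_kbps for v in kbps)
        return hit_iops or hit_kbps      # whichever threshold is reached first governs

    # Two LUNs at 600 and 500 IOPS under a 1000 IOPS policy:
    print(is_throttled(Policy(max_iops=1000, shared=True), [600, 500], [0, 0]))   # True
    print(is_throttled(Policy(max_iops=1000, shared=False), [600, 500], [0, 0]))  # False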

[Figure: Host I/O Limits overview. Supported on Dell Unity XT hardware and Dell
UnityVSA; throughput (IOPS) and bandwidth (MB/s) limits apply to LUNs, snapshots,
and VMFS datastores; limits are based on user-created policies with system-wide
pause/resume control and individual policy control.]


Host I/O Limit Use Cases

The Host I/O Limit feature is useful for service providers to control service level
agreements.

• Mechanism to control the maximum level of service
  − If a customer wants an SLA that specifies 500 IOPS, a limit can be put in
    place that allows a maximum of 500 IOPS. A service provider can create
    Host I/O policies that meet these requests.
• Storage administrators can limit I/O for the following:
  − Billing Rates: Billing rates can be set up for customers or departments
    depending on how much I/O each host requires.
  − Run-away Processes and Busy Users ["Noisy" neighbors]: These
    processes take resources away from other processes.
  − Test and Development Environment: A LUN with a database on it may be
    used for testing. Administrators can create a snapshot of the LUN and
    mount it. Putting a limit on the snapshot is useful to limit I/O on the snap
    since it is not a production volume.


Host I/O Limit Policy Types

Two Host I/O Limit policy types are available: Absolute and Density-based.

• Absolute:
  − An absolute limit applies a maximum threshold to a storage resource
    regardless of its size.
    • It can be configured to limit the amount of I/O traffic up to a threshold
      amount based on IOPS, bandwidth, or both. If both thresholds are set,
      the storage system limits traffic according to the threshold that is
      reached first. The limit can also be shared across resources.
    • Burst configuration is supported.
• Density-based:
  − A Host I/O Limit policy is configured based on the capacity of a given
    storage resource.
    • A density-based host I/O limit scales with the amount of storage that is
      allocated to the resource. As with the absolute limit, a policy can be
      shared with other resources. When a density-based policy is in place,
      the IOPS and bandwidth limits are based on a per-GB value [IOPS or
      KBPS/MBPS per GB], not a maximum value as with an absolute policy.
    • Burst configuration is supported.


Host I/O Limit Policy – Examples

Two Host I/O Limit policy examples:

Absolute

In the example, there are three LUNs under the same policy. Setting an absolute
policy for the LUNs would limit each LUN to 1000 IOPS regardless of LUN size.

[Figure: Three 100 GB LUNs under one absolute policy set to 1000 IOPS.
Absolute-based policies limit I/O regardless of resource size.]

Density-Based

The density-based Host I/O Limit is calculated by multiplying the Resource Size by
the Density Limit that is set by the Storage Administrator. After it is set, the Host
I/O Limits driver throttles the IOPS based on the calculation. See the sketch after
the figure below.

• LUN A is a 100 GB LUN, so the calculation is 100 [Resource Size] x 10 [Density
  Limit]. This calculation sets the maximum number of IOPS to 1000.
• LUN B is 500 GB, so the calculation is 500 [Resource Size] x 10 [Density Limit].
  This calculation sets the maximum number of IOPS to 5000.
• A Service Provider can add both LUNs under a single density-based Host I/O
  Limit to implement the policy.


Host I/O Limit = [Resource Size] x [Density Limit]

[Figure: Density-based policies limit I/O based on resource size. With the density
limit set to 10 IOPS per GB: LUN A (100 GB) Max IOPS = 100 x 10 = 1000 IOPS;
LUN B (500 GB) Max IOPS = 500 x 10 = 5000 IOPS.]
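The density-based calculation is a one-liner. The following sketch reproduces the
LUN A and LUN B numbers above; the function name is invented for this guide
and is not a Dell Unity API.

    def density_max_iops(size_gb: int, density_iops_per_gb: int) -> int:
        """Maximum IOPS for a density-based policy: resource size x density limit."""
        return size_gb * density_iops_per_gb

    print(density_max_iops(100, 10))   # LUN A: 1000 IOPS
    print(density_max_iops(500, 10))   # LUN B: 5000 IOPS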


Shared Policies

Host I/O Limits allows administrators to implement a shared policy when the initial
Host I/O policy is created. The shared setting is in effect for the life of that policy
and cannot be changed. Administrators must create another policy with the Shared
check box cleared if they want to disable the setting. When the Shared check box
is cleared, each individual resource is assigned a specific limit or limits. When the
Shared check box is selected, the resources are treated as a group, and all
resources share the limits that are applied in the policy.

In the example, a Host I/O Limit policy has been created to limit the number of
host IOPS to 100. In this case, both LUN 1 and LUN 2 share this limit. Shared
limits do not guarantee that the limits are distributed evenly: with a shared limit of
100 IOPS, LUN 1 can service I/O at 75 IOPS while LUN 2 services 25 IOPS. Also,
a shared limit applies across Storage Processors; it does not matter if the LUNs
are owned by different SPs. The policy applies to both.

[Figure: Host I/O limit shared between resources. A Host I/O policy of 100 IOPS is
created; the 100 IOPS are shared between LUN 1 on SPA and LUN 2 on SPB.
The shared setting is chosen when creating a Host I/O Limit and cannot be
changed.]


Shared Density-Based Host I/O Limits

The density-based shared Host I/O Limit calculation takes the combined size of all
resources sharing the policy, multiplied by the Density Limit set by the Storage
Administrator. After it is set, the Host I/O Limits driver throttles the IOPS based on
the calculation. In the example, LUN A is a 100 GB LUN and LUN B is 500 GB, so
the calculation is [100 + 500] combined resource size x 10 Density Limit. This sets
the maximum number of IOPS to 6000.

Host I/O Limit = [Combined Size of all Resources in the Shared Policy] x [Density Limit]

[Figure: Example of a shared density-based Host I/O Limit policy. With the density
limit set to 10 IOPS per GB, LUN A (100 GB) and LUN B (500 GB) yield Maximum
IOPS = [100 + 500] x 10 = 6000 IOPS.]
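Using the hypothetical density_max_iops helper from the earlier sketch, the shared
variant simply sums the resource sizes before applying the density limit:

    # Shared density-based policy: combine the sizes first (illustrative only).
    print(density_max_iops(100 + 500, 10))   # 6000 IOPS shared by LUN A and LUN B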


Multiple Resources Within a Single Policy

Multiple resources can be added to a single density-based Host I/O Limit policy.
Each resource in that policy can have a different limit that is based on the capacity
of the resource. If a Storage Administrator decides to change the capacity of a
given resource, the new capacity is now used in the calculation when configuring
the IOPS.

Multiple Resources on a Single Policy

In this example, a LUN resource [LUN D] is configured at 500 GB at initial sizing,
and the density-based limit is configured at 10 IOPS per GB. The maximum IOPS
is 5000, based on the calculation [500 x 10 = 5000].

[Figure: Example of multiple resources under a single density-based policy. LUN A
100 GB, LUN B 50 GB, LUN C 200 GB, LUN D 500 GB; each resource is a
different size.]

IOPS Change with Resource Size

Expanding LUN D by an additional 100 GB results in a new calculation of 6000
IOPS [600 x 10 = 6000].


[Figure: Example of a resource size change under a single density-based policy.
LUN D is now 600 GB; if a resource changes size, the maximum IOPS changes
with it.]

Snapshots and I/O Limits

For attached snapshots, the maximum IOPS is determined by the size of the
resource [LUN] at the point in time that the snapshot was taken. In the example, a
snapshot was created for LUN A. Using the same density limit of 10, the maximum
number of IOPS for the snapshot would be 1000 [100 x 10 = 1000 IOPS].

[Figure: Example of how snapshots are handled. The LUN A snap is limited by
LUN A's 100 GB size at the point in time of the snapshot.]


Density-Based Host I/O Limits Values

When configuring density-based limits, there are minimum and maximum values
that the user interface accepts, shown in the table below. If a user tries to
configure a value outside these limits, the input box is highlighted in red to indicate
that the value is incorrect; hovering over the box shows the maximum allowed
value.

Setting                      Value Range

Maximum IOPS per GB          1 to 1,000,000 IOPS

Maximum Bandwidth per GB     1.0 KBPS to 75 GBPS

With Host I/O Limits, maximum values can be set


Burst Feature Overview

The Burst feature typically allows for one-time exceptions that are set at some
user-defined frequency. This allows circumstances such as boot storms to occur
periodically. For example, if a limit setting was configured to limit IOPS in the
morning, you might set up an I/O Burst policy for some period to account for
possible increased login traffic. The Burst feature provides Service Providers with
an opportunity to upsell an existing SLA. Service Providers can afford end users
the opportunity to use more IOPS than the original SLA called for. If applications
are constantly exceeding the SLA, the provider can go back to the end user and
sell additional usage based on the extra I/Os allowed.

• Allows for one-time exceptions
  − Allows applications with a backlog to catch up periodically
    o Example: Boot storms
• Provides the ability for service providers to upsell
  − Provides insight into how much more I/O the end user is consuming
    o Usage of the extra I/Os allowed may warrant a higher limit


Burst Creation

Users can select the Optional Burst Settings from the Configuration page of the
wizard when creating a Host I/O Limit policy. Also, if there is an existing policy in
place, users can edit that policy anytime to create a Burst configuration. Users can
configure the duration and frequency of when the policy runs. This timing starts
from when the Burst policy is created or edited. It is not tied in any way to the
system or NTP server time. Having the timing run in this manner prevents several
policies from running simultaneously, say at the top of the hour. Burst settings can
be changed or disabled at any time by clearing the Burst setting in Unisphere.

• Burst parameters are configurable:
  − At the time of policy creation
  − Anytime, by editing an existing policy
• The policy creation or edit time dictates the timing of the bursts.
  − For example, not just at the top of the hour
• Burst is disabled by clearing the Burst setting in Unisphere.
  − Can be changed or disabled at any time


Burst Configuration

Host Burst configuration parameters can be set at creation or when an existing
policy is edited. The Burst % option is the amount of traffic over the base I/O limit,
in percent, that can occur during the burst time. This value is configurable from 1%
to 100%.

The For option is the duration, in minutes, to allow the burst to run. This setting is
not a hard limit and is used only to calculate the extra I/O operations that are
allocated for bursting. The actual burst time depends on I/O activity and can be
longer than defined when activity is lower than the allowed burst rate. The For
option configurable values are 1 to 60 minutes.

The Every option is the frequency at which the burst is allowed to occur. The
configurable setting is 1 hour to 24 hours. The example shows a policy that is
configured to allow a 10% increase in IOPS and Bandwidth. The duration of the
window is 5 minutes, and the policy will run every 1 hour.

Setting          Meaning                Range

Burst %          Percentage increase    1% to 100%

For (Minutes)    Duration for burst     1 min to 60 min

Every (Hours)    How often              1 hr to 24 hrs

Burst settings
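A small validation sketch for these three settings, using the ranges from the table
above (the function is invented for illustration and is not Dell Unity code):

    def validate_burst(burst_pct: int, for_minutes: int, every_hours: int) -> None:
        """Reject burst settings outside the Unisphere-accepted ranges."""
        assert 1 <= burst_pct <= 100, "Burst % must be 1 to 100"
        assert 1 <= for_minutes <= 60, "For must be 1 to 60 minutes"
        assert 1 <= every_hours <= 24, "Every must be 1 to 24 hours"

    validate_burst(10, 5, 1)   # the example policy: 10% more I/O for 5 minutes, every hour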


Burst Calculation Example

The example shows how an I/O burst calculation is configured. The policy allows a
number of extra I/O operations to occur, based on the percentage and duration
that the user configures.

In this case, the absolute limit is 1000 IOPS with a burst percentage of 20%. The
policy is allowed for a five-minute period and will reset at 1-hour intervals. The
number of extra I/O operations in this case is calculated as: 1000 x 0.20 x 5 x 60 =
60,000. The policy will never allow the IOPS to go above this 20% limit, 1200 IOPS
in this case. After the additional I/O operations allocated for bursting are depleted,
the limit returns to 1000 IOPS. The policy cannot burst again until the 1-hour
interval ends.

Note that the extra number of burst I/O operations are not allowed to happen all at
once. The system will only allow the 20% increase to the configured I/O limit of
1000 IOPS for the burst. In this case, the system would allow a maximum of 1200
IOPS for the burst duration of 5 minutes.

Extra I/Os = Limit x Burst % x For [mins] x 60 [secs]

[Figure: Burst setting example. Absolute Limit = 1000 IOPS, Burst = 20%, For = 5
min, Every = 1 hour; Extra I/Os = 1000 x 20% x 5 x 60 = 60,000.]
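As a sanity check, the extra-I/O arithmetic can be written out directly. This is a
minimal sketch using the numbers from the example; the function name is invented
for this guide.

    def burst_extra_ios(limit_iops: int, burst_pct: float, for_minutes: int) -> int:
        """Extra I/Os = Limit x Burst % x For [mins] x 60 [secs]."""
        return int(limit_iops * burst_pct * for_minutes * 60)

    print(burst_extra_ios(1000, 0.20, 5))   # 60000 extra I/Os per burst window
    # The burst ceiling is limit x (1 + burst %): the system never exceeds it.
    print(1000 * (1 + 0.20))                # 1200.0 IOPS maximum during the burst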


Burst Scenarios

Shown here are two scenarios that may be encountered when configuring a Burst
limit. In the first case, the Host target I/O is always above the Host I/O Limit and
the Burst Limit. Both a Host I/O Limit and a Burst Limit are configured, but the
incoming Host target I/O continually exceeds these values.

In the second scenario, the Host target I/O is above the Host I/O Limit, but below
the Burst Limit. The Host IOPS generated are somewhere in between these two
limits.

Scenario 1: Host target I/O always above the Host I/O Limit and Burst Limit.

Scenario 2: Host target I/O above the Host I/O Limit, but below the Burst Limit.


Burst Scenario 1

Host target I/O is always above the Host I/O Limit and Burst Limit. Both a Host I/O
Limit and a Burst Limit are configured, but the incoming Host target I/O continually
exceeds these values.

Target I/O, Host Limit, Burst Limit

Host target I/O stays above the Host I/O Limit and Burst Limit:
• IOPS will never go above the Burst Limit ceiling
  − If Burst is 20%, then only 20% more IOPS are allowed at any point in time
• Duration of the extra IOPS matches the "For" setting
  − Note: This is not a set window
• Once all extra IOPS have been consumed, the burst allowance ends
  − Extra IOPS are refreshed once the next burst is allowed


In this scenario, the Host I/O being sent is always greater than the Host I/O Limit
and Burst Limit values. When a Burst limit policy is configured, it throttles the Host
I/O so that IOPS never go above the Burst Limit ceiling. If the Burst Limit is 20%,
then only 20% more IOPS are allowed at any point in time.

For this scenario, the duration of the extra IOPS matches the “For” setting. For the
scenario where the host target I/O is below the burst limit, the burst duration
window will be longer. Once all the extra I/O operations have been consumed, the
burst allowance ends and only the Host I/O Limit is applied for the remainder of the
defined burst limit policy period. Extra burst I/O will be available again in the next
burst limit policy period.

Total IOPS in 60 Minutes

Here is a graph showing the total incoming IOPS on the Y axis and the time in
minutes (60 min) on the X axis. The Host I/O Limits are configured to be a
maximum of 1000 IOPS with a burst percentage of 20 (1200 IOPS). The duration
of the burst is 5 minutes, and it refreshes every hour. The graph shows that the
Host target IOPS is around 1500, well above the Host I/O Limit and Burst Limit
settings. This is the I/O that the host is attempting to perform. The blue line is the
Host I/O Limit, so the system tries to keep the I/O rate at this limit of 1000 IOPS.
The Burst Limit is the limit that was calculated from the user input and is at 1200
IOPS. The policy never allows the IOPS to go above the burst limit. It also means
that the burst duration matches the "For" window, since the Host I/O is always
above the other limits. The I/O comes in and is throttled by the Host I/O Limit of
1000 IOPS. I/O continues up until the 42-minute mark, where it comes to the
5-minute burst window. During this period, the I/O is allowed to burst to 1200
IOPS.

[Figure: Total IOPS in 60 minutes. The host target IOPS (~1500) sits above both
the Burst Limit (1200 IOPS) and the Host I/O Limit (1000 IOPS). Settings: Host I/O
Limit 1,000 IOPS max, Burst 20%, For 5 minutes, Every 1 hour; a 5-minute burst
window appears on the time axis.]

Burst Limit and Extra IOPS

Let us look a bit closer at how the Burst feature throttles the I/Os. The calculation
is the same as before: the total number of extra I/O operations is Limit x Burst % x
For [mins] x 60 [secs], which gives 60,000. The I/O burst period starts, and a
calculation is taken between minute 39 and 40 (60 seconds). In that 60 seconds,
an extra 200 IOPS is allowed (1200 - 1000), so 200 x 60 produces the value of
12,000 I/O operations. So, every 60-second sample period consumes 12,000 of
the extra I/O operations.
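The minute-by-minute walkthrough that follows can be condensed into a short
loop. This sketch only replays the arithmetic of the example and is not Dell Unity
code; it assumes the host target rate stays constant.

    def simulate_burst(limit: int, burst_pct: float, for_minutes: int, target_iops: int) -> None:
        """Deplete the extra-I/O budget minute by minute (hypothetical model)."""
        budget = int(limit * burst_pct * for_minutes * 60)   # 60,000 in this example
        ceiling = int(limit * (1 + burst_pct))               # 1,200 IOPS burst ceiling
        minute = 0
        while budget > 0:
            rate = min(target_iops, ceiling)    # the host never exceeds the ceiling
            burn = (rate - limit) * 60          # extra I/Os consumed this minute
            if burn <= 0:
                break                           # no bursting is taking place
            budget -= burn
            minute += 1
            print(f"minute {minute}: -{burn} extra I/Os, {max(budget, 0)} left")

    simulate_burst(1000, 0.20, 5, target_iops=1500)
    # minute 1: -12000 extra I/Os, 48000 left ... minute 5: -12000 extra I/Os, 0 left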


[Figure: Burst Limit and Extra IOPS. Host I/O Limit 1,000 IOPS max, Burst 20%,
For 5 minutes, Every 1 hour. Extra I/Os to allow: 1000 x 0.20 x 5 x 60 = 60,000.
Burst Limit: 1000 + (1000 x 0.20) = 1200 IOPS.]

Minute 1

Our “For” value is 5 minutes, so in a 5-minute period we should use our 60,000
extra I/O operations. (12000 * 5 = 60,000). The 12,000 is subtracted from our total
of 60,000 for each 60 sec. period (60,000 – 12,000 = 48,000). This continues for
the frequency of the burst. Every 60-second period subtracts an additional 12,000
I/O operations until the allotted extra I/O operations value is depleted.


[Figure: Burst Limit and Extra IOPS at minute 1. 200 extra IOPS x 60 = 12,000
I/Os (1200 - 1000 = 200); 60,000 - 12,000 = 48,000 remaining.]

Minute 2

Again, this continues for the frequency of the burst. Every 60-second period
subtracts an additional 12,000 I/O operations until the allotted extra I/O value is
depleted. Here, another 12,000 is subtracted from our total of 60,000 for this 60
sec. period (60,000 – 12,000 – 12,000 = 36,000).


[Figure: Burst Limit and Extra IOPS at minute 2. 48,000 - 12,000 = 36,000
remaining.]

Minute 3

The burst is continuing therefore another 12,000 is subtracted from our total of
60,000 (60,000 – 12,000 – 12,000 -12,000 = 24,000).

[Figure: Burst Limit and Extra IOPS at minute 3. 36,000 - 12,000 = 24,000
remaining.]


Minute 4

Since the burst continues, another 12,000 is subtracted from our total of 60,000
(60,000 – 12,000 – 12,000 -12,000 – 12,000 = 12,000). This happens as long as
the Host I/O rate is above our calculated values during the period. The extra I/O
operations are used within the 5-minute window.

[Figure: Burst Limit and Extra IOPS at minute 4. 24,000 - 12,000 = 12,000
remaining.]

Minute 5

Since the burst is still continuing, an additional and final 12,000 I/O operations are
subtracted and now the allotted extra I/O value is depleted. During the burst, since
the Host I/O rate was always above our calculated values during this period, the
extra I/O operations were used within the 5-minute window. Once the burst
frequency ends, it will start again in 1 hour as determined by the “Every” parameter.


[Figure: Burst Limit and Extra IOPS at minute 5. 12,000 - 12,000 = 0; the extra I/O
budget is depleted.]


Animation - Burst Scenario 1

In this scenario, a Host I/O Limit and Burst Limit are configured, and the incoming
Host target I/O continually exceeds these values.

Movie:

The web version of this content contains a movie.


Burst Scenario 2

In this case, the Host target I/O is above the Host I/O Limit, but below the Burst
Limit. The Host IOPS generated are somewhere in between these two limits.

Target I/O, Host Limit, Burst Limit

In this second scenario, the same calculations are used as in the previous slides;
however, the Host I/O being generated is around 1100 IOPS, right between the
two limits of 1000 and 1200. As Host I/O continues, at the 39-minute mark the I/O
burst starts, which in this case lasts a 10-minute period. Note that the I/O does not
cross 1100 IOPS, since this is all the I/O the host was attempting to do. Also, since
the number of extra IOPS consumed per minute is smaller, the burst continues to
run for a longer period before the total extra I/O count is depleted.

[Figure: Target I/O, Host Limit, and Burst Limit. The host target (~1100 IOPS) sits
between the Host I/O Limit (1000 IOPS) and the Burst Limit (1200 IOPS). Settings:
Host I/O Limit 1,000 IOPS max, Burst 20%, For 5 minutes, Every 1 hour; the burst
window stretches to 10 minutes.]

Total IOPS in 60 Minutes

Look at the calculations for this scenario. The Host I/O is between the two limits
and is only generating 1100 IOPS. The difference between the Host I/O Limit of
1000 and the actual Host I/O is 100 IOPS. So, the calculation is based on 100 x 60
= 6,000 I/O operations per minute.

The total number of extra I/O operations calculated from the original numbers is
60,000. So, for each 60-second period, 6,000 I/O operations are subtracted from
the 60,000 total. Effectively, this doubles the "For" time, since it takes 10 minutes
to deplete the 60,000 I/O operations that the burst limit allows. So even though the
"For" period was 5 minutes, the number of extra IOPS consumed per minute was
smaller, allowing the burst to run longer than the configured time.
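Running the earlier hypothetical simulate_burst sketch with these numbers shows
the stretched window:

    # Host target of 1100 IOPS: only 100 extra IOPS per second are consumed.
    simulate_burst(1000, 0.20, 5, target_iops=1100)
    # Burns 6,000 extra I/Os per minute, depleting the 60,000 budget in 10 minutes.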

[Figure: Burst Limit and Extra IOPS. Extra I/O budget: 1,000 x 0.20 x 5 x 60 =
60,000 I/Os.]

Minute 1

Since the "For" period is set to 5 minutes, and the number of extra IOPS
consumed per minute is smaller, the burst runs for a longer period of 10 minutes
rather than the configured 5 minutes.

Therefore, over a 10-minute period the 60,000 extra I/O operations are used
(6,000 x 10 = 60,000). Now only 6,000 is subtracted from the total of 60,000 for
each 60-second period (60,000 - 6,000 = 54,000). This continues for the duration
of the burst. Every 60-second period subtracts an additional 6,000 I/O operations
until the allotted extra I/O value is depleted.


[Figure: Burst Limit and Extra IOPS at minute 1. 100 extra IOPS x 60 = 6,000 I/Os
(1100 - 1000 = 100); 60,000 - 6,000 = 54,000 remaining.]

Minute 2

Here, another 6,000 is subtracted from our total of 60,000 for this 60-second period
[60,000 – 6,000 – 6,000 = 48,000].

[Figure: Burst Limit and Extra IOPS at minute 2. 54,000 - 6,000 = 48,000
remaining.]


Minute 3

Another 6,000 is subtracted from our total of 60,000 for this 60-second period
(60,000 – 6,000 – 6,000 – 6000 = 42,000).

[Figure: Burst Limit and Extra IOPS at minute 3. 48,000 - 6,000 = 42,000
remaining.]

Minute 4

Another 6,000 is subtracted from our total of 60,000 for this 60-second period
(60,000 – 6,000 – 6,000 – 6000 – 6000 = 36,000).


[Figure: Burst Limit and Extra IOPS at minute 4. 42,000 - 6,000 = 36,000
remaining.]

Minutes 5 through 10

This continues until the extra I/O operations for the burst are depleted. As you can
see, even though the “For” period was 5 minutes, the number of I/O operations per
60 seconds were smaller and allowed for a longer period of burst than the
configured time.


[Figure: Burst Limit and Extra IOPS at minutes 5-10. The remaining budget drops
by 6,000 per minute: 30,000, 24,000, 18,000, 12,000, 6,000, 0.]


Animation - Burst Scenario 2

In this scenario, the Host target I/O is above the Host I/O Limit, but below the Burst
Limit.

Movie:

The web version of this content contains a movie.


Policy Level Controls

Here are the available policy level controls and status conditions that are displayed
in Unisphere.

Host I/O Limits provides the ability to pause and resume a specific host I/O limit.
This feature allows each configured policy to be paused or resumed independently
of the others, whether or not the policy is shared. Pausing the policy stops the
enforcement of that policy. Resuming the policy immediately starts the enforcement
of that policy and throttles the I/O accordingly. There are three status conditions for
Host I/O Limit policies: Active, Paused, or Global Paused.

[Figure: Policy level controls. Each policy can be paused or resumed
independently of the others (shared or not). Pause stops the enforcement of the
policy; Resume starts the enforcement of the policy. Policy status can be Active,
Paused, or Global Paused.]


Policy Level Controls Defined

System Settings and Policy Status

The table shows the relationship between the System Setting and the Policy
Status. When a policy is created, the policy is displayed as Active by default.
System Settings are global and are displayed as either Active or Paused. When
the System Setting is Active, the Policy Status is displayed as Active or Paused,
depending on the status of the policy when the System Setting was changed.

System Setting    Policy Status

Active            Active or Paused

Paused            Global Paused

Paused            Paused

System settings and policy status

Changing System Settings to Paused

For example, suppose the System Setting is "Active" and the user has configured
three policies: A, B, and C. The user could pause A, and the system would update
the status of A to "Paused." The other two policies, B and C, would still display an
"Active" status. If the user then changed the System Setting to "Paused," the
Policy Status would be displayed as "Global Paused" on policies B and C but
"Paused" on A.


System Setting    Policy Status

Active            Active or Paused

Paused            Global Paused

Paused            Paused

Changing System settings to paused

Changing Policy Settings to Paused

When both the System setting and Policy Setting are “Paused,” the Policy Status
will be shown as “Paused.”

System Setting    Policy Status

Active            Active or Paused

Paused            Global Paused

Paused            Paused

Changing Policy settings to paused


Host I/O Limits System Pause – Settings

Host I/O - System Level Settings

To set a Host I/O Limit at the System level, click the Settings icon, then go to
Management > Performance. In the example, the Host I/O Limits Status is Active,
and all host I/O limits are currently being enforced. The limits can be temporarily
lifted by clicking Pause.

Host I/O - System Level Settings

Pausing Host I/O Limits

If there are “Active” policies, you can pause the policies on a system-wide basis.
Once you select “Pause,” you will be prompted to confirm the operation. (Not
shown)

Module 1 Course Introduction and System Administration

Page 366 © Copyright 2022 Dell Inc.


Host I/O Limits

The Performance > Host I/O Limits page shows the policies that are affected by
the Pause. In the example, three policies display a Status of "Global Paused,"
indicating a system-wide pause of those policies.

[Screenshot: Host I/O Limits page. Select Pause and confirm the operation.]

Pausing Host I/O Limits

Resuming Host I/O Limits

The Host I/O Limits Status now displays a “Paused” Status, and users can
“Resume” the policy. Select Resume to allow the system to continue with the
throttling of the I/O according to the parameters in the policies.

[Screenshot: Host I/O Limits page. Resuming Host I/O Limits.]


Host I/O Limits Policy Pause – Unisphere

1 - Select Policy

The example displays the Host I/O Limits policies in Unisphere, under the System
> Performance > Host I/O Limits window. Several policies have been created,
three of which show the default status of Active. The Density_Limit_1 policy is
selected.

Select policy

2 - Pause Policy

From the More Actions drop-down, users can Pause an Active policy (Resume is
unavailable while the policy is Active).


Pause policy

3 - Confirm Pause

Once the Pause option is selected, a warning message is issued to the user to
confirm the Pause operation.

Confirm pause

4 - Verify Pause

Selecting Pause starts a background job that, after a few seconds, causes the
Status of the policy to be displayed as Paused. All other policies are still Active,
since the pause was done at the Policy level, not the System level.


Verify pause


Demonstration

These demos show how to set up different types of Host I/O Limit policies. Click
the associated links to view the videos.

Topics                                                                  Link

Creating an Absolute Host I/O Limit policy                              Launch

Creating a shared Absolute Host I/O Limit policy                        Launch

Creating a shared Density-based Host I/O Limit policy                   Launch

Configuring I/O Burst settings for an Absolute Host I/O Limit policy    Launch



UFS64 File System Extension and Shrink


File System Extension Overview

In Dell Unity XT systems, the UFS64 architecture allows users to extend file
systems. Performing UFS64 file system extend operations is transparent to the
client, meaning the array can still service I/O to a client during extend operations.

• On physical Dell Unity XT systems, the maximum size a file system can be
  extended to is 256 TB.
  − The maximum file system size on Dell UnityVSA is defined by its license.
• The capacity of thin and thick file systems can be extended by manually
  increasing their total size.
• Auto-extension works only on thin file systems.
  − Thin file systems are automatically extended by the system based on the
    ratio of used-to-allocated space.
  − File systems automatically extend when used space exceeds 75% of the
    allocated space.
  − The auto-extension operation happens without user intervention and does
    not change the advertised capacity.


Manual UFS64 File System Extension

For thin-provisioned file systems, the manual extend operation increases visible
or virtual size without increasing the actual size allocated to the file system from the
storage pool.

For thick file systems, the manual extend operation increases the actual space
allocated to the file system from the storage pool.

[Figure: Comparison of manual extension on thick and thin provisioned file
systems. A thin UFS64 extend grows the visible (virtual) size from the old visible
size to a new visible size; a thick UFS64 extend grows the actual allocated size
from the old size to a new size.]


Automatic UFS64 File System Extension

Thin-provisioned file systems are automatically extended by the system when
certain conditions are met. A thin-provisioned file-based storage resource may
appear full when data copied or written to the resource is greater than the space
available at that time. When this occurs, the system begins to automatically extend
the storage space to accommodate the write operation. If there is enough
extension space available, this operation completes successfully.

The system automatically allocates space for a thin UFS64 file system as space is
consumed. Auto-extend happens when the space consumption threshold is
reached. The threshold is the percentage of used space within the file system
allocated space (the system default value is 75%). The allocation cannot exceed
the file system visible size: only the allocated space increases, not the file system
provisioned size. The file system cannot auto-extend past the provisioned size. A
minimal sketch of this condition follows the figure below.

[Figure: Thin-provisioned file system automatic extension. Auto-extend triggers
when used space reaches the threshold within the allocated space.]
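A minimal sketch of the auto-extend trigger described above, assuming the 75%
default threshold (the function and its arguments are invented for illustration; this is
not Dell Unity code):

    def should_auto_extend(used_gb: float, allocated_gb: float,
                           provisioned_gb: float, threshold: float = 0.75) -> bool:
        """Auto-extend when used space exceeds 75% of the allocated space,
        but never past the provisioned (advertised) size."""
        return used_gb > allocated_gb * threshold and allocated_gb < provisioned_gb

    print(should_auto_extend(80, 100, 500))   # True: 80 GB used > 75 GB threshold
    print(should_auto_extend(70, 100, 500))   # False: still below the 75% threshold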


Storage Space Reclamation Overview

In Dell Unity XT, the UFS64 architecture enables the reduction of the space the file
system uses from a storage pool.

• UFS64 architecture allows the underlying released storage of the file system to
  be reclaimed.
• The storage space reclamation is triggered by UFS64 file system shrink
  operations.
• UFS64 shrink operations can be:
  − Manually initiated by the user, for both thin and thick file systems.
  − Automatically initiated, only on thin file systems, when the storage system
    identifies allocated but unused storage space that can be reclaimed back to
    the storage pool.


UFS64 Thin File System Manual Shrink

A storage administrator can manually shrink the provisioned size of a thin- or
thick-provisioned file system into, or within, the allocated space.

In this example, a thin-provisioned 1 TB file system is being shrunk by 700 GB to a
new thin-provisioned size of 300 GB. A thick-provisioned file system can be shrunk
in a similar manner.

Thin FS Manual Shrink Overview

The thin-provisioned file system currently has 450 GB of space allocated from the
storage pool. The allocated space consists of 250 GB of Used Space and 200 GB
of Free Space. The system performs any evacuation that is necessary to allow the
shrinking process on the contiguous free space.

[Figure: Thin-provisioned file system shrink operation.
1. Shrink Request: Shrink the provisioned size to 300 GB, removing 700 GB from
   the 1 TB size.
2. Evacuation: Evacuate portions of the file system block address space within the
   450 GB of allocated space.]

Thin FS Manual Shrink Completed

In the example, the provisioned space for the thin-provisioned file system is
reduced by 700 GB. The total storage pool free space is increased by 150 GB. The
file system Allocated Space and the Pool Used Space are decreased. The
allocated space after the shrink drops below the original allocated space, enabling
the storage pool to reclaim the space. Observe that the only space that is
reclaimed is the portion of the shrink that was within the original allocated space of
450 GB. This is because the remaining 550 GB of the original thin file system was
virtual space that is advertised to the client.
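The 150 GB figure can be checked with simple arithmetic. A sketch under the
assumption that only allocated space above the new provisioned size returns to
the pool (names invented for illustration):

    def reclaimed_space_gb(allocated_gb: float, new_provisioned_gb: float) -> float:
        """Only allocated space above the new provisioned size is reclaimed;
        virtual (never-allocated) space returns nothing to the pool."""
        return max(0.0, allocated_gb - new_provisioned_gb)

    print(reclaimed_space_gb(450, 300))   # 150 GB returned to the pool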

[Figure: Storage pool space reclamation after the file system shrink operation.
3. Truncate File System: Provisioned space is reduced by 700 GB.
4. Space Reclaimed: Storage pool free space is increased by 150 GB.]


UFS64 File System Automatic Shrink

Thin-provisioned file systems are automatically shrunk by the system when certain
conditions are met. Automatic shrink improves space allocation by releasing any
unused space back to the storage pool. The file system is automatically shrunk
when the used space is less than 70% [system default value] of the allocated
space after a period of 7.5 hours. The file system provisioned size does not shrink,
only the allocated space decreases.
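The companion condition to the auto-extend sketch shown earlier, again purely
illustrative (the 70% default and the 7.5-hour observation period come from the
text and figure here; the function is invented):

    def should_auto_shrink(used_gb: float, allocated_gb: float,
                           hours_below: float, threshold: float = 0.70) -> bool:
        """Auto-shrink when used space stays under 70% of the allocated space
        for 7.5 hours (five checks at 1.5-hour intervals)."""
        return used_gb < allocated_gb * threshold and hours_below >= 7.5

    print(should_auto_shrink(60, 100, hours_below=7.5))   # True: sustained low usage
    print(should_auto_shrink(60, 100, hours_below=3.0))   # False: not yet sustained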

[Figure: Thin-provisioned file system automatic shrink]
• Thin UFS64 file systems only
• Releases unused space back to the storage pool
• Based on the ratio of used-to-allocated space
  − Initiated when the used space is less than 70%
  − Time-based: the ratio is checked every 1.5 hours, and the shrink is initiated
    after 5 checks [a 7.5-hour period]


File System Extension and Shrink Operations

To change the size of a file system, select the File page under the Storage section
in Unisphere, then select the File Systems tab from the top menu. The properties
of a file system can be launched by double-clicking the file system in the list or by
clicking the pencil icon from the menu at the top of the File Systems list. From the
General tab, the size of the file system can be extended by increasing the Size
field. To shrink the file system, decrease the Size field. The Apply button must be
selected to commit the changes. The change to the file system configuration [size
and percentage of allocated space] is displayed in the list. In this example, the
fs02 file system size is manually set to 110 GB.

Unisphere file system properties window



File-level Retention (FLR)


FLR Overview

[Figure: FLR overview. A NAS Server hosts an FLR enabled file system containing
files locked until their retention dates. Use case: archive/compliance. Reference:
the "Dell EMC Unity: File-Level Retention (FLR)" white paper.]

File-Level Retention is enabled at the file system level.

• Locks files to protect them from deletion/modification
  – SMB, NFS, or FTP clients
  – For a specified retention date and time
• Enabled at file system creation
  – Cannot be disabled
  – FLR clock and activity log
• Two different retention types
  – FLR-E (Enterprise)
  – FLR-C (Compliance)
    o SEC rule 17a-4(f)
• FLR file states
  – Not locked
  – Append-only
  – Locked (WORM)
  – Expired
File-level Retention (FLR) protects files from modification or deletion through SMB,
NFS, or FTP access based on a specified retention date and time. The retention
period can be increased but cannot be reduced. The FLR use case is for file data
content archival and compliance needs. FLR is also beneficial in preventing users
from accidental file modification and deletion.

For full details of the FLR feature, reference the Dell EMC Unity: File-Level
Retention (FLR) white paper available on Dell EMC Online Support.

FLR can only be enabled during the creation of a file system. Once FLR is enabled
for a file system, it cannot be disabled after the file system is created. Therefore, it
is critical to know if FLR is required at file system creation time. When a file system
is enabled for FLR, a nonmodifiable FLR clock is started on the file system. The
FLR clock is used to track the retention date. An FLR activity log is also created on
the file system when it is FLR enabled. The activity log provides an audit record for
files stored on the file system.

There are two different types of FLR: FLR-E (Enterprise) and FLR-C (Compliance).

FLR-E protects file data that is locked from content changes that are made by SMB
and NFS users regardless of their administrative rights and privileges. An
appropriately authorized Dell EMC Unity administrator (with the Unisphere
Administrator or Storage Administrator role) can delete an FLR-E enabled file
system, even if it contains locked files.

FLR-C protects file data that is locked from content changes that are made by SMB
and NFS users, regardless of their administrative rights and privileges. File systems
containing locked files cannot be deleted by any authorized Dell EMC Unity
administrative role. FLR-C enabled file systems are compliant with the Securities
and Exchange Commission (SEC) rule 17a-4(f) for digital storage. FLR-C also
includes a data integrity check for files that are written to an FLR-C enabled file
system. The data integrity check affects write performance to an FLR-C enabled
file system.

Files within an FLR enabled file system have different states: Not Locked,
Append-only, Locked, and Expired.

Not Locked: All files start as not locked. A not locked file is an unprotected file that
is treated as a regular file in a file system. In an FLR file system, the state of an
unprotected file can change to Locked or remain as not locked.

Module 1 Course Introduction and System Administration

© Copyright 2022 Dell Inc. Page 383


File-level Retention (FLR)

Append-only: Users cannot delete, rename, or modify the data in an append-only
file, but users can add data to it. A use case for an append-only file is archiving
log files that grow over time. The file can remain in the append-only state forever.
However, a user can transition it back to the Locked state by setting the file status
to read-only with a retention date.

Locked: Also known as “Write Once, Read Many” (WORM). A user cannot modify,
extend, or delete a locked file. The path to locked files is protected from
modification. That means a user cannot delete or rename a directory containing
locked files. The file remains locked until its retention period expires. An
administrator can perform two actions on a locked file: 1. Increase the file retention
date to extend the existing retention period. 2. If the locked file is initially empty,
move the file to the append-only state.

Expired: When the retention period ends, the file transitions from the locked state
to the expired state. Users cannot modify or rename a file in the expired state, but
can delete the file. An expired file can have its retention period extended such that
the file transitions back to the locked state. An empty expired file can also transition
to the append-only state.


FLR Capabilities and Interoperability

• Supported on the entire Dell Unity family of storage systems
  – Dell Unity XT platform
  – Dell UnityVSA
• Supports Replication
  – Destination FLR type must match the source
  – FLR changes at the source are replicated to the destination
• Supports NDMP
  – Backups include the retention period and permissions but not the lock status
  – Restores lock read-only files
  – Append-only files are normal files after restore
• Supports Data Reduction
• Supports CTA tiering as a destination
  – Cannot be a tiering source
• Supports File Import from VNX
  – If the source VNX file system is FLR enabled, the target Dell EMC Unity file
    system is FLR type matched
  – VNX is DHSM enabled
• Supports Snapshots
  – FLR-C supports read-only snapshots
  – FLR-C does not support snapshot restores
  – FLR-E supports read-only and R/W snapshots
  – FLR-E supports snapshot restores
• VMware NFS datastores are not supported

The FLR feature is supported on the entire Dell Unity family of storage systems. It
is available on all physical Dell Unity XT models and the Dell UnityVSA.


The Dell Unity Replication feature supports FLR. When replicating an FLR enabled
file system, the destination file system FLR type must match the source. If the
replication session is created with the Unisphere GUI, the system automatically
creates the destination file system to match the source file system FLR type. If the
replication session is created with UEMCLI, the destination file system
provisioning and FLR type selection are done manually.

FLR enabled file systems are supported with NDMP backup and restore
operations. The retention period and permissions of files are captured in the
backup but the file lock status is not. When an FLR enabled file system is restored
with NDMP, read-only files are restored as locked files. Append-only files are
restored as normal files.

FLR fully supports the Dell Unity Data Reduction feature.

FLR is supported as a tiering destination for CTA archive operations. However,
FLR enabled file systems are not supported as a CTA tiering source.

FLR supports the Dell EMC File Import feature. If the source VNX file system
imported is FLR enabled, the target Dell Unity file system is migrated as a type
matched FLR enabled file system. The source VNX must be DHSM enabled. The
DHSM credentials are used when the import session is created on the Dell Unity
system.

FLR supports the Dell Unity Snapshots feature. FLR-C file systems support read-
only snapshots but do not support snapshot restore operations. FLR-E file systems
support read-only and R/W snapshots, and support snapshot restores. When an
FLR-E file system is restored from a snapshot, the FLR file system clock is set back
in time, corresponding to the snapshot time. Note that the change to the FLR clock
effectively extends the retention period of locked files.

FLR is not supported on VMware NFS datastores.


Process to Enable and Manage FLR

[Figure: Process to enable and manage FLR on a file system.
1. Enable FLR on the file system (FLR-E or FLR-C; for FLR-C, enable writeverify).
2. Define retention period limits (minimum, default, maximum).
3. Set the file lock/append-only state (NFS client, SMB via the FLR Toolkit, or
   automated by the system).]

There is a process to enable and manage FLR on a file system.

The first step in the process is to enable FLR on the file system. It must be done at
file system creation time. The file system creation wizard includes a step to enable
FLR where either the FLR-E or FLR-C type can be selected. If FLR-C is selected,
there is a separate step to enable its data integrity check. The data integrity check
is controlled by the writeverify NAS Server parameter.

The next step is to define retention period limits for the file system, which is done
within the FLR step of the file system creation wizard. The retention period limits
can also be defined after the file system is created, from the FLR tab of the file
system Properties. A minimum limit, a default limit, and a maximum limit are
defined for the FLR enabled file system.
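One way to picture the three limits is as a clamp on whatever retention period a
user requests. This sketch is purely illustrative (an invented function, not Dell Unity
logic): an unspecified retention falls back to the default, and requests outside the
minimum/maximum are pulled into range.

    def effective_retention(requested_days, minimum_days, default_days, maximum_days):
        """Clamp a requested retention period to the file system's limits."""
        if requested_days is None:
            return default_days             # no retention given: the default applies
        return max(minimum_days, min(requested_days, maximum_days))

    print(effective_retention(None, 1, 365, 87 * 365))   # 365: default applies
    print(effective_retention(0, 1, 365, 87 * 365))      # 1: raised to the minimum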

The next step is to set a lock or append-only state for files on the file system.
There is a process to set a file to the locked state and a process to set a file to the
append-only state. For NFS files, setting the file state is done from an NFS client.
For SMB files, setting the file state is done using the FLR Toolkit application. A
retention time can also be placed on files in an automated fashion by the system.
This is enabled from the FLR tab of the file system Properties.


Enable FLR on a File System

Unisphere Create File System wizard step to enable FLR

Enabling a file system for FLR is only done during the creation of the file system in
the Create File System wizard. The FLR step of the wizard by default has the FLR
option Off. Select either Enterprise to enable FLR-E or select Compliance to enable
FLR-C. The example illustrates FLR-C being enabled for the file system.

When either type is selected, a confirmation window is displayed indicating to the user that FLR will protect files from modification and deletion. The message also informs the user that once the file system is created, FLR cannot be enabled or disabled later.

When the user confirms to enable FLR, options are exposed for defining the
minimum, default, and maximum retention periods for FLR. Shown in the example
are the default retention periods for FLR-C. The retention period values can also be
defined after the file system creation from the FLR tab of the file system Properties.
The retention periods for the file system are covered on a following slide.


Enable writeverify for FLR-C

CLI command to enable data integrity check

When FLR-C is enabled on a file system, the user must also turn on the data
integrity check. It is required for compliance before files are locked on the file
system. The NAS Server FLRCompliance.writeverify parameter controls the
data integrity check. The parameter is set using the svc_nas CLI command from
an SSH session to the system. When the parameter is enabled, all write operations
on all FLR Compliance file systems mounted on the NAS Server are read back and
verified. The integrity check ensures that the data has been written correctly. The
system performance may degrade during this procedure due to the amount of work
being performed.

In the example, the first svc_nas command is used to check if the parameter is
enabled. From its output, the current value is set to 0 indicating that writeverify is
disabled.

The second svc_nas command sets the value of the parameter to 1, to enable
writeverify.

The third svc_nas command verifies that writeverify is enabled.
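
A minimal sketch of the three commands, assuming a NAS Server named nas_1. The facility and parameter names follow the FLRCompliance.writeverify convention described above, but treat the exact syntax as an assumption and verify it against the Dell Unity FLR documentation for your release:

    # Check the current writeverify value (0 = disabled)
    svc_nas nas_1 -param -facility FLRCompliance -info writeverify

    # Enable the data integrity check
    svc_nas nas_1 -param -facility FLRCompliance -modify writeverify -value 1

    # Verify that writeverify is now enabled (current value = 1)
    svc_nas nas_1 -param -facility FLRCompliance -info writeverify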


Define FLR Retention Periods

Retention Period | Default Value | Minimum Value | Maximum Value

Minimum Retention Period | 1 Day | 0 Days | 87 Years or Unlimited

Default Retention Period | Unlimited (FLR-E) / 1 Year (FLR-C) | 0 Days | 87 Years or Unlimited

Maximum Retention Period | Unlimited | 1 Day | 87 Years or Unlimited

File system properties FLR tab with FLR retention periods

This example illustrates the information and retention period configuration available
from the FLR tab of the file system Properties.

The FLR Type for the file system is shown. In this example, the file system has been enabled for Compliance. Also displayed is the number of protected files; in this example, the file system has no protected files. The FLR clock time is displayed. The tab also displays the date when the last protected file expires.

An FLR enabled file system has retention period limits that can be customized to user needs. The retention period limits bound the shortest and longest time for which a user can lock a file. The retention periods can be set within the FLR step of the File System Creation wizard as seen previously. The retention periods can also be configured
any time after the file system is created. This example illustrates the retention
periods that are defined for the file system. The respective tables show the default,
minimum and maximum values for each of the retention periods.

The Minimum Retention Period value specifies the shortest time period a user can
specifically lock files for. The value of the Minimum Retention Period must be less
than or equal to the Maximum Retention Period value.


The Default Retention Period specifies the time period a file is locked for when the
user does not explicitly set a retention time for files. The Default Retention Period is
also used when automation is configured to lock files on the file system. The
Default Retention Period value must be greater than or equal to the Minimum
Retention Period value. It must also be less than or equal to the Maximum
Retention Period value.

The Maximum Retention Period specifies the longest time period that files can be locked for. The value must be greater than or equal to the Minimum Retention Period value.

Note: The FLR retention periods can be modified at any time. The modification only
affects the retention times for newly locked files by the user or automation.
Previously locked files remain unchanged.


Set File State - NFS

Set file Lock state

Set file Append-only state

CLI command to set file state to locked or append-only

The file state of locked or append-only is set using an NFS client that is mounted to
the exported file system.

A file lock state is achieved by setting the last access time of the file to the wanted file retention date and time, and then changing the file permission bits to read-only. To set the file last access date and time, use the touch command with the -at option and the wanted retention date and time. In the example, a file that is named lockedfile has its last access time set to 23:59, Dec 31, 2024 as shown in the ls output for the file. Then the file is set to read-only using the chmod command with the -w option to remove the write permission.
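
A minimal sketch of the sequence from an NFS client; the mount point and file name are illustrative:

    # Set the last access time to the wanted retention time: 23:59, Dec 31, 2024
    touch -at 202412312359 /mnt/flr_fs/lockedfile

    # Confirm the access time (shown in the ls output)
    ls -lu /mnt/flr_fs/lockedfile

    # Remove write permission to transition the file to the locked state
    chmod -w /mnt/flr_fs/lockedfile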

When setting the locked file retention date and time, it must be equal to or less than the Maximum Retention Period defined on the file system. Any attempt to set a file retention date and time greater than the Maximum Retention Period results in the retention date and time being set equal to the Maximum Retention Period. In a similar manner, any attempt to set a file retention date and time less than the Minimum Retention Period results in the retention date and time being set equal to the Minimum Retention Period. Files that are locked without specifying a retention date and time are set to the Default Retention Period.


Set File State - FLR Toolkit for SMB

CLI: flrapply

Windows Explorer: FLR Attributes Tab

Properties

Requires DHSM enabled on NAS Server

Setting file state using FLR toolkit

Windows does not have a native UI/CLI to set retention date and time to lock files.
The Dell FLR Toolkit is an application available for download from Dell Online
Support. Install the application on a Windows client in the same domain as the FLR
enabled file system to be accessed. The application uses the Windows API
SetFileTime function for setting retention date and time to lock files on FLR enabled
file systems. The toolkit includes a CLI function called flrapply. Another aspect of
the FLR toolkit is an enhancement to Windows Explorer. An FLR Attributes tab is
available in Windows Explorer file Properties. The FLR toolkit also has an FLR
Explorer which has FLR related reporting and retention time capabilities. FLR
Explorer is not shown in this training.

FLR Toolkit requires that DHSM be enabled on the NAS Server that is associated
with the FLR enabled file system. Do not check Enforce HTTP Secure when
enabling DHSM on the NAS Server.

The examples illustrate setting the retention date and time on a file to set its lock state. In the flrapply CLI example, an SMB file is set to the lock state with a retention date and time of 12:00 PM May 8, 2024. The second example illustrates the Windows Explorer FLR Attributes tab enhancement in the file properties window. The tab displays the FLR expiration date of the file. The example illustrates the retention date and time being extended on the file to 12:00 PM Aug 8, 2024. As with NFS, when specifying file retention dates and times, they must be within the Minimum and Maximum Retention Period values. If not, the Retention Period settings defined for the file system are used to lock the file.
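
As an illustration only, an flrapply invocation might look like the line below. The argument form and order are assumptions, not confirmed syntax; check the FLR Toolkit documentation or its help output for the exact usage on your version.

    REM Hypothetical sketch: lock an SMB file until 12:00 PM May 8, 2024.
    REM The argument form is an assumption; consult the FLR Toolkit help output.
    flrapply \\nas01\flr_share\contract.pdf "05/08/2024 12:00"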


Set File State - Automated

Automation to lock unmodified files


Scan interval for unmodified files
Automatically deletes expired files

Setting automatic lock and delete

Files can be locked through automation on FLR enabled file systems using options
available on the FLR tab of the file system Properties. The automation options are
disabled by default.

When the Auto-lock New Files option is enabled, the Auto-lock Policy Interval
configuration is exposed. The system automatically locks files if they are not
modified for a user specified time period, defined by the Auto-lock Policy Interval.
Automatically locked files use the Default Retention Period setting. Files in append-
only mode are also subject to automatic locking.

When enabled, the Auto-delete Files When Retention Ends option automatically
deletes locked files after their retention date and time have expired. The auto-
delete happens at 7-day intervals. Its timer starts when the auto-delete option is
enabled.


Scalability, Performance and Compliance Key Points

54. FAST Cache


e. FAST Cache is a secondary cache created from SAS Flash 2 drives that
extends the storage system caching capacity.
f. The storage system identifies LUN data that is more frequently
accessed and services any subsequent requests from the FAST Cache.
g. FAST Cache operations include host reads/writes, FAST Cache promotion,
FAST Cache flush, and FAST Cache cleaning. In addition, the FAST Cache
can be expanded, shrunk, or deleted.
55. Host I/O Limits
h. Dell Unity XT Host I/O Limits is a feature that limits I/O to storage resources:
LUNs, attached snapshots, and VMFS datastores.
i. Only one Host I/O limit policy can be applied to a storage resource. Policies
can be set by:
- Throughput, in IOs per second (IOPS)
- Bandwidth, defined by Kilobytes or Megabytes per second (KBPS or
MBPS)
- A combination of both types of limits
j. There are two Host I/O Limit policy types: Absolute and Density-based. In
addition, a policy can share the same limit(s) with all assigned storage
resources.
k. The Burst feature allows for one-time exceptions to Host I/O Limits. The
feature can be set at some user-defined frequency for each Host I/O Limit
policy.
56. UFS64 File System Extension and Shrink
l. In Dell Unity XT systems, the UFS64 architecture enables the extension of
file systems.
– A storage administrator can manually extend the size of a provisioned file
system.
– Thin file systems are automatically extended by the system based on the
ratio of used-to-allocated space.

m. In Dell Unity XT, the UFS64 architecture enables the reduction of the space
the file system uses from a storage pool.
- A storage administrator can manually shrink the size of a provisioned file
system.
- Thin-provisioned file systems are automatically shrunk by the system
when the used space is less than 70%.
57. File-level Retention (FLR)

n. Dell Unity XT supports the configuration of File-level Retention (FLR) during the creation of a file system.
o. File-level Retention (FLR) protects files from modification or deletion through
SMB, NFS, or FTP access based on a specified retention date and time.
p. There are two different types of FLR: FLR-E (Enterprise) and FLR-C
(Compliance).
q. Files within an FLR enabled file system have different states: Not Locked,
Append-only, Locked, and Expired.

For more information, see the Dell EMC Unity Family Configuring
Pools, Dell EMC Unity: NAS Capabilities, and Dell EMC Unity:
File-Level Retention (FLR) on the Dell Technologies Support site.

Data Reduction


General Data Reduction Overview

In general, Data Reduction reduces the amount of physical storage that is required
to save a dataset. Data Reduction helps reduce the Total Cost of Ownership (TCO)
of a Dell Unity XT storage system.

Data Reduction is achieved using the following methods:


• Deduplication uses algorithms to analyze data, detect patterns, and store only a single instance of each data pattern.
• Zero Detection logically detects and discards consecutive zeros, saves only
one instance, and uses pointers.
• Compression encodes data using fewer bits than the original representation.
• Advanced Deduplication deduplicates data blocks within a given storage
resource that do not contain internally-defined data patterns.

When Data Reduction is selected on Dell Unity XT systems, it enables Deduplication, Zero Detection, and Compression.

Advanced Deduplication requires Data Reduction to be enabled on the resource but can be enabled or disabled independently of the Data Reduction setting.
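
A minimal UEMCLI sketch of enabling both settings on an existing LUN. The -dataReduction and -advancedDedup option names are assumptions based on common Unity CLI conventions; verify them with the set -help output for your release:

    # Hypothetical sketch: enable Data Reduction and Advanced Deduplication on LUN lun_1
    uemcli -d <mgmt_ip> -u admin -p <password> \
        /stor/prov/luns/lun -id lun_1 set -dataReduction yes -advancedDedup yes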


Data Reduction Overview in Dell Unity XT

[Diagram: the Data Reduction algorithm applied to All-Flash and Hybrid storage resources - Thin LUNs, Thin LUNs within Consistency Groups, Thin File Systems, and Thin VMware VMFS and NFS Datastores.]

Dell Unity XT Data Reduction:


• Is licensed with all physical Dell Unity XT systems at no additional cost.
• Is easy to manage and is intelligently controlled by the storage system.
• Is configured through Unisphere, Unisphere CLI, or REST API.
• Is supported on All-Flash and Hybrid pools.
• Is supported on Thin LUNs, Thin LUNs within a Consistency Group, Thin File
Systems, and Thin VMware VMFS and NFS Datastores.
• Provides Data Reduction savings on Snapshots and Thin Clones.
• Is not available on the Dell UnityVSA.

Data Reduction can also be enabled on Block and File storage resources
participating in replication sessions. The source and destination storage resources
in a replication session are independent. Data Reduction with or without the
Advanced Deduplication option can be enabled or disabled separately on the
source and destination resource.


Considerations

• Hybrid pools created on Unity XT model systems support Data Reduction with
and without Advanced Deduplication enabled on Traditional or Dynamic pools.
• To support Data Reduction, the pool must contain a flash tier and the total
usable capacity of the flash tier must meet or exceed 10% of the total pool
capacity.
− Data Reduction can be enabled on an existing resource if the flash capacity
requirement is met.
• The Advanced Deduplication switch is available only on:

− Dynamic or Traditional pools in Unity XT 380F, 480F, 680F, and 880F systems
− Dynamic pools in Unity All-Flash 450F, 550F, and 650F systems
− All-Flash and Hybrid pools in Unity Hybrid 380, 480, 680, and 880 systems

Go to: The About Data Reduction and Advanced Deduplication section of the Dell Unity XT Family Configuring Pools documentation. For a deeper dive, review the Dell Unity XT: Data Reduction white paper.


Data Reduction Theory of Operation

For Data Reduction enabled storage resources, the Data Reduction process occurs
during the System Cache proactive cleaning operations or when System Cache is
flushing cache pages to the drives within a Pool. The data in this scenario may be
new to the storage resource, or the data may be an update to existing blocks of
data currently residing on disk.

In either case, the Data Reduction algorithm occurs before the data is written to the
drives within the Pool. During the Data Reduction process, multiple blocks are
aggregated together and sent through the algorithm. After determining if savings
can be achieved or data must be written to disk, space within the Pool is allocated
if needed, and the data is written to the drives.

[Diagram: data from Data Reduction enabled storage resources flows through the Data Reduction algorithm before being written to an All-Flash or Hybrid Flash pool.]

Data Reduction process

Process:
1. System write cache sends data to the Data Reduction algorithm during proactive cleaning or flushing.
2. Data Reduction logic determines any savings.
3. Space is allocated in the storage resource for the dataset if needed, and the data is sent to the disk.


Data Reduction - Deduplication

Data is sent to the Data Reduction algorithm during proactive cleaning or flushing
of write path data.

In the example, an 8 KB block enters the Data Reduction algorithm and Advanced
Deduplication is disabled.
• The 8 KB block is first passed through the deduplication algorithm. Within this
algorithm, the system determines if the block consists entirely of zeros, or
matches a known pattern within the system.
• If a pattern is detected, the private space metadata of the storage resource is
updated to include information about the pattern, along with information about
how to re-create the data block if it is accessed in the future.
• Also, when deduplication finds a pattern match, the remainder of the Data
Reduction feature is skipped for those blocks which saves system resources.
None of the 8 KB block of data is written to the Pool at this time.
• If a block was allocated previously, then the block can be freed for reuse. When
a read for the block of data is received, the metadata is reviewed, and the block
will be re-created and sent to the host.
• If a pattern is not found, the data is passed through the Compression Algorithm.
If savings are achieved, space is allocated on the Pool to accommodate the
data.
• If the data is an overwrite, it may be written to the original location if it is the
same size as before.

The example displays the behavior of the Data Reduction algorithm when
Advanced Deduplication is disabled.


[Flowchart: an 8 KB block enters the deduplication algorithm, which looks for zeros and common patterns. If a pattern is detected, the private space metadata is updated to include a pattern reference and processing ends. If a pattern is not found, the block passes to the compression algorithm. Data Reduction enabled; Advanced Deduplication disabled.]

Data Reduction algorithm behavior when Advanced Deduplication is disabled


Data Reduction - Advanced Deduplication

If an 8 KB block is not deduplicated by the zero and common pattern deduplication algorithm, the data is passed into the Advanced Deduplication algorithm.

Each 8 KB block receives a fingerprint, which is compared to the fingerprints for the
storage resource. If a matching fingerprint is found, deduplication occurs. The
private space within the resource is updated to include a reference to the block of
data residing on disk. No data is written to disk at this time.

Storage resource savings are compounded as deduplication can reference compressed blocks on disk. If a match is not found, the data is passed to the compression algorithm. Advanced Deduplication only compares and detects duplicate data that is found within a single storage resource, such as a LUN or File System.

The fingerprint cache is a component of the Advanced Deduplication algorithm. It is a region in system memory that is reserved for storing fingerprints for each storage resource with Advanced Deduplication enabled. There is one fingerprint cache per storage processor, and it contains the fingerprints for storage resources residing on that SP.

[Flowchart: blocks that the deduplication algorithm does not match ("pattern not found") receive a fingerprint calculation. The fingerprint is compared against the fingerprint cache. On a match, the private space of the resource is updated to reference the existing data and processing ends. On no match, the block passes to the compression algorithm and the fingerprint cache is updated.]

Data Reduction algorithm behavior when Advanced Deduplication is enabled


Through machine learning and statistics, the fingerprint cache determines which
fingerprints to keep, and which ones to replace with new fingerprints. The
fingerprint cache algorithm learns which resources have high deduplication rates
and allows those resources to consume more fingerprint locations.
• If no fingerprint match is detected, the blocks enter the compression algorithm.
• If savings can be achieved, space is allocated within the Pool which matches
the compressed size of the data, the data is compressed, and the data is written
to the Pool. When Advanced Deduplication is enabled, the fingerprint for the
block of data is also stored with the compressed data on disk.
• The fingerprint cache is then updated to include the fingerprint for the new data.

Compression does not compress data if no savings can be achieved. In this instance, the original block of data will be written to the Pool. Waiting to allocate space within the resource until after the compression algorithm is complete helps to not over-allocate space within the storage resource.


Read Operation

The example shows the process for a host Read operation.

• The host requests a read of an LBA on a storage resource.
• The system determines where the data is located: in system cache or on the pool.
• If the data is in cache in its original form, it is sent directly to the host.
• If the data is not in cache but on the pool, a normal read occurs: the data passes through the Data Reduction algorithm and is copied into system cache before being returned to the host.

Read operation with Data Reduction enabled

Important: Logical Block Addressing (LBA) specifies the location of blocks of data that are stored on a storage resource.


Enable Data Reduction and Advanced Deduplication on Supported Storage Resources

Hybrid Flash Pools

Data Reduction can be enabled on resources that are built from hybrid flash pools
within the Dell Unity XT 380, 480, 680 and 880 systems.

The properties page of a multi-tiered pool that includes SAS Flash 3 and SAS drives

Flash Capacity

To support Data Reduction, the proportion of flash capacity on the pool must be
equal to or exceed 10% of the total capacity of the pool.


This Flash Percent (%) value allows enabling data reduction for resources that are built from the pool

Storage Resource

Enabling Data Reduction is only possible if the pool flash percent value requirements are met.
• For pools with a lower flash capacity, the feature is unavailable and a message is displayed (an example follows).
• Advanced Deduplication is also supported for the data reduction enabled resources.


Both Data Reduction and Advanced Deduplication can be enabled for a LUN built from Pool 0


Verify Pool Flash Capacity Utilization

The proportion of flash capacity utilization can also be verified from the Details
pane of a selected pool.

Pools page with selected pool showing the flash capacity utilization on the details pane

In the example, Pool 1 is selected and the details pane shows that the pool has
57% of flash capacity utilization.


Identify Data Reduction Savings in Storage Resources

List of Storage Resources

Add the Data Reduction and Advanced Deduplication columns to the resources
page, to verify which resources have the features enabled.

LUNs page showing the Data Reduction and Advanced Deduplication columns

In the example, all the LUNs created on the dynamic pool Pool 0 are configured
with data reduction and advanced deduplication.

Storage Resource Properties

To monitor the Data Reduction Savings, select the storage resource and open the
properties page. The savings are reported in GBs, percent savings, and as a ratio
on the General tab.


LUN properties showing the Data Reduction Savings on the General tab

In the example, the properties of LUN-1 show a data reduction savings of 38.5 GB.
The percentage that is saved and ratio reflect the savings on the storage resource.


Configuring Data Reduction on an Existing Storage Resource

Data Reduction and Advanced Deduplication on a LUN can be enabled or disabled at any time. As noted earlier, the Advanced Deduplication setting is only available once Data Reduction is enabled and the configuration supports it.

To enable or disable Data Reduction and Advanced Deduplication on an existing storage resource (a LUN in the example), review the properties of the LUN from the Block page. Depending on whether Data Reduction and Advanced Deduplication are disabled or enabled on the storage resource, the respective boxes are either cleared or checked.

Advanced Deduplication can be enabled or disabled independently from the Data Reduction setting.
• When enabled on an existing LUN, all existing data is left in its current state and
only incoming data is subject to Data Reduction. Existing data is subject to Data
Reduction upon a rewrite of new data.
• If disabling Data Reduction, data is left in its current state on the disk until the
data is overwritten or migrated.

To remove Data Reduction savings for block resources, use the Move operation.
For file resources, since there is no Move operation, users can use host-based
migration or replication. For example, you can asynchronously replicate a file
system to another file system within the same pool using UEMCLI commands.

Data Reduction stops for new writes when sufficient resources are not available
and resumes automatically after enough resources are available.
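
As an illustration only, a local asynchronous replication session created with UEMCLI to re-write a file system's data has the general shape below. The option names shown are assumptions; confirm the exact syntax with the create -help output before use.

    # Hypothetical sketch: replicate file system resource res_1 to res_2 so the
    # data is rewritten (and thereby data-reduced) on the destination.
    # Option names are assumptions; verify with:
    #   uemcli /prot/rep/session create -help
    uemcli -d <mgmt_ip> -u admin -p <password> /prot/rep/session create \
        -name fs_rewrite -srcRes res_1 -dstRes res_2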


[Screenshots: a LUN with no Data Reduction enabled, and the same LUN after selecting the Data Reduction box, which makes the Advanced Deduplication box available for selection.]

Configuring Data Reduction on an Existing Storage Resource


Data Reduction and Advanced Deduplication with Consistency Groups

To review which Consistency Groups contain Data Reduction enabled LUNs, select
the Consistency Group tab, which is found on the Block page.

Data Reduction and Advanced Deduplication with Consistency Groups

On this page, columns that are named Data Reduction and Advanced
Deduplication can be added to the current view.
• Click the Gear Icon and select Data Reduction or Advanced Deduplication
under Column.
• The Data Reduction and Advanced Deduplication columns have three potential
entries, No, Yes, and Mixed.

− No is displayed if none of the LUNs within the Consistency Group has the
option enabled.
− Yes is displayed if all LUNs within the Consistency Group have the option
enabled.
− Mixed is displayed when the Consistency Group has some LUNs with Data
Reduction enabled and other LUNs with Data Reduction disabled.


Data Reduction and Advanced Deduplication Using Local LUN Move

Data Reduction and Advanced Deduplication Using Local LUN Move

The Local LUN Move feature, also known as Move, provides native support for
moving LUNs and VMFS Datastores online between pools or within the same pool.
This ability allows for manual control over load balancing and rebalancing of data
between pools.

Local LUN Move leverages Transparent Data Transfer (TDX) technology, a multi-
threaded, data copy engine. Local LUN Move can also be leveraged to migrate a
Block resource's data to or from a resource with Data Reduction or Advanced
Deduplication enabled.

If Advanced Deduplication is supported and enabled, the data also passes through
the Advanced Deduplication algorithm. This allows space savings to be achieved
during the migration.

When migrating to a resource with Data Reduction disabled, all space savings that
are achieved on the source are removed during the migration.


The example shows DR_AD_LUN-2, which has neither Data Reduction nor Advanced Deduplication set, being moved to a pool to pass the existing data on the LUN through the Data Reduction and Advanced Deduplication algorithms.


Expand an All-Flash Pool with Data Reduction Enabled Storage Resources

Launch Wizard

Dell Unity XT supports the expansion and conversion of an All-Flash pool to a Hybrid pool.

To add drives from different tiers to an All-Flash pool (with Data Reduction enabled
resources), select the pool and Expand Pool.

Details pane of a dynamic pool showing the Flash Percent, and number of storage resources built
from it

In the example, the All-Flash pool Pool 2 has four LUNs with Data Reduction and
Advanced Deduplication enabled.

Select Tiers

Select the storage tiers with available drives to expand the pool, and optionally
change the hot spare capacity for new added tiers.


Expand Pool wizard with the selected tiers

The Performance and Capacity tiers are selected. The Extreme Performance
tier cannot be selected since there are no unused drives.

Select Drives

Select the amount of storage from each tier to add to the All-Flash pool. The
number of drives must comply with the RAID width plus the hot spare capacity that
is defined for the tier.


Expand Pool wizard selection of drives from selected tiers

In the example, six SAS drives comply with the RAID 5 (4+1) plus one hot spare
requirement. Seven NL-SAS drives comply with the RAID 6 (4+2) plus one hot
spare requirement.

If the expansion does not cause an increase in the spare space the pool requires, the new free space is made available for use. When extra drives increase the spare space requirement, a portion of the space being added is reserved. The reserved space is equal to the size of one or two drives for every 32 drives added.

Summary

Verify the proposed configuration with the expanded drives. Select Finish to start
the expansion process.


Expand Pool wizard summary

In the example, the expansion process adds 13 drives to the All-Flash pool and
converts it to a multi-tiered pool.

Pool Expanded

Verify the pool conversion on the details pane. The number of tiers and drives
increased. Observe that the flash percent supports the Data Reduction.


Details pane of a selected pool showing the Flash Percent, number of tiers and drives

In the example, the expansion included two new tiers and added 13 new drives to
the pool. The flash percent is over 10% which ensures that Data Reduction is
supported.

If the pool contains Data Reduction enabled resources and the resulting flash
percent would be below 10%, the expansion would not be allowed.


Considerations About Expansion of Dynamic Pools with Data Reduction Enabled Resources

If the additional capacity drops the Flash Percent value below 10%, the expansion
of a dynamic pool with Data Reduction enabled resources is blocked.
• To support Data Reduction, the proportion of flash capacity must be equal to or
exceed 10% of the total capacity of the pool.
• If the requirement is not met, the wizard displays a warning message when
trying to advance to the next step.

Expand Pool wizard with warning message

In the example, the Flash Percent capacity utilization of Pool 0 is 19%, and the
addition of 14 NL-SAS drives reduces the value below the 10% requirement.

To conclude the expansion, select a number of drives that keep the flash percent
capacity utilization within the Data Reduction requirements.


Identify Flash Tier Free Space Considerations

Each resource that is created on the system uses metadata.

• The Data Reduction algorithm creates additional metadata on a resource for tracking purposes.
• Data Reduction and Advanced Deduplication generate additional metadata, which consumes additional space.

In Hybrid pools, the metadata has priority over user data for placement on the
fastest drives.
• Algorithms ensure that metadata go to the tier that provides the fastest access.
Usually this is the Extreme Performance (SAS Flash) tier.
• If necessary, user data is moved to the next available tier (Performance or
Capacity), to give space to metadata created on the pool.

The system monitors how much metadata is created as the resources grow, and
the pool space is consumed.
• The system automatically estimates how much metadata can be created based
on the free capacity.

− The estimate considers the amount of metadata that is generated, the pool
usable capacity, and the free space within each tier.


Flash Tier Free Space Considerations - Metadata Relocation

FAST VP tries to relocate user data out of the Flash tier to free space for metadata.

FAST VP tab showing the relocation of data blocks

In extreme cases, when FAST VP cannot evacuate enough space, performance issues may be seen. When the amount of metadata on the pool exceeds the capacity of the Flash tier, the metadata is placed on the next available tier. The metadata is moved to spinning media.

There is a significant impact to the performance of the Data Reduction algorithm when metadata blocks are accessed from SAS or NL-SAS drives. The process takes CPU cycles away from user I/O.

In this situation, the pool status changes to OK, Needs Attention (seen from the
Pools page, and the pool properties General tab). A warning informs the
administrator to increase the amount of flash capacity within the pool to address
the issue.

In the example, the system identifies metadata blocks to move from the
Performance to the Extreme Performance tier.


Note: The access to metadata blocks that are written on spinning media is slow in comparison to Flash drives. There is a noticeable performance difference with the metadata paging into memory.


Viewing Storage Resource Properties - LUN

To review the status of Data Reduction and Advanced Deduplication on each of the
LUNs created on the system, go to the Block page in Unisphere. The page can be
accessed by selecting Block under Storage in the left pane.

The page contains three columns specific to Data Reduction.


• The Data Reduction column, which shows if Data Reduction is enabled or not
on the resource.
• The Advanced Deduplication column, which shows if Advanced Deduplication
is enabled.
• The Data Reduction Savings (GB) column, which shows the amount of
savings in GBs for the resource.

To add these and other columns to the view, click the Gear Icon in the upper right
portion of the LUNs tab and select the columns to add under the Columns option.

View LUN Data Reduction properties on LUNs page


Understanding Savings Reporting

View Data Reduction saving on Main tab

Data Reduction provides savings information at many different levels within the
system, and in many different formats.
• Savings information is provided at the individual storage resource, pool, and
system levels.
• Savings information is reported in GBs, percent savings, and as a ratio.
• Total GBs saved includes the savings due to Data Reduction on the storage
resource, Advanced Deduplication savings, and savings which are realized on
any Snapshots and Thin Clones taken of the resource.
• The percentage that is saved and the ratio reflect the savings within the storage
resource itself. All savings information is aggregated and then displayed at the
Pool level and System level.


Storage Resource Level - Block

Space savings information in the three formats is available within the Properties
window of the storage resource.

For LUNs, access the Properties page either from the Block page or from the LUN tab within the Consistency Group Properties window.

Shown is the total GBs saved, which includes savings within data used by
Snapshots and Thin Clones of the storage resource. Also shown is the % saved
and the Data Reduction ratio, which both reflect the savings within the storage
resource. File System and VMware VMFS Datastores display the same
parameters.


View Data Reduction properties on General tab within LUN Properties

Data Reduction savings are shown on the General tab within the LUN Properties
Window.

The storage resource properties terms are described below.


• Size/Capacity – The client visible size of a storage resource
• Allocated – The amount of storage that has been allocated to host data
• Total Pool Space Used – Space that is used by the resource on disk, including
space for Snapshots, Thin Clones, and overhead (metadata)
• Preallocated – Space that is reserved but not yet used by the resource and non-
reclaimed (freeable) space


• Non-base Space Used – Space that is consumed by Snapshots and Thin Clones
• Data Reduction Savings – The % saved and the Ratio reflect the average space that is saved across all Data Reduction enabled storage resources


Data Reduction Savings - Pool Level

Data Reduction information is also aggregated at the Pool level on the Usage tab.

Savings are reported in the three formats, including the GBs saved, % savings, and
ratio.
• The GBs savings reflect the total amount of space saved due to Data Reduction
on storage resources and their Snapshots and Thin Clones.
• The % saved and the Ratio reflect the average space that is saved across all
Data Reduction enabled storage resources.

View Data Reduction Savings on Usage tab of Pool Properties


Data Reduction Savings - System Level

Data Reduction Savings information is also available at the System Level.

Data Reduction Savings at the system level

System level Data Reduction Savings information is displayed within the System
Efficiency view block that is found on the system Dashboard page. If the view
block is not shown on your system, you can add it by selecting the Main tab,
clicking Customize, and adding the view block.

The system level aggregates all savings across the entire system and displays
them in the three formats available, GBs saved, % saved, and ratio.
• For the GBs saved, this value is the total amount of space saved due to Data
Reduction, along with savings achieved by Snapshots and Thin Clones of Data
Reduction enabled storage resources.
• The % savings and ratio are the average savings that are achieved across all
data reduction enabled storage resources.


Calculating Data Reduction Savings

Overview

The space reporting updates affect the System, Pool, and Storage Resource values. Users can use the formulas displayed here to calculate and verify the Data Reduction Savings percentage and ratio for the System, Pools, and Storage Resources.


Calculating Data Reduction savings


Data Reduction Savings - Savings Ratio

Example

The example shows the calculation for Data Reduction savings ratio on a LUN.

Savings Ratio = (Total Pool Space Used + Data Reduction Savings) / Total Pool Space Used : 1

= (44.7 GB + 45.7 GB) / 44.7 GB : 1

= 90.4 GB / 44.7 GB = 2.02:1

Calculating savings ratio example


Data Reduction Savings - Saving Percentage

Example

The example displays the formula for calculating the Data Reduction percentage
savings.

Savings Percentage = Data Reduction Savings / (Data Reduction Savings + Total Pool Space Used) * 100

= 45.7 GB / (45.7 GB + 44.7 GB) * 100

= 45.7 GB / 90.4 GB * 100 = 51%

Calculating savings percentage example
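
As a quick sanity check, both formulas can be evaluated with standard awk using the values from the examples:

    awk 'BEGIN {
        used = 44.7; saved = 45.7   # Total Pool Space Used and Data Reduction Savings, in GB
        printf "Savings ratio: %.2f:1\n", (used + saved) / used
        printf "Savings percentage: %.0f%%\n", saved / (saved + used) * 100
    }'

This prints a 2.02:1 ratio and 51% savings, matching the worked examples above.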


Data Reduction and Advanced Deduplication with Replication

Overview

Storage resources using Data Reduction can be replicated using any supported
replication software. Native Synchronous Block Replication or Native
Asynchronous Replication to any supported destination is supported.

All data replicated, local or remote, is first re-hydrated (deduplicated data is reconstructed, compressed data is uncompressed), and then replicated to the destination. This method of replicating Data Reduction enabled storage resources ensures that all replication topologies are supported as if Data Reduction is not enabled on the resource.

Replication considerations with Data Reduction and Advanced Deduplication

Some considerations when using Replication are listed below.


• Replication is supported on resources which support Data Reduction and/or
Advanced Deduplication.
• If replicating to a destination storage system with no efficiencies applied, the
data is not deduplicated or compressed.


• Replication can occur to or from a Dell Unity XT that does not support Data
Reduction.
• Data Reduction can be enabled or disabled on source or destination
independently.


Data Reduction and Advanced Deduplication with Native File and Block Import

When configuring an Import Session, Data Reduction and Advanced Deduplication are supported on the destination as long as the destination system and Pool configuration supports it.

When creating an Import Session, if the destination resource supports Data Reduction, a checkbox is available to enable it on the destination resource. An option for Advanced Deduplication is also shown for configurations which support it.

As data is migrated from the source VNX system to the Dell Unity XT system, it
passes through the Data Reduction algorithm as it is written to the Pool.

[Diagram: File Import and SAN Copy Import sessions from Dell EMC VNX2 Series systems to Dell EMC Unity XT systems.]

Data Reduction and Advanced Deduplication with Native File and Block Import

FAST VP


FAST VP Overview

When reviewing the access patterns for data within a system, most access patterns
show a basic trend. Typically, the data is most heavily accessed near the time it
was created, and the activity level decreases as the data ages. This trending is
also seen as the lifecycle of the data. Dell EMC Unity Fully Automated Storage
Tiering for Virtual Pools - FAST VP monitors the data access patterns within pools
on the system.

FAST VP dynamically matches the performance requirements to sets of drives.

[Diagram: a pool before and after FAST VP relocation across Flash, SAS, and NL-SAS drives - the most active data is placed on Flash drives, moderately active data on SAS drives, and the least active data on NL-SAS drives.]

FAST VP classifies drives into three categories, called tiers. These tiers are:
• Extreme Performance Tier – Comprised of Flash drives


• Performance Tier – Comprised of Serial Attached SCSI (SAS) drives
• Capacity Tier – Comprised of Near-Line SAS (NL-SAS) drives

FAST VP helps to reduce the Total Cost of Ownership (TCO) by maintaining performance and efficiently using the configuration of a pool.
pools with a mix of Flash, SAS, and NL-SAS drives. Creating mixed pools reduces
the cost of a configuration by reducing drive counts and using larger capacity
drives. Data requiring the highest level of performance is tiered to Flash, while data
with less activity resides on SAS or NL-SAS drives.

Dell EMC Unity has a unified approach to create storage resources on the system.
Block LUNs, file systems, and the VMware datastores can all exist within a single
pool, and can all benefit from using FAST VP. In system configurations with
minimal amounts of Flash, FAST VP uses the Flash drives for active data,
regardless of the resource type. For efficiency, FAST VP uses low cost spinning
drives for less active data. Access patterns for all data within a pool are compared
against each other. The most active data is placed on the highest performing drives
according to the storage resource’s tiering policy. Tiering policies are explained
later in this document.


Tiering Policies

FAST VP Tiering policies determine how the data relocation takes place within the
storage pool. The available FAST VP policies are displayed here.

The Tier label is used to describe the various categories of media used within a
pool. In a physical system, the tier directly relates to the drive types used within the
pool. The available tiers are Extreme Performance Tier using Flash drives, the
Performance Tier using SAS drives, and the Capacity Tier using NL-SAS drives.

On a Dell EMC UnityVSA system, the storage tier of the virtual drives must be assigned manually. The assigned tier should match the underlying characteristics of the virtual disk.

The table shows the available tiering policies with their descriptions and the initial tier placement that corresponds to each policy.

Tiering Policy | Corresponding Initial Tier Placement | Description

Highest Available Tier | Highest Available Tier | Initial data placement and subsequent data relocations set to the highest performing tier of drives with available space

Auto-Tier | Optimized for pool performance | Initial data placement optimizes the pool capacity, then relocates slices to different tiers based on the activity levels of the slices

Start High then Auto-Tier [Default] | Highest Available Tier | Initial data placed on slices from the highest tier with available space, then relocates data based on performance statistics and slice activity

Lowest Available Tier | Lowest Available Tier | Initial data placement and subsequent relocations preferred on the lowest tier with available space

There are four Tiering Policies (a CLI sketch for assigning a policy follows this list):

• Use the Highest Available Tier policy when quick response times are a priority.
• The Auto-Tier policy automatically relocates data to the most appropriate tier based on the activity level of each data slice.
• Start High, then Auto-Tier is the recommended policy for each newly created pool. It combines the behavior of the Highest Available Tier and Auto-Tier policies.
• Use the Lowest Available Tier policy when cost effectiveness is the highest priority. With this policy, data is initially placed on the lowest available tier with capacity.
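
A minimal UEMCLI sketch of assigning a tiering policy to an existing LUN. The -fastvpPolicy option name and the startHighThenAuto value are assumptions based on common Unity CLI conventions; verify them with the set -help output for your release:

    # Hypothetical sketch: set the Start High then Auto-Tier policy on LUN lun_1
    uemcli -d <mgmt_ip> -u admin -p <password> \
        /stor/prov/luns/lun -id lun_1 set -fastvpPolicy startHighThenAuto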


Supported RAID Types and Drive Configurations

Users can select the RAID protection for each of the tiers being configured when creating a pool. A single RAID protection is selected for each tier, and after the RAID configuration is selected and the pool is created, it cannot be changed. A RAID protection can be selected again only when the pool is expanded with a new drive type.

This table shows the supported RAID types and drive configurations.

Remember to consider the performance, capacity, and protection levels these configurations provide when deciding on the RAID configuration to adopt.
• RAID 1/0 is suggested for applications with large amounts of random writes, as
there is no parity write penalty in this RAID type.
• RAID 5 is preferred when cost and performance are a concern.
• RAID 6 provides the maximum level of protection against drive faults of all the
supported RAID types.

When considering a RAID configuration which includes many drives (12+1, 12+2, 14+2), consider the tradeoffs that the larger drive counts carry. Using larger drive sizes can lead to longer rebuild times and larger fault domains.

RAID Type Default Configuration Supported Configurations

RAID 1/0 4+4 1+1*, 2+2, 3+3, 4+4

RAID 5 4+1 4+1, 8+1, 12+1

RAID 6 6+2 4+2, 6+2, 8+2, 10+2, 12+2, 14+2


FAST VP Management

System Global Settings

The user can change the system-level data relocation configuration using the
Global settings window.

Select the Settings option on the top of the Unisphere page to open the Settings
window.

Users have the option to (a CLI sketch follows this list):


• Manually pause and resume the scheduled data relocations
• Change the data relocation rate
• Disable and re-enable scheduled data relocations
• Modify the data relocation schedule.
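
A minimal UEMCLI sketch of inspecting and adjusting the system-level FAST VP settings. The /stor/config/fastvp object holds the FAST VP general settings in the Unity CLI, but the specific set option shown is an assumption; verify it with the -help output for your release:

    # Show the current FAST VP general settings
    uemcli -d <mgmt_ip> -u admin -p <password> /stor/config/fastvp show

    # Hypothetical sketch: pause scheduled data relocations
    uemcli -d <mgmt_ip> -u admin -p <password> /stor/config/fastvp set -paused yes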

Storage Pool

FAST VP relocation at the pool level can also be verified from the pool properties window.


In Unisphere, select a pool and click the edit icon to open its properties window.
Then select the FAST VP tab.

Users can view the following information for a pool:


• Pool participation in scheduled data relocations
• Estimated time needed for scheduled data relocations
• Start and end time for the most recent data relocation
• Number and type of disks in each tier
• Amount of data in the pool scheduled to move to higher and lower tiers
• Amount of data in the pool scheduled to be rebalanced within a tier

You also have the option to manually start a data relocation by clicking the Start
Relocation button. To modify the FAST VP settings, click the Manage FAST VP
system settings link in the upper right side of the window.

Block or File Resource

At the storage resource level, the user can change the tiering policy for the data
relocation.


In Unisphere, select the block or file resource and click the pencil icon to open its
properties window. Then select the FAST VP tab.

From this page, it is possible to edit the tiering policy for the data relocation.

The example shows the properties for LUN_2. The FAST VP page displays the
information of the tiers that are used for data distribution.

Thin Clones


Thin Clones Overview

A Thin Clone is a read/write copy of a thin block storage resource that shares
blocks with the parent resource. Thin Clones created from a thin LUN, Consistency
Group, or the VMware VMFS datastore form a hierarchy.

A Base LUN family is the combination of the Base LUN, and all its derivative Thin
Clones and snapshots. The original or production LUN for a set of derivative
snapshots, and Thin Clones is called a Base LUN. The Base LUN family includes
snapshots and Thin Clones based on child snapshots of the storage resource or its
Thin Clones.

Data available on the source snapshot is immediately available to the Thin Clone.
The Thin Clone references the source snapshot for this data. Data resulting from
changes to the Thin Clone after its creation is stored on the Thin Clone.

A snapshot of the LUN, Consistency Group, or VMFS datastore that is used for the Thin Clone create and refresh operations is called a source snapshot. The original parent resource is the parent resource or Thin Clone of the snapshot on which the Thin Clone is based.

Thin Clones are created from attached read-only or unattached snapshots with no
auto-deletion policy and no expiration policy set. Thin Clones are supported on all
Dell Unity models including Dell UnityVSA.

In the example, the Base LUN family for LUN1 includes all the snapshots and Thin
Clones that are displayed in the diagram.

[Diagram: Base LUN family for LUN 1 - read-only snapshots (Snap 1 through Snap 6) taken of LUN 1 and of its Thin Clones, with Thin Clone 1 used by Application 1 and Thin Clone 2 used by Application 2.]


Thin Clone Capabilities

A thin clone is displayed in the LUNs page. The page shows the details and
properties for the clone.

You can expand a thin clone by selecting the clone, then selecting the View/Edit
option. If a thin clone is created from a 100 GB Base LUN, the size of the thin clone
can be later expanded.

All data services remain available on the parent resource after the creation of the
thin clone. Changes to the thin clone do not affect the source snapshot, because
the source snapshot is read-only.

With thin clones, users can make space-efficient copies of the production environment. Thin clones use pointer-based technology, which means a thin clone does not consume much space from the storage pool. The thin clone shares space with the base resource, rather than allocating a copy of the source data for itself, which benefits the user.

Users can also apply data services to thin clones. Data services include; host I/O
limits, host access configuration, manual or scheduled snapshots, and replication.
With the thin clone replication, a full clone is created on the target side which is an
independent copy of the source LUN.

A maximum of 16 thin clones per Base LUN can be created. The combination of
snapshots and thin clones cannot exceed 256.

Thin Clone Capabilities

Thin Clone operations: Users can create, refresh, view, modify, expand, and delete a thin clone.

Data services: All data services remain available on the parent resource after the thin clone creation. Most LUN data services can be applied to thin clones: host I/O limits, host access configuration, manual/scheduled snapshots, and replication.

Space savings: Only changed data consumes space.

Maximum thin clones per Base LUN: 16

Maximum snapshots per LUN: 256

Maximum snapshots + thin clones per LUN: 256




Recommended Uses for Thin Clones

The use of Thin Clones is beneficial for the types of activities that are explained
here.

Thin Clones allow development and test personnel to work with real workloads and
use all data services that are associated with production storage resources without
interfering with production.

For parallel processing applications that span multiple servers, the user can use
multiple Thin Clones of a single production dataset to achieve results more quickly.

An administrator can meet defined SLAs by using Thin Clones to maintain hot
backup copies of production systems. If there is corruption of the production
dataset, the user can immediately resume the read/write workload by using the
Thin Clones.

Thin Clones can also be used to build and deploy templates for identical or near-
identical environments.

Recommended Uses for Thin Clones

Development and test environments: Work with real workloads and all data services that are associated with production storage, with no effect on production.

Parallel processing: Parallel processing applications that span multiple servers can use multiple Thin Clones of a single production dataset to achieve results more quickly.

Online backup: Maintain hot backup copies of production systems; if production data is corrupted, immediately resume the read/write workload by using Thin Clones.

System deployments: Build and deploy templates for identical or near-identical environments.




Technical Comparison – Snapshots and Thin Clones

Description            Snapshots                            Thin Clones

Space-Efficient Data   Yes                                  Yes

Creation Time          Instantaneous                        Instantaneous

Delete Limitations     Automatic deletion of the source     Any copy in the tree can be
                       snapshot of a Thin Clone is not      deleted; the Base LUN cannot
                       allowed                              be deleted

Topology               Snap-of-snap                         Nested hierarchy of snaps
                                                            and Thin Clones

Any-to-Any Refresh     From the base LUN only               Yes; any Thin Clone can be
                                                            refreshed from any snapshot

Restore                Yes, snap to base LUN                Yes; a snap must be created
                                                            first, then the primary restored

Use Cases              Data protection                      Test/Dev




Theory of Operations: Create Thin Clones

The Create operation uses a Base LUN to build the set of derivative snapshots and
Thin Clones.

In this example, LUN1 is the Base LUN.

1. The first snapshot, Snap1, is created with read-only access. Snap1 is the
   source snapshot used to create Thin Clone1 [TC1]. The original parent
   resource is the resource or Thin Clone for the snapshot on which the Thin
   Clone is based. For Thin Clone1, the original parent is LUN1.
2. To create another Thin Clone, a snapshot [Snap2] is created from Thin Clone1
   [TC1]. Snap2 is the source snapshot for the creation of Thin Clone2 [TC2].
   Observe that for this Thin Clone, the original parent is Thin Clone1.
3. To create the third Thin Clone, another snapshot is created. Snap3 is created
   from Thin Clone2 and is used as the source snapshot for creating Thin Clone3
   [TC3]. Observe that the original parent for this Thin Clone is Thin Clone2.

The creation of snapshots of the production LUN works independently of Thin
Clones and of the snapshots created from Thin Clones.




Theory of Operations: Refresh Thin Clones - 1 of 2

Refreshing a Thin Clone updates the Thin Clone's data with data from a different
source snapshot. The new source snapshot must be related to the base LUN for
the existing Thin Clone. In addition, the snapshot must be read-only, and it must
have its expiration policy and automatic deletion disabled.

This example shows that the user is refreshing Thin Clone3 with the contents of
source Snap1.

After the Thin Clone is refreshed, the existing data is removed and replaced with
the Snap1 data. There are no changes to the data services configured in the Thin
Clone, and if the Thin Clone has derivative snapshots they remain unchanged.
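The refresh eligibility rules above can be summarized as three checks. The function below is a conceptual illustration only (not a Dell Unity API); the dictionary fields are hypothetical names for the snapshot properties the rules reference.

# Conceptual illustration of the Thin Clone refresh eligibility rules
# (not a Dell Unity API; the field names are hypothetical).
def can_refresh(thin_clone_base_lun, snap):
    """A snapshot can refresh a Thin Clone only if it belongs to the same
    Base LUN family, is read-only, and cannot expire or be auto-deleted."""
    if snap["base_lun"] != thin_clone_base_lun:
        return False              # must be related to the Thin Clone's base LUN
    if not snap["read_only"]:
        return False              # the source snapshot must be read-only
    if snap["auto_delete"] or snap["has_expiration_policy"]:
        return False              # retention must be "no automatic deletion"
    return True

snap1 = {"base_lun": "LUN1", "read_only": True,
         "auto_delete": False, "has_expiration_policy": False}
print(can_refresh("LUN1", snap1))   # True: Snap1 may refresh the Thin Clone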




Theory of Operations: Refresh Thin Clones - 2 of 2

In this example, the source snapshot of the Thin Clone changes. So instead of
being Snap3, the source snapshot is now Snap1.

Observe that the original parent resource does not change when a Thin Clone is
refreshed to a different source snapshot. The new source snapshot comes from the
same base LUN.




Theory of Operations: Refresh Base LUN

Refreshing a Base LUN updates the LUN's data with data from any eligible
snapshot in the Base LUN family, including a snapshot of a Thin Clone. The new
source snapshot must be related to the Base LUN family for the existing Thin
Clone. In addition, the snapshot must be read-only, and the retention policy must
be set to no automatic deletion.

This example shows the user refreshing LUN1 with the data from Snap3. When the
LUN is refreshed, the existing data is removed from LUN1 and replaced with the
data from Snap3.

There are no changes to the data services configured on the Thin Clone. If the Thin
Clone has derivative snapshots, the snapshots remain unchanged.




Thin Clone Considerations



• Certain properties cannot be changed at the Thin Clone level and are
dependent upon the Base LUN. If a user changes these properties on the Base
LUN, then the changes are reflected on the Thin Clone. These properties are:
− SP ownership
− FAST VP
− Data reduction settings
• Thin Clones are not supported for snapshots of thick LUNs. A Thin Clone is a
read/write copy of a thin block storage resource and cannot be created using
thick LUNs.
• At the time of the Thin Clone creation:
− The source snapshot must be read-only and expiration policy / automatic
deletion must be disabled.
− After the Thin Clone is created, the source snapshot can be deleted.
• A Thin Clone of a Thin Clone cannot be created before an intermediate
snapshot is created.
• A replicated Thin Clone becomes a Full clone on the destination storage
system.
• A production storage resource cannot be deleted if it has associated Thin
Clones.
• The Unisphere Move option is not supported for Thin Clones.




LUN Refresh Operation - 1 of 2

This page shows the Unisphere LUNs page with the example of a Base LUN and
its respective Thin Clones.

In the example, Base_LUN1 has two Thin Clones that were created at different times:
• Base_LUN1 has an allocated percentage of 63.1.
• TC1OriginalData was a clone of the original Base_LUN1 and has an allocation
  of 2.1 percent.
• TC2_AddedFiles was created after adding files to Base_LUN1.

In the top window, a snapshot that was taken of TC1OriginalData is used to
populate Base_LUN1 with the original data.

In the bottom window, Base_LUN1 has been selected and the Refresh option
is used to populate the base LUN.




LUN Refresh Operation - 2 of 2

In the top window, the SnapOriginalData resource has been selected. Note that the
Attached and Auto-Delete columns must display a No status.

The bottom window shows the results after Base_LUN1 has been updated with the
SnapOriginalData snapshot. The properties of Base_LUN1 show that the allocated
space is only 2.1% after the refresh.




Data Reduction and Advanced Deduplication with Snapshots and Thin Clones

Dell Unity Snapshots and Thin Clones are fully supported with data reduction and
Advanced Deduplication. Snapshots and Thin Clones also benefit from the space
savings that are achieved on the source storage resource.

Both Snapshots and Thin Clones support deduplicated blocks.


• Snapshot and Thin Clone metadata can reference deduplicated blocks so when
reading from a snapshot which references a deduplicated block, the block is re-
created and sent to the host.
• When a source resource receives a write to a deduplicated block which is
shared with a snapshot or Thin Clone, a normal redirect on write occurs. The
software then determines if a block must be allocated.

When writing to a Snapshot or Thin Clone, the I/O is subject to the same data
efficiency mechanism as the storage resource. Which efficiency algorithms are
applied depends on the Data Reduction and Advanced Deduplication settings of
the parent resource.



File System Quotas




File System Quotas Overview

• Track file system usage
  − Dell EMC Unity storage systems support file system quotas, which enable
    storage administrators to track and limit usage of a file system. Limiting
    usage is not the only application of quotas. The quota tracking capability can
    be useful for tracking and reporting usage by simply setting the quota limits
    to zero.
• Limit usage
  − Quota limits can be designated for users or a directory tree. Limits are
    stored in quota records for each user and quota tree. Limits are also stored
    for users within a quota tree.
• Usage determined by quota policy
  − Quota policies ensure that the file system is configured to use the quota
    policy that best suits the client environment. Users have a choice of File Size
    [the default] or the Blocks policy. The File Size quota policy calculates disk
    usage based on logical file sizes in 1 KB increments. The Blocks quota
    policy calculates disk usage in file system blocks, in 8 KB units. The short
    example after this list illustrates the difference.
• Quota limits
  − Hard and soft limits are set on the amount of disk space allowed for
    consumption.
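The following minimal calculation illustrates the difference between the two policies for a single file, assuming a 10,000-byte logical file size:

import math

FILE_SIZE_UNIT = 1024    # File Size policy counts logical size in 1 KB increments
BLOCK_SIZE = 8192        # Blocks policy counts 8 KB file system blocks

logical_size = 10_000    # bytes of data in the file

# File Size policy: logical size rounded up to the next 1 KB increment.
file_size_usage = math.ceil(logical_size / FILE_SIZE_UNIT) * FILE_SIZE_UNIT

# Blocks policy: capacity of the 8 KB blocks needed to hold the file.
blocks_usage = math.ceil(logical_size / BLOCK_SIZE) * BLOCK_SIZE

print(file_size_usage)   # 10240 bytes counted under File Size
print(blocks_usage)      # 16384 bytes counted under Blocks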




File System Quotas Configuration

Dell EMC recommends that quotas are configured before the storage system
becomes active in a production environment. Quotas can be configured after a file
system is created.

Default quota settings can be configured for an environment where the same set of
limits are applied to many users.

Open the Manage Quota Settings window:

1. Select the file system to edit.
2. Click the pencil icon to edit the selected file system.
3. Select the Quotas tab.
4. Click Manage Quota Settings.

These parameters can be configured from the Manage Quota Settings window:
• Quota policy: File size [default] or Blocks
• Soft limit
• Hard limit
• Grace period

The soft limit is a capacity threshold. When file usage exceeds the threshold, a
countdown timer begins. The timer, or grace period, continues to count down as
long as the soft limit is exceeded. However, data can still be written to the file
system. If the soft limit remains exceeded and the grace period expires, no new
data may be added to the particular directory. Users associated with the quota are
also prohibited from writing new data. When the capacity is reduced beneath the
soft limit before the grace period expires, access to the file system is allowed
again.

The grace period can be limited in days, hours, and minutes, or be unlimited. When
the grace period is unlimited, data can be written to the file system until the quota
hard limit is reached.

A hard limit is also set for each quota configured. When the hard limit is reached,
no new data can be added to the file system or directory. The quota must be
increased, or data must be removed from the file system before more data can be
added.
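The interaction of the soft limit, grace period, and hard limit can be condensed into a single decision function. The sketch below is conceptual only and simply mirrors the rules described above:

from datetime import datetime, timedelta

# Conceptual model of the quota write rules (not Dell Unity code).
def write_allowed(usage, soft, hard, soft_crossed_at, grace, now):
    """Return True if a new write should be accepted under the quota."""
    if usage >= hard:
        return False                      # hard limit reached: writes denied
    if usage >= soft and soft_crossed_at is not None:
        if grace is not None and now - soft_crossed_at > grace:
            return False                  # soft limit exceeded and grace expired
    return True                           # under the soft limit, or grace running

now = datetime.now()
crossed = now - timedelta(hours=30)       # soft limit was crossed 30 hours ago
print(write_allowed(22, 20, 25, crossed, timedelta(days=1), now))   # False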




Quota Usage

File system quotas can track and report usage of a file system.

• When the Soft Quota limit is reached
  − Storage administrator receives notification of the event
  − Grace period is invoked, if the quota policy is set with a defined limit of days,
    hours, or minutes

In this example, a user quota was configured on a file system for a particular user.
The Soft Limit is 20 GB, the Grace Period is one day, and the Hard Limit is 25 GB.
The user copies 16 GB of data to the file system. Since the capacity is less than
the user's quota, the user can still add more files to the file system.

[Diagram: Quota usage on a file system with a 20 GB soft limit and a 25 GB hard
limit.]




Quota Limit

Soft Limit

When the Soft Quota limit is reached,
• Storage administrator receives notification of the event
• Grace period is invoked

[Diagram: The block soft quota (20 GB) is crossed and the one-day grace period
begins; the hard limit is 25 GB.]

In this example, the user crosses the 20 GB soft limit. The storage administrator
receives an alert in Unisphere stating that the soft quota for this user has been
crossed.

The Grace Period of one day begins to count down. Users are still able to add data
to the file system. Before the expiration of the Grace Period, file system usage
must be less than the soft limit.

Grace Period

When the Grace Period expires,
• The system issues a warning
• Storage administrator receives notification of the event




[Diagram: The block soft quota (20 GB) remains crossed and the one-day grace
period has expired; the hard limit is 25 GB.]

When the grace period expires and usage is still over the soft limit, the system
issues a warning, and the storage administrator receives a notification of the event.

The transfer of more data to the file system is blocked until file system usage is
less than the allowed soft limit.

Hard Limit

When the Hard Limit is reached,
• An error message is sent to the client, and user requests are denied
• Storage administrator receives notification of the event

[Diagram: The block hard quota (25 GB) is reached or exceeded.]

If the grace period has not expired and data continues to be written to the file
system, eventually the hard limit is reached.




When the hard limit is reached, users can no longer add data to the file system and
the storage administrator receives a notification.




Storage Efficiency Key Points

1. Data Reduction
   a. Dell Unity XT Data Reduction provides space savings by using data
      deduplication and compression.
   b. Data reduction is supported on All-Flash pools created on Dell Unity XT
      Hybrid Flash systems or Dell Unity XT All-Flash systems.
   c. Data reduction is supported on thin storage resources: LUNs, LUNs
      within a Consistency Group, file systems, and VMware VMFS and NFS
      datastores.
2. FAST VP
   a. Dell Unity Fully Automated Storage Tiering for Virtual Pools (FAST VP)
      monitors the data access patterns within heterogeneous pools on the
      system.
   b. In storage pools with Flash, SAS, and NL-SAS drives, FAST VP uses the
      Flash drives for active data and low-cost spinning drives for less active data.
   c. There are four tiering policies: Start High then Auto-Tier (default), Highest
      Available Tier, Auto-Tier, and Lowest Available Tier.
   d. RAID levels for each tier can be selected when creating a pool. The
      supported RAID levels are RAID 1/0, RAID 5, and RAID 6.
   e. Data relocation can be scheduled at the system level, or manually started at
      the storage pool level.
3. Thin Clones
   a. A Thin Clone is a read/write copy of a thin block storage resource that
      shares blocks with the parent resource.
      − The resource is built from a snapshot of thin block storage resources:
        LUNs, LUN members of a Consistency Group, or VMFS datastores.
      − Thin Clones can be created from attached read-only or unattached
        snapshots with no auto-deletion policy and no expiration policy.
      − Thin Clones are supported on all Dell EMC Unity XT models including
        Dell EMC UnityVSA.
4. File System Quotas
   a. Dell Unity XT systems support file system quotas, which enable storage
      administrators to track and limit usage of a file system.
   b. Quota limits can be designated for users, a directory tree, or users within a
      quota tree.
   c. The quota policy can be configured to determine usage per File Size (the
      default) or Blocks.
   d. The policies use hard and soft limits set on the amount of disk space
      allowed for consumption.

For more information, see the Dell EMC Unity: Data Reduction,
Dell EMC Unity: FAST Technology Overview, Dell EMC Unity:
Snapshots and Thin Clones A Detailed Review, and Dell EMC
Unity: NAS Capabilities on the Dell Technologies Info Hub.



Local LUN Move




Local LUN Move Overview

The local LUN Move is a native feature of Unity XT that moves LUNs within a
single physical or virtual Unity XT system. It moves LUNs between different pools
within the system, or within the same pool. The move operation is transparent to
the host and has minimal performance impact on data access.

There are several use cases for the feature. It provides load balancing between
pools. For example, if one pool is reaching capacity, the feature can be used to
move LUNs to a pool that has more capacity. It can also be used to change the
storage characteristics for a LUN. For example, a LUN could be moved between
pools composed of different disk types and RAID schemes. The feature can also
be used to convert a thin LUN to a thick LUN, or a thick LUN to a thin LUN.

Another use of the feature is for data reduction of an existing thin LUN. For
example, an existing thin LUN without Data Reduction enabled can be moved to an
All-Flash pool where data reduction can be enabled. The data reduction process is
invoked during the move operation resulting in data reduction savings on the
existing LUN data.

[Diagram: The LUN Move feature moves LUNs between pools (Pool 1 to Pool 2) or
within a pool (Pool 3). Use cases: load balancing, changing storage
characteristics, and data reduction.]




What Gets Moved

When a LUN is moved, the moved LUN retains its LUN attributes and some extra
LUN feature configurations. For example, if a LUN is moved that is configured with
snapshots, its existing Snapshot schedule is moved. But any existing snapshots
are not moved. The system deletes any existing snapshots of the LUN after the
move completes.

Also, if replication is configured on the LUN, the system prevents the LUN move
operation. The LUN replication must be deleted before the LUN move operation is
permitted. After the LUN move operation completes, the LUN replication can be
reconfigured. The graphic details the LUN attributes that are and are not moved.

[Diagram: Attributes retained through a LUN move - size, name, LUN type, LUN
metrics, HLU, SP ownership, unique ID, snapshot schedule, and Host I/O limit.
Not moved - CG container, existing snapshots, and replication configuration.]




Local LUN Move Process

Host Access Before

Before using the local LUN Move feature, a host has access to the LUN created
from a specific pool in the normal fashion. The following series of slides illustrates
the process of the local LUN Move operation.

[Diagram: Step 1 - Host access before the move; the LUN resides in Pool 1.]

Move Uses TDX

The local LUN Move feature uses Transparent Data Transfer (TDX) technology. It
is a transparent data copy engine that is multithreaded and supports online data
transfers. The data transfer is designed so its impact to host access performance is
minimal. TDX makes the LUN move operation transparent to a host.
[Diagram: Step 2 - The move uses Transparent Data Transfer (TDX) technology
between Pool 1 and Pool 2.]

Start LUN Move

When a move operation is initiated on a LUN, the move operation uses TDX and
the move begins.




[Diagram: Step 3 - The LUN Move operation starts; TDX begins moving the LUN
from Pool 1 to Pool 2.]

Move Begins

As TDX transfers the data to move the LUN, the move operation is transparent to
the host. Even though TDX is transferring data to move the LUN, the host still has
access to the whole LUN as a single entity.
[Diagram: Step 4 - The move proceeds transparently; the host continues to access
the whole LUN while TDX transfers data from Pool 1 to Pool 2.]

Move Completes

Eventually TDX transfers all of the data, and the LUN move completes.

[Diagram: Step 5 - The move completes; the LUN now resides in Pool 2.]




Host Access After

The original LUN no longer exists, and the host has access to the moved LUN in its
normal fashion.

[Diagram: Step 6 - Host access after the move; the host accesses the moved LUN
in Pool 2.]




Local LUN Move Requirements

There are requirements for the local LUN Move feature.

• Storage resources
− Standalone LUNs
− LUNs within a Consistency Group
− VMware VMFS datastore LUNs
− Cannot be thin clones or have derived thin clones
• To successfully move a LUN, it cannot be:
− In a replication session
− Expanding/shrinking
− Restoring from snapshot
− Being imported from VNX
− Offline/requiring recovery
• System cannot be upgraded during a LUN move session




Local LUN Move Capabilities

The local LUN Move feature capabilities are the same for all physical Dell Unity XT
models and the Dell UnityVSA systems.

• All Dell Unity XT systems support 100 move sessions.
  − 16 active sessions at a time
• Move sessions have the Priority settings defined when the session is created.
  The possible priority settings are:
  − Idle, Low, Below Normal, Normal, Above Normal, High
• Multithreaded TDX resources are used in move operations.
  − TDX multiplexes the 16 active sessions into 10 concurrent sessions based
    on session priority.




Local LUN Move Session Configuration

The local LUN Move feature has a "Set and Forget" configuration. There is nothing
to preconfigure to perform a LUN Move operation. To initiate the move, select
LUNs (1), select the storage resource to move (2), and then select More Actions ->
Move (3).

Next, select a Session Priority from the drop-down (4). The priority defines how the
move session is treated compared to production data access, thus affecting the
time to complete the move session. With an Idle priority selection, the move runs
during production I/O idle time. A High selection runs the move session as fast as
possible.

The next session configuration selection is the Pool. It defines where the storage
resource is moved to. Its drop-down list is populated with pools available on the
system. Another configuration for the move session is the Thin check box option. It
is checked by default and can be cleared to make the moved resource thick
provisioned. The Data Reduction option is exposed if the selected pool is an
All-Flash pool. The data moved is processed through the Data Reduction
algorithms.

After the move is started, the operation runs automatically. The operation then
continues to completion and cannot be paused or resumed. When a session is in
progress it can be canceled.

The move is transparent to the host. There are no actions or tasks needed on the
host for the move. After the move is completed, the session is automatically
cutover and the host data access to the LUN continues normally.
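A move session can also be started programmatically. The sketch below assumes the REST API exposes a moveSession resource type; the attribute names and the numeric priority value are assumptions to confirm in the Unisphere Management REST API reference for your OE version.

# Minimal sketch: start a local LUN move through the REST API.
# The moveSession payload fields and priority value are assumptions;
# confirm them in the REST API reference before use.
import requests

UNITY = "https://unity.example.local"          # hypothetical management address
session = requests.Session()
session.auth = ("admin", "MyPassword1!")       # hypothetical credentials
session.verify = False                         # lab only
session.headers["X-EMC-REST-CLIENT"] = "true"

resp = session.get(f"{UNITY}/api/types/loginSessionInfo/instances")
session.headers["EMC-CSRF-TOKEN"] = resp.headers["EMC-CSRF-TOKEN"]

payload = {
    "sourceStorageResource": {"id": "sv_12"},  # LUN to move (hypothetical id)
    "destinationPool": {"id": "pool_2"},       # target pool (hypothetical id)
    "priority": 2,                             # assumed to map to a Below Normal priority
}
resp = session.post(f"{UNITY}/api/types/moveSession/instances", json=payload)
resp.raise_for_status()
print(resp.json()["content"]["id"])            # id of the new move session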




[Diagram: "Set and Forget" move sessions between pools. The operation is
automatic, with no pause or resume, and cuts over transparently when complete.]




Monitoring Move Session

When a move session is started, its progress can be monitored from a few
locations.

From the LUNs page, with the LUN selected that is being moved, the right side
pane displays move session information. The move Status and Progress are
displayed. The Move Session State, its Transfer Rate, and Priority are also shown.

From the General tab of the LUN Properties page, the same information is
displayed. The page also provides the ability to edit the session Priority setting.




LUN Cancel Move Operation

A LUN Cancel Move operation cancels an ongoing move session; it is only
available while a move session is in progress. Select More Actions -> Cancel
Move (1). The operation then returns any moved data to the original location (2),
where you can access it normally from its pool.



Local NAS Server Mobility




Local NAS Server Mobility Overview

The local NAS Server mobility feature moves a NAS Server between the Dell Unity
XT Storage Processors. The move effectively changes the ownership of the NAS
Server to the peer Storage Processor. The entire configuration, file systems,
services, and features of the NAS Server remain the same; only the Storage
Processor ownership changes.

The move is transparent to NFS clients and to SMB3 clients configured with
Continuous Availability. Clients running either SMB2 or SMB3 without CA are
disrupted due to the stateful nature of their protocols. However, most current client
operating systems automatically retry the connection and reconnect to the NAS
Server after the move is complete.

The NAS Server mobility feature can be used for balancing the load across the Dell
Unity XT system Storage Processors. It can also be used to provide data access
during maintenance events. For example, during network connectivity maintenance
for a Storage Processor, the NAS Server could be moved to the peer SP allowing
continued client access to data.




Local NAS Server Mobility Capabilities

The local NAS Server mobility feature supports moving a single NAS Server at a
time. Multiple simultaneous moves of NAS Servers are not supported.

Only move a NAS Server that is in a healthy OK state. The system prevents
moving any NAS Server when its state would cause a problem being moved, such
as faulted or not accessible states.

A NAS Server that is a destination of a File Import session cannot be moved. The
NAS Server can only be moved after the File Import session completes.

When a NAS Server that is in a replication session is moved, the replication
session is transferred with the NAS Server. However, to move the NAS Server, the
active replication session must be manually paused. During the NAS Server move,
replication commands are rejected. After the NAS Server move has completed, the
replication session must be manually restarted.

If a NAS Server is moved that is actively running an NDMP job, the move stops the
job. After the NAS Server move completes, the NDMP job must be manually
restarted.




Moving a NAS Server to Peer SP

To perform a NAS Server move operation, there is no configuration needed. First,
from the NAS Server Properties page, verify that the NAS Server is in a healthy
state. Then verify that it is not a destination NAS Server for a File Import session.
Finally, verify that it is not involved in an active replication session.

Then, from the properties page, select the peer SP for ownership. A confirmation
window is displayed stating that the move disrupts running NDMP jobs. The
message also states that the operation disrupts data access to clients other than
NFS and SMB3 CA configured clients. After the confirmation is accepted and the
NAS Server configuration change is applied, the move operation runs in a "set and
forget" fashion. It has no pause, resume, or cancel functions.
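The ownership change can also be scripted. The sketch below assumes the nasServer modify action accepts a homeSP argument; verify the attribute name and instance ids in the Unisphere Management REST API reference before relying on it.

# Minimal sketch: move a NAS Server to the peer SP through the REST API.
# The homeSP field and the instance ids are assumptions; verify them in
# the REST API reference for your OE version.
import requests

UNITY = "https://unity.example.local"          # hypothetical management address
session = requests.Session()
session.auth = ("admin", "MyPassword1!")       # hypothetical credentials
session.verify = False                         # lab only
session.headers["X-EMC-REST-CLIENT"] = "true"

resp = session.get(f"{UNITY}/api/types/loginSessionInfo/instances")
session.headers["EMC-CSRF-TOKEN"] = resp.headers["EMC-CSRF-TOKEN"]

# Change the NAS Server's home Storage Processor from SPA to SPB.
payload = {"homeSP": {"id": "spb"}}
resp = session.post(f"{UNITY}/api/instances/nasServer/nas_3/action/modify", json=payload)
resp.raise_for_status()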




Monitoring a NAS Server Move

The status of the NAS Server move operation is displayed in several locations
within Unisphere.

• From the NAS Server Properties page, a status of Transitioning to other
  Storage Processor is displayed when the move is in progress.
• A similar status is displayed from the NAS Server page when the specific NAS
  Server is selected.
• If the Job Properties page is displayed, it shows a Modify NAS Server Settings
  job running when the move is in progress.

As previously mentioned, the move operation has no pause, resume, or cancel
function. The move simply runs to completion. When the move completes, the NAS
Server displays a status of: The component is operating normally. No action is
required.

[Screenshots: NAS Server Properties page, Job page, and NAS Server page.]




Demonstration: Local NAS Server Move

This demo covers the local NAS Server mobility feature. A NAS Server is moved to
the peer Storage Processor.

Movie:

The web version of this content contains a movie.




Data Mobility Key Points

1. Local LUN Move
   a. The local LUN Move is a native feature of Dell Unity to move LUNs within a
      single physical or virtual Dell Unity system.
      − The feature moves LUNs between different pools within the system, or
        within the same pool.
      − The move operation is transparent to the host and has minimal
        performance impact on data access.
   b. The moved LUN retains its LUN attributes and some extra LUN feature
      configurations, such as snapshot schedules.
   c. The local LUN Move can be used to move LUNs to a pool that has more
      capacity, or to change the storage characteristics for a LUN.
2. Local NAS Server Mobility
   a. The local NAS Server mobility feature moves a NAS Server between the
      Dell Unity Storage Processors.
      − The operation effectively changes the ownership of the NAS Server to
        the peer Storage Processor.
      − The move is transparent to NFS clients and to SMB3 clients configured
        with Continuous Availability.
   b. The NAS Server mobility feature can be used for balancing the load across
      the Dell Unity system Storage Processors.
   c. The local NAS Server mobility feature supports moving only a single NAS
      Server at a time. The NAS Server must be in a healthy OK state.

For more information, see the Dell EMC Unity Family Configuring
and Managing LUNs and Dell EMC Unity: NAS Capabilities on
the Dell EMC Unity Family Technical Documentation portal at the
Dell Technologies site.





Snapshots Overview




Snapshots Overview

[Diagram: Snapshots are derived from their storage resource - a LUN or file
system provisioned from a pool of disks. Snap images can be read-only or
read/write and are used for backups, restores, data mining, and testing.]

The Snapshots feature is enabled with the Local Copies license, which enables
space-efficient point-in-time snapshots of storage resources for block, file, and
VMware "data" vVols. The snap images can be read-only or read/write and used in
various ways. They provide an effective form of local data protection. If data is
mistakenly deleted or corrupted, the production data can be restored from a
snapshot to a known point-in-time data state. Hosts access snapshot images for
data backup operations, data mining operations, application testing, or decision
analysis tasks. The upcoming slides detail the feature architecture, capabilities,
benefits, and specifics of its operations and uses.

Caution: Snapshots are not full copies of the original data. Dell
Technologies recommends that you do not rely on snapshots for
mirrors, disaster recovery, or high-availability tools. Snapshots of
storage resources are partially derived from the real-time data in the
relevant storage resource. If the primary storage becomes
inaccessible, snapshots can also become inaccessible (not readable).




Snapshot Redirect on Write Architecture

[Diagram: Redirect on Write architecture. Production data access and snapshot
image access both read existing data in place. New writes are redirected to 256
MB slices allocated from the parent pool and stored as 8 KB chunks; reads of new
writes are serviced from the new location. Snapshots require pool space for new
slice allocation, and snapped thick file systems transition to thin file system
performance.]

Snapshots of storage resources [block LUNs, file systems, and VMware
datastores] are architected using Redirect on Write technology. This architecture
avoids a performance penalty that Copy on First Write technology incurs when
existing data is changed.

With Redirect on Write technology, when a snapshot is taken, the existing data on
the storage resource remains in place. The snapshot provides a point-in-time view
of the data. Production data access also uses this view to read existing data.

Another benefit of Redirect on Write is that no reserved storage resource is
required to create a snapshot. With Copy on First Write technology, a storage
resource must be reserved to hold original data that changed, to preserve the
point-in-time view.

With Redirect on Write technology, when writes are made to the storage resource,
those writes are redirected. A new location is allocated as needed from the parent
pool in 256 MB slices. New writes are stored in 8 KB chunks on the newly allocated
slice. Reads of the new writes are serviced from this new location as well.

If the snapshot is writable, any writes are handled in a similar manner. Slice space
is allocated from the parent pool, and the writes are redirected in 8 KB chunks to
the new space. Reads of newly written data are also serviced from the new space.




Storage space is needed in the pool to support snapshots as slices are allocated
for redirected writes.

Because of the on-demand slice allocation from the pool, snapped thick file
systems transition to thin file system performance characteristics.
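The mapping behavior described above can be illustrated with a toy block map. The sketch below is purely conceptual (not Dell Unity code): taking a snapshot freezes the current map, and a later write is redirected to newly allocated space while the original block stays in place for the snapshot to read.

# Toy illustration of Redirect on Write (conceptual only, not Dell Unity code).
# A resource is a map of logical block -> physical location; a snapshot is a
# frozen copy of that map. New writes are redirected to new locations.
next_free = 100                            # next free location in newly allocated space

resource = {0: 10, 1: 11, 2: 12}           # production map: logical -> physical
snapshot = dict(resource)                  # snapshot: frozen point-in-time map

def write(block):
    """Redirect a write: allocate new space instead of overwriting in place."""
    global next_free
    resource[block] = next_free            # production now points at the new data
    next_free += 1

write(1)                                   # host overwrites logical block 1
print(resource[1])   # 100 -> production reads the new write from the new location
print(snapshot[1])   # 11  -> the snapshot still reads the original block in place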




Combined (LUN and File System) Snapshot Capabilities

The table defines various combined snapshot capabilities for each of the Dell Unity
XT models. These combined limits have an interaction between each other. For
example, if a model 380 system had 20 LUNs and 20 file systems, each LUN and
file system could not have 256 user snapshots. The number of user snapshots
would exceed the maximum of 8000 for the system.

Dell Unity XT Models                               VSA    380/380F   480/480F   680/680F   880/880F

Max snapshots per LUN                              164    256        256        256        256
Max snapshots per file system                      31     256        256        256        256
Max snapshot source LUNs per system                64     1000       1500       2000       6000
Max source LUNs in a Consistency Group             50     75         75         75         75
Max Consistency Groups per system                  64     1000       1500       2000       6000
Max user snapshots                                 128    8000       14000      20000      30000
Max user-visible file systems + snaps per system   64     1000       1250       2000       2500
Hierarchical snap levels (LUN and file system)     10     10         10         10         10




Apply a Snapshot Schedule

To apply a Snapshot schedule to a storage resource, open the Snapshots tab on
the Properties page of the resource. Then, click the Snapshot Schedule option.
From the Snapshot Schedule drop-down list, select a schedule to apply to the
resource.




Snapshot Settings for a Pool

Snapshots consume space from the parent storage pool that the storage resource
uses. To prevent the pool from running out of space due to snapshots, there are
two options for automatically deleting the oldest snapshots. These options can be
set from the Properties page of the pool by selecting the Snapshot Settings tab.

One option triggers the automatic deletion based on the total pool space
consumed. Another option triggers the automatic deletion based on the total
snapshot space consumed. Either option can be used singularly, or they can be
used in combination. Both options allow the configuration of a space threshold
value to start the deletion and a space threshold value for stopping the automatic
deletion.

When a pool is created, the Total pool consumption option is set by default. The
option cannot be changed during pool creation but can be modified after the pool is
created. If both options are cleared, the setting disables the automatic deletion of
the oldest snapshots based on space used. Automatic snapshot deletion is still
configurable based on snapshot retention values.
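The paired start/stop thresholds behave like a high/low watermark. The sketch below is conceptual only (not Dell Unity code) and shows why a deletion pass, once triggered by the start threshold, continues until consumption falls to the stop threshold:

# Conceptual high/low watermark logic for automatic snapshot deletion
# (illustrative only, not Dell Unity code).
def auto_delete(pool_used_pct, snapshots, start_pct=95.0, stop_pct=85.0):
    """Delete the oldest snapshots while consumption exceeds the stop threshold."""
    deleted = []
    if pool_used_pct < start_pct:
        return deleted                         # start threshold not reached
    # snapshots: oldest first, each as (name, percent of pool it holds)
    for name, pct in snapshots:
        if pool_used_pct <= stop_pct:
            break                              # stop threshold reached
        deleted.append(name)
        pool_used_pct -= pct                   # space reclaimed by the deletion
    return deleted

snaps = [("snap_mon", 4.0), ("snap_tue", 4.0), ("snap_wed", 4.0)]
print(auto_delete(96.0, snaps))                # ['snap_mon', 'snap_tue', 'snap_wed']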




Creating Snapshots

[Diagram: Snapshot-capable storage resources - a Consistency Group of LUNs, a
LUN, a file system, a VMFS datastore, and an NFS datastore.]

Snapshots are created on storage resources for block, file, and VMware. All are
created in a similar manner. For block, the snapshot is created on a LUN or a
group of LUNs within a Consistency Group. For file, the snapshot is configured on
a file system. For VMware, the storage resource is either going to be a LUN for a
VMFS datastore or a file system for an NFS datastore. When each of these storage
resources are created, the system provides a wizard for their creation. Each wizard
provides an option to automatically create snapshots on the storage resource.
Each resource snapshot creation is nearly identical to the other resources. For
storage resources already created, snapshots can be manually created for them
from their Properties page. As with the wizard, the snapshot creation from the
storage resource Properties page is nearly identical to the other resources.

More details on snapshot creation within the block storage LUN creation wizard
and the file storage file system creation wizard are shown in separate topics. Each
topic also details the creation of manual snapshots from the LUN and file system
properties pages.




Snapshot Operations

[Diagram: Snapshot operations by resource type. LUN, Consistency Group, and
VMFS datastore snapshots: Restore, Attach to host, Detach from host, Copy,
Refresh, and Replicate. File system and NFS datastore snapshots: Restore, Copy,
and Refresh.]

The operations that can be performed on a snapshot differ based on the type of
storage resource the snapshot is on. Operations on LUN-based snapshots are
Restore, Attach to host, Detach from host, Copy, Refresh, and Replicate.
Operations on file system-based snapshots are Restore, Refresh, and Copy.




Snapshot Schedules

To apply a Snapshot schedule to a storage resource, go to the Snapshots tab on
the Properties page of the resource. From there, select the Snapshot Schedule
option. From the Snapshot schedule drop-down list, select a schedule to apply to
the resource.

Apply a Snapshot Schedule

Note: Snapshot schedules are not supported on vVols.

Schedules and UTC Time

• Internally, Unity systems use the UTC time zone for time and scheduling.
− The operating system, logs, schedules, and other features all use UTC.
• When you connect to Unisphere, the times that are displayed are adjusted by
Unisphere to the time zone of the browser.
• When changing time on the system, or the timing on a feature, it is stored
internally in UTC format.
• By default, Unity systems do not consider Daylight Savings Time (DST).

− Also known as Daylight Time and Summer Time.




− Daylight Savings Time is not observed by the entire world.

Snapshots and Time Zones

When you create a schedule, the time to take the snapshot is stored
internally in UTC time.

UTC time does not consider Daylight Savings Time.

Because snapshot schedules are stored internally in UTC time, the time that a
snapshot is taken is not adjusted when Daylight Savings Time begins or ends.

Example: On August 2, 2020, a user in Massachusetts (US) creates a snapshot
schedule to create snapshots at 1:00 AM local time.

• Massachusetts is UTC -4 hours at this time of year.
• The snapshot schedule stores the time as 5:00 AM UTC.
• From this point forward, all snapshots occur at 5:00 AM UTC.

Daylight Saving Time ends Sunday, November 1 at 2:00 AM.

• "Wall clock" time is adjusted backward 1 hour at this point.

Snapshots still occur at 5:00 AM UTC.

                August 2, 2020           November 2, 2020
                Wall Clock   UTC         Wall Clock   UTC
                12:00 AM     4:00 AM     11:00 PM     4:00 AM
Take Snapshot   1:00 AM      5:00 AM     12:00 AM     5:00 AM

• The timing of snapshots “shifts” 1 hour (wall clock time).


• The direction of the shift depends on whether DST is beginning or ending.

To continue having snapshots taken at 1:00 AM local time, the operator must edit
the schedule.
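The shift is easy to reproduce with Python's zoneinfo module: a fixed 5:00 AM UTC schedule maps to different wall-clock times before and after the DST change.

from datetime import datetime, timezone
from zoneinfo import ZoneInfo

boston = ZoneInfo("America/New_York")

# The schedule is stored internally as a fixed UTC time: 5:00 AM UTC.
before = datetime(2020, 8, 2, 5, 0, tzinfo=timezone.utc)     # during DST (UTC-4)
after = datetime(2020, 11, 2, 5, 0, tzinfo=timezone.utc)     # after DST ends (UTC-5)

print(before.astimezone(boston).strftime("%I:%M %p"))   # 01:00 AM wall-clock time
print(after.astimezone(boston).strftime("%I:%M %p"))    # 12:00 AM: the snapshot shifted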

Beginning with Dell EMC Unity OE 5.1, users can enable Time Zone support.




• Newly created schedules adjust automatically if the local time zone implements
Daylight Saving Time.
• NOTE: Schedules that were created before upgrading to OE 5.1 must be edited
one time to update the snapshot time.

Schedule Before Enabling Time Zone Support

Before enabling Schedule Time Zone support, the schedule appears as shown.




Enabling Time Zone Support

Beginning with Dell EMC Unity XT OE 5.1, the system supports a time zone option
to correct timing issues for snapshot schedules and asynchronous replication
throttling.

The setting applies to system defined and user created snapshot schedules. The
setting is not a system setting, and does not apply to other features such as logs or
FAST VP.

After upgrading to version 5.1, the schedule time zone is set to UTC Legacy. No
changes are made to schedules when upgrading.

If the user enables the time zone settings, the internal snapshot schedule is NOT
updated to the same absolute time. The user must check to see whether the
snapshot schedule must be updated after enabling time zone support.

To enable the feature, choose System Settings > Management > Schedule Time
Zone.




Schedule After Time Zone Support Enabled

When Schedule Time Zone is enabled, the system updates the schedule with the
value that had been stored internally, in UTC time. The user must check the
schedule, and edit it if necessary.

In the example shown, the schedule must be edited to return the snapshot time to
its previous value, 1:00 AM.



LUNs and Consistency Groups Snapshots




LUN Consistency Group Snapshots

[Diagram: A LUN CG is a group of LUNs forming an instance of addressable LUN
storage. A CG snapshot suspends writes to the LUN group, capturing I/O
consistency across the LUNs.]

A LUN Consistency Group (CG) is a grouping of multiple LUNs to form a single
instance of LUN storage. They are primarily designed for host applications that
access multiple LUNs, such as a database application. Snapshots provide a
mechanism for capturing a snapshot of the multiple LUNs within a consistency
group. When a Consistency Group snapshot is taken, the system completes any
outstanding I/O to the group of LUNs. Then writes to the LUNs are suspended until
the snap operation completes. The snapshot therefore captures a write-order
consistent image of the group of LUNs.




Multiple LUN Snapshots

[Diagram: Multiple snapshots of a LUN capture different point-in-time data states.
Copies of snapshots capture identical data states. Multiple hosts attach to
snapshots for RO or RW access, and copied RW snapshots can nest hierarchically
to a maximum of 10 levels.]

With Dell Unity XT Snapshots, it is possible to create multiple snapshots of a LUN
to capture multiple point-in-time data states. In this example, the three o'clock and
the four o'clock snapshots are two different "child" snapshots of a common parent.
They capture two different data states of a common storage resource.

It is also possible to copy a snapshot. In this example, the four o’clock snapshot is
copied. Other than having a unique name, the copy is indistinguishable from the
source snapshot and both capture identical data states.

Multiple hosts can be attached to any specific LUN snapshot or multiple snapshots
within the tree. When a host is attached to a snapshot for access to its data, the
attach can be defined for read-only access or read/write access. In the example, a
host attaches to the three o’clock snapshot for read-only access and the snapshot
remains unmodified from its original snapped data state. A different host is
attached to the four o’clock snapshot copy for read/write access. By default, the
system creates a copy of the snapshot to preserve its original data state. The user
can optionally not create the snapshot copy. When the snap is read/write attached,
its data state is marked as modified from its source.

It is also possible to nest copied read/write attached snapshots that form a
hierarchy of snapshots to a maximum of 10 levels deep.




Creating LUN Snapshots - Create LUNs Wizard

LUN snapshots can easily be created in several ways. Within the wizard to Create
LUNs, there is an option to automatically create snapshots for the LUN based on a
schedule. The wizard contains a drop-down list selection that has three different
system defined schedules to select from to create the LUN snapshots. There is
also a snapshot retention value that is associated with each of the three schedules.
A customized schedule can also be created for use. The scheduler has the
granularity to configure a snapshot frequency by the hour, day, or week. A
snapshot retention policy can also be defined.

Note: Configuration fields that are annotated with a red asterisk are required.




Creating LUN Snapshots - LUN Properties

For existing LUNs, snapshots are easily created from the LUN Properties page by
selecting the Snapshots tab. To create a snapshot of the LUN, select the + icon.
The snapshot must be configured with a name; by default the system provides a
name having a year, month, day, hour, minute, second format. Customized names
can also be configured. A Description field for the snapshot can be annotated as
an option. One of three Retention Policies must be configured. The default
retention configuration is the Pool Automatic Deletion Policy. It automatically
deletes the snapshot if pool space reaches a specified capacity threshold that is
defined on the pool. A customized retention time can alternately be selected and
configured for snapshot deletion on a specified calendar day and time. The other
alternative is to select the No Automatic Deletion option if the snapshot must be
kept for an undetermined amount of time.
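Snapshot creation can also be driven through the REST API. A minimal sketch follows, assuming a LUN whose storage resource id is sv_9; retention arguments are omitted here and should be taken from the snap create parameters in the REST API reference for your OE version.

# Minimal sketch: create a snapshot of a LUN through the REST API.
# The storage resource id is hypothetical; check the snap create
# arguments in the REST API reference for retention options.
import requests

UNITY = "https://unity.example.local"          # hypothetical management address
session = requests.Session()
session.auth = ("admin", "MyPassword1!")       # hypothetical credentials
session.verify = False                         # lab only
session.headers["X-EMC-REST-CLIENT"] = "true"

resp = session.get(f"{UNITY}/api/types/loginSessionInfo/instances")
session.headers["EMC-CSRF-TOKEN"] = resp.headers["EMC-CSRF-TOKEN"]

payload = {
    "storageResource": {"id": "sv_9"},         # LUN to snap (hypothetical id)
    "name": "Monday_AM_snap",
}
resp = session.post(f"{UNITY}/api/types/snap/instances", json=payload)
resp.raise_for_status()
print(resp.json()["content"]["id"])            # id of the new snapshot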




LUN Snapshot Restore - Process

Initial State

LUN Snapshot Restore Process – 1 of 4

The Snapshot Restore operation rolls back the storage resource to the point-in-
time data state that the snapshot captures. In this restore example, a LUN is at a
five o’clock data state. It is restored from a snapshot with a four o’clock data state.

[Diagram: Initial state - hosts are attached to the LUN and to its snapshots; the
LUN is at the five o'clock data state.]

LUN Snapshot Restore Process – 2 of 4

Before performing a restore operation, detach any host attached to the LUN
snapshot being restored. Also ensure that all hosts have completed all read and
write operations to the LUN you want to restore. Finally, disconnect any host
accessing the LUN. This action may require disabling the host connection on the
host side.




Before the restore:
1. Detach hosts from LUN snapshots.
2. Quiesce host I/O to the LUN.
3. Disconnect hosts from the LUN.

Start Restore

LUN Snapshot Restore Process – 3 of 4

Now the restore operation can be performed. From the four o’clock snapshot,
select the Restore operation. The system automatically creates a snapshot of the
current five o’clock data state of the LUN. This snapshot captures the current data
state of the LUN before the restoration operation begins.

Perform the restore:
1. Select the snapshot Restore operation.
2. The system creates a snap of the current LUN data state.




Reconnect Hosts

LUN Snapshot Restore Process – 4 of 4

The LUN is restored to the four o’clock data state of the snapshot. The hosts can
now be reconnected to the resources they were connected to before the restore
operation and resume normal operations.

[Diagram: The LUN is restored to the four o'clock snapshot data state, and hosts
are reconnected.]




LUN Snapshot Restore - Operation

To restore a LUN from a snapshot, access the Properties page for the LUN.

On the Properties page:

1. Go to the Snapshots tab.
2. Check the box for the snapshot you want to restore from.
3. On the More Actions drop-down menu, click Restore.
4. The Restore window opens. Click OK.
5. The restore completes. The restore point snapshot is listed on the Snapshots
   tab (a scripted equivalent is sketched after this list).
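A minimal REST API sketch of the restore follows. The snap id is hypothetical, and the copyName argument (a name for the automatically created restore-point snapshot) is an assumption to verify against the restore action arguments in the REST API reference.

# Minimal sketch: restore a LUN from a snapshot through the REST API.
# The snap id and the copyName argument are assumptions; verify the
# restore action arguments in the REST API reference for your OE version.
import requests

UNITY = "https://unity.example.local"          # hypothetical management address
session = requests.Session()
session.auth = ("admin", "MyPassword1!")       # hypothetical credentials
session.verify = False                         # lab only
session.headers["X-EMC-REST-CLIENT"] = "true"

resp = session.get(f"{UNITY}/api/types/loginSessionInfo/instances")
session.headers["EMC-CSRF-TOKEN"] = resp.headers["EMC-CSRF-TOKEN"]

# Restore the LUN; the system first snaps the current data state as a restore point.
payload = {"copyName": "before_restore"}
resp = session.post(f"{UNITY}/api/instances/snap/38654705680/action/restore", json=payload)
resp.raise_for_status()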




Attach LUN Snapshot to Host - Process

Establish Host Connectivity to Unity

The Snapshot Attach to host operation attaches a connected host to a LUN
snapshot. In this attach example, a secondary host is going to attach to the three
o'clock snapshot of the LUN. Prior to performing an Attach to host operation, the
host being attached must have connectivity to the storage array and be registered
on the storage system. Then the attach operation can be performed.

Before the attach:
1. Establish host connectivity to Dell EMC Unity.

Select Snapshot, Select Access Type, Select Hosts

The first step is to select a snapshot to attach to. The next step is to select an
Access Type, either read-only or read/write. Then the host or hosts are selected to
be attached.




Perform the attach:
1. Select the snapshot.
2. Select the Access Type: read-only or read/write.
3. Select the hosts.

Data State of Snapshot Preserved, Host Attached

Next, the system optionally creates a copy of the snapshot if a read/write Access
Type was selected. The snapshot copy preserves the data state of the snapshot
before the attach. Finally, the selected host is attached to the snapshot with the
Access Type selected.

[Diagram: The system optionally creates a copy of the snapshot to preserve its
data state, and the snapshot is attached to the host.]




Attach LUN Snapshot to Host - Operation

LUN Snapshot Attach to Host Operation

To attach a host to a snapshot of a LUN, access the Properties page for the LUN.

On the Properties page:


1. Go to the Snapshots tab.
2. Check the box for the snapshot that you want to attach to a host. In this example, the Monday_AM_snap is selected.
3. On the More Actions dropdown menu, click Attach to host.
4. The Attach to Host window opens. Click the plus sign + icon to add hosts and configure access to the snap.
5. Choose the Access Type, which can be read-only or read/write. In this example, the access type is Read/Write.
6. Check the box next to the name of the host or hosts to be attached to the snapshot. In this example, WIN10B is selected for access.


LUN Snapshot Attach to Host Operation Continued

The attach configuration is displayed.


7. In this example, the WIN10B host is attached to the Monday_AM_snap with Read/Write access.
8. By default, the system creates a copy of the snapshot being attached with read/write access to preserve its original point-in-time data state.
9. The snapshot is attached to the host and its attach status is displayed.


Detach LUN Snapshot from Host - Process

LUN Snapshot - Detach from Host Process – 1

The Snapshot Detach operation detaches a connected host from a LUN snapshot.
In this detach example, a secondary host is going to detach from the three o’clock
snapshot of the LUN.


LUN Snapshot - Detach from Host Process – 2

Before performing a detach operation, allow any outstanding read/write operations of the snapshot-attached host to complete.

Before the detach:
1. Quiesce I/O of snapshot-attached host


LUN Snapshot - Detach from Host Process – 3

Now the detach operation can be performed. From the three o’clock snapshot,
select the Detach from host operation.

Before the detach:
1. Quiesce I/O of snapshot-attached host

Perform the Detach:
1. Select snapshot to detach from

LUN Snapshot - Detach from Host Process – 4

The secondary host is detached from the three o’clock snapshot of the LUN.

Before the detach:
1. Quiesce I/O of snapshot-attached host

Perform the Detach:
1. Select snapshot to detach from
2. Snapshot is detached


Detach LUN Snapshot from Host - Operation

To detach a host from a snapshot, first access the Properties page of the storage
resource.

From the Properties page:


1. Go to the Snapshots tab.
2. Check the box next to the snapshot to detach from.
3. Note that Attach to host and Detach from host are mutually exclusive operations. The Detach from host operation is only available for snapshots that are attached.
4. On the More Actions drop-down list, select Detach from host.
5. On the Detach Confirmation window, click Yes.


LUN Snapshot Copy - Process

Copy of Snapshot Created

The Snapshot Copy operation makes a copy of an existing snapshot that is either
attached or detached from a host. In this example, a copy of an existing four
o’clock snapshot is being made.

Before the copy:
1. Can copy attached or detached snapshot

Select Snapshot to Copy

Select the snapshot to copy.

Perform the Copy:
1. Select snapshot to copy


Snapshot Copied

A copy of the selected snapshot is made. The copy inherits the parent snapshot
data state of four o’clock and its retention policy.

Perform the Copy:
1. Select snapshot to copy
2. Snapshot copied
3. Copy inherits parent data state and retention policy


LUN Snapshot Copy - Operation

To copy a snapshot of a LUN, first access the Properties page of the LUN.

From the Properties page:


1. Go to the Snapshots tab.
2. Check the box next to the snapshot to copy.
3. On the More Actions drop-down list, select Copy.
4. On the Copy Snapshot window, the system provides a unique name for the snapshot copy that is based on the time of creation, or a customized name can be assigned. Click OK to create the copy.
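The copy operation can also be issued with UEMCLI. This sketch uses placeholder credentials and an example snapshot ID; the -copyName switch is an assumption to verify in the Unisphere CLI User Guide for your release.

# Copy an existing snapshot; the copy inherits the parent snapshot
# data state and retention policy (ID and name are examples)
uemcli -d 192.168.1.30 -u admin -p MyPassword! /prot/snap -id 38654705680 copy -copyName Monday_AM_snap_copy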


Accessing a LUN Snapshot

Establish Host Connectivity to Unity and Register Host

The process of accessing a LUN snapshot requires performing tasks on the storage system and on the host that accesses the snapshot. The host must have connectivity to the storage, either using Fibre Channel or iSCSI, and be registered. In this example, a secondary host accesses the three o’clock LUN snapshot.


Select LUN Snapshot

Next, from the Snapshots tab, a snapshot is selected and the snapshot operation
Attach to host is performed.


Dell EMC Unity tasks:
1. Perform snapshot Attach to host operation

Discover Disk Device and Access Snapshot

Now tasks from the host must be completed. The host must discover the disk
device that the snapshot presents to it. After the discovery, the host can access the
snapshot as a disk device.

Dell EMC Unity tasks:
1. Perform snapshot Attach to host operation

Host tasks:
1. Discover snapshot disk device
2. Access the disk device
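As an illustration of the host-side tasks on a Linux iSCSI host, the commands below rescan for and mount the disk device that the attached snapshot presents. The device and mount point names are placeholders and differ per environment.

# Rescan iSCSI sessions so the host discovers the snapshot disk device
iscsiadm -m session --rescan

# Identify the newly presented device
lsblk

# Mount the device to access the snapshot point-in-time data
# (/dev/sdc1 and /mnt/snap are example names)
mkdir -p /mnt/snap
mount /dev/sdc1 /mnt/snap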


Activity: LUN Snapshots

Virtual lab for facilitated sessions:


• Create a snapshot of a LUN.
• Create a snapshot schedule.
• Attach a host to access the LUN snapshot.
• Perform a snapshot restore operation.

File System Snapshots



Multiple File System Snapshots

• Multiple file system snapshots capture different point-in-time data states
• Snapshots of a file system can be created either read-only or read/write
• Snapshot copies are read/write; read/write snapshots are shareable
• Hierarchical: RW snapshot copies can be shared, 10 levels max

As with LUN snapshots, it is possible to create multiple snapshots of a file system to capture multiple point-in-time data states. The three o’clock and the four o’clock
snapshots in this example are two different “child” snapshots of the same file
system parent. They capture two different point-in-time data states of the file
system. Snapshots of a file system can be created either as read-only or read/write
and are accessed in different manners which are covered in other topics. Copies of
snapshots are always created as read/write snapshots. The read/write snapshots
can be shared by creating an NFS or SMB share to them. When shared, they are
marked as modified to indicate that their data state is different from the parent
resource. It is also possible to nest copied and shared snapshots that form a
hierarchy of snapshots to a maximum of 10 levels deep.


Creating File System Snapshots - Create File System Wizard

File system snapshots can easily be created in several ways. Within the wizard to
Create a File System, there is an option to automatically create snapshots for the
file system based on a schedule. File system snapshots that are created with a
schedule are read-only. The wizard contains a drop-down list selection that has
three different system defined schedules to select from to create the file system
snapshots. Each schedule includes a snapshot retention value. A customized
schedule can also be created for use. The scheduler includes the granularity to
configure a snapshot frequency by the hour, day, or week. A snapshot retention
policy can also be defined. Configuration fields that are annotated with a red
asterisk are required.


Creating File System Snapshots - File System Properties

Snapshots of existing file systems are easily created from the file system
Properties page by selecting the Snapshots tab. A manually created file system
snapshot can be read-only or read/write. To create a snapshot of the file system,
select the + icon. The snapshot must be configured with a Name. By default the
system provides a name that is based on the creation time in a year, month, day,
hour, minute, second format. Customized names can also be configured. A
Description field for the snapshot can optionally be configured. One of three
Retention Policies must be configured. The default retention configuration is the
Pool Automatic Deletion Policy. That policy automatically deletes the snapshot if
pool space reaches a specified capacity threshold defined on the pool. A
customized Retention Time can alternately be selected and configured for
snapshot deletion on a specified calendar day and time within a year of creation.
The other alternative is to select the No Automatic Deletion option if the snapshot
must be kept for an undetermined amount of time. The Access Type section
requires configuration by selecting one of the two options for the snapshot: read-only or read/write.
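For reference, a comparable snapshot can be created with UEMCLI. This is a minimal sketch with placeholder credentials and resource ID; the -keepFor retention switch is an assumption to verify against the Unisphere CLI User Guide for your release.

# Create a snapshot of file system resource res_2 with a custom name
# and a 7-day retention instead of the pool automatic deletion policy
uemcli -d 192.168.1.30 -u admin -p MyPassword! /prot/snap create -source res_2 -name fs01_manual_snap -keepFor 7d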


File System Snapshot Restore - Process

Initial State

The Snapshot Restore operation for a file system is similar to the restore operation
of a LUN. It rolls back the file system to a point-in-time data state that a read-only
or read/write snapshot captures. This example restores a file system from a
snapshot. The file system is at a five o’clock data state and is restored from a read-only snapshot with a four o’clock data state.

Disconnect Clients and Quiesce I/O

Before performing a restore operation, disconnect any clients that are connected to snapshots of the file system being restored. Also quiesce I/O to the file system being restored. Clients can remain connected to the file system but should close any opened files.


Before the restore:
1. Disconnect clients from file system snapshots
2. Quiesce I/O to file system

Restore from Snap

Now the Restore operation can be performed. From the four o’clock snapshot,
select the Restore operation.

Before the restore:
1. Disconnect clients from file system snapshots
2. Quiesce I/O to file system

Perform Restore:
1. Select snapshot

Current Data State Snapped

The system automatically creates a snapshot of the current five o’clock data state
of the file system. It captures the current data state of the file system before the
restoration operation begins.


Before the restore:
1. Disconnect clients from file system snapshots
2. Quiesce I/O to file system

Perform Restore:
1. Select snapshot
2. System creates snap of current file system data state

Connections and I/O Resumed

The file system is restored to the four o’clock data state of the snapshot. The
connections and I/O to the resources can now be resumed for normal operations.

Before the restore:
1. Disconnect clients from file system snapshots
2. Quiesce I/O to file system

Perform Restore:
1. Select snapshot
2. System creates snap of current file system data state
3. File system is restored to snapshot data state


File System Snapshot Restore - Operation

To restore a file system from a snapshot, access the Properties page for the file system.

1. Go to the Snapshots tab.
2. Check the box for the snapshot you want to restore from.
3. On the More Actions dropdown menu, click Restore.
4. The Restore window opens. The system creates a restore point snapshot of the current data state of the file system before the restoration operation. Click OK.
5. Once complete, the new restore point snapshot is listed.
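The equivalent CLI restore is sketched below with placeholder values; the optional -backupName switch, which names the automatically created restore point snapshot, is an assumption to confirm for your release.

# Restore the file system from a read-only snapshot and give the
# automatic restore point snapshot a recognizable name
uemcli -d 192.168.1.30 -u admin -p MyPassword! /prot/snap -id 171798691856 restore -backupName fs01_before_restore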


File System Snapshot Copy - Process

Initial State

The Snapshot Copy operation makes a copy of an existing file system snapshot that is either read-only or read/write, shared or unshared. In this example, a copy of an existing four o’clock read-only snapshot is being made.

Before the copy:
1. Can copy read-only or read/write, shared or unshared snapshots


Select Snapshot to Copy

Select the snapshot to copy.

Before the copy:
1. Can copy read-only or read/write, shared or unshared snapshots

Perform the Copy:
1. Select snapshot to copy

Copy Created

The snapshot copy is created and is read/write. It also inherits the parent snapshot
data state of four o’clock and its retention policy.

Before the copy:
1. Can copy read-only or read/write, shared or unshared snapshots

Perform the Copy:
1. Select snapshot to copy
2. Snapshot is copied
3. Copy is read/write, inherits parent data state and retention policy


File System Snapshot Copy - Operation

To copy a snapshot of a file system, first access the Properties page of that file
system.

1. Go to the Snapshots tab.
2. Check the box next to the snapshot to copy.
3. On the More Actions drop-down list, select Copy.
4. On the Copy Snapshot window, the system provides a unique name for the snapshot copy that is based on the time of creation, or a customized name can be assigned. Click OK to create the copy.


Accessing a File System Read/Write Snapshot

Initial State

The process of accessing a file system read/write snapshot requires performing tasks on the storage system and on the client that accesses the snapshot. In this example, a client accesses the three o’clock read/write file system snapshot.


Configure Share to Snapshot

On the storage system, an NFS and/or SMB share must be configured on the read/write snapshot of the file system. This task is completed from their respective pages.


Dell EMC Unity tasks:
1. Configure NFS/SMB share to the read/write file system snapshot

Connect and Access Share

Now tasks from the client must be completed. The client must be connected to the
NFS/SMB share of the snapshot. After connection to the share, the client can
access the snapshot resource.

Dell EMC Unity tasks:
1. Configure NFS/SMB share to the read/write file system snapshot

Client tasks:
1. Connect to the NFS/SMB share
2. Access the shared resource


Accessing a File System Read-Only Snapshot

Initial State

The process of accessing a file system read-only snapshot is different from accessing a read/write snapshot. The read-only file system snapshot is exposed to the client through a checkpoint virtual file system (CVFS) mechanism that snapshots provide. Read-only snapshot access does not require performing any tasks on the storage system. All the tasks are performed on the client through its direct access to the file system. The tasks for NFS clients are slightly different from the tasks for SMB clients. The example shows NFS and SMB access of a read-only snapshot.


Connect NFS Client to NFS Share

The first task for an NFS client is to connect to an NFS share on the file system.


NFS client tasks:
1. Connect to file system NFS share

Access Snapshot

Access to the read-only snapshot is established by accessing the snapshot’s hidden .ckpt data path. This path redirects the client to the point-in-time view that the read-only snapshot captures.

NFS client tasks:
1. Connect to file system NFS share
2. Access the snapshot hidden .ckpt data path

Connect SMB Client to SMB Share

Similarly, the first task for an SMB client is to connect to an SMB share on the file
system.


NFS client tasks:
1. Connect to file system NFS share
2. Access the snapshot hidden .ckpt data path

SMB client tasks:
1. Connect to file system SMB share

SMB Previous Versions Tab

Access to the read-only snapshot is established by the SMB client accessing the
Previous Versions tab of the SMB share. It redirects the client to the point-in-time
view that the read-only snapshot captures.

NFS client tasks:
1. Connect to file system NFS share
2. Access the snapshot hidden .ckpt data path

SMB client tasks:
1. Connect to file system SMB share
2. Access the snapshot Previous Versions tab


Through CVFS

The read-only snapshot is exposed to the clients through the CVFS mechanism.
Therefore the clients can directly recover data from the snapshot without any
administrator intervention. For example, if a user either corrupted or deleted a file
by mistake, that user could directly access the read-only snapshot. Then from the
snapshot the user can get an earlier version of the file and copy it to the file system
for recovery.

Clients can recover data directly from the snapshot via CVFS.
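To make the client-side recovery concrete, here is an illustrative NFS example. The server name, export, snapshot directory, and file names are all placeholders; the directory names under .ckpt vary by snapshot.

# Mount the file system NFS share (server and export are examples)
mount -t nfs nas01.example.com:/fs01 /mnt/fs01

# List the point-in-time views exposed through the hidden .ckpt path
ls /mnt/fs01/.ckpt

# Copy an earlier version of a file back into the live file system
cp /mnt/fs01/.ckpt/2022_05_02_04.00.00/report.docx /mnt/fs01/report.docx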


Activity: File System Snapshots

Virtual lab for facilitated sessions:


• Enable a snapshot schedule during file system creation
• Create snapshots of an existing file system
• Configure access to a read/write snapshot and perform write operations to it
• Access read-only snapshots from an SMB Windows client and from an NFS Linux client

Native vVol Snapshots



Native vVol Snapshots - Overview

Dell Unity XT storage systems support creating snapshots of “data” (.vmdk) vVols
in Unisphere, Unisphere CLI, or REST API.

• Snapshots that are created in Unisphere are not listed in vSphere.
• You cannot create snapshots for “Config” or “Swap” vVols.
• You can also create snapshots in vSphere, using the VMware VASA Provider.

vVol snapshot restoration is also supported using Unisphere, Unisphere CLI, and
REST API.

• Snapshots that are created in vSphere using the VASA Provider can be
restored using vSphere, Unisphere, Unisphere CLI, or REST API.
• Snapshots that are created using Unisphere, Unisphere CLI, or REST API
cannot be restored using vSphere.
• Virtual Machines should be powered off before issuing a restore operation.

Snapshot schedules are not supported on vVols.
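As a sketch of the CLI path (credentials are placeholders, and both the vVol object path and the ability to pass a vVol ID to /prot/snap create are assumptions to verify in the Unisphere CLI User Guide for your release):

# List virtual volumes to identify a "data" vVol (object path is an assumption)
uemcli -d 192.168.1.30 -u admin -p MyPassword! /stor/prov/vmware/vvol show

# Create a snapshot of the data vVol; Config and Swap vVols are not supported
uemcli -d 192.168.1.30 -u admin -p MyPassword! /prot/snap create -source <data_vvol_id> -name vvol_snap1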


Creating a vVol Snapshot in Unisphere

View Virtual Volumes

Click VMware, and choose the Virtual Volumes tab.

Open vVol Properties

From Unisphere, you can create snapshots only for "data" vVols. Check the box to
select a data vVol. Then click the pencil icon to edit the selected virtual volume.


Select Snapshots

Go to the Snapshots tab.


Create Snapshot

Initially, no snapshots exist for the vVol. Click the plus sign + to create a snapshot.

Note: vVol snapshots cannot be scheduled.


Define Snapshot Name

The system automatically generates a name for the snapshot. You can accept the
name, or enter a different name for the snapshot. Click OK to create the snapshot.


View Listed Snapshots

The new snapshot is shown on the Snapshots tab.


Restore a vVol from a Snapshot in Unisphere

Open vVol Properties

To restore a data vVol from a snapshot in Unisphere:

1. Click VMware, and choose the Virtual Volumes tab.
2. Check the box for the data vVol, then click the pencil icon to view Properties.


Select Snapshots

3. Select the Snapshots tab to view available snapshots for the data vVol.

Restore Snapshot

4. Check the box to select the wanted snapshot. From the More Actions drop-down menu, select Restore.


Confirm Operation

5. Verify that the virtual machine is powered off, and click Yes to restore the vVol from the snapshot.


Data Protection with Snapshots Key Points

1. Snapshots Overview
a. The Dell Unity XT Snapshots feature enables space efficient point-in-time copies of storage resources for block, file, and VMware "data" vVols.
− The snap images can be read-only or read/write and used in various ways.
− The production data can be restored from a snapshot to a known point-in-time data state.
b. Snapshots of storage resources (LUNs, file systems, and VMware datastores) are architected using Redirect on Write technology.
c. Snapshots can be scheduled at the time the storage resource is created, or manually created from the resource properties page.
d. Snapshot operations include Restore, Attach to host, Detach from host, and Copy, depending on the source storage resource.
2. LUNs and Consistency Groups Snapshots
a. Dell Unity XT Snapshots provide a mechanism for capturing a snapshot of the multiple LUNs within a consistency group.
− The operation captures a write-order consistent image of the group of LUNs.
− Multiple hosts can be attached to any specific LUN snapshot or multiple snapshots within the tree.
b. Dell Unity XT system operations on LUN-based snapshots are Restore, Attach to host, Detach from host, and Copy.


− The Restore operation rolls back the LUN or Consistency Group LUN members to the point-in-time data state captured in the snapshot.
− The Snapshot Attach to host operation grants a host a defined access level to a LUN snapshot.
− The Snapshot Detach from host operation revokes the host access to the LUN snapshot.
− The Copy operation makes a replica of an existing LUN snapshot which inherits the parent snapshot data state and retention policy.
3. File System Snapshots
a. Dell Unity XT Snapshots provide a mechanism for capturing a snapshot of file systems.
− Multiple snapshots of a file system capture different point-in-time data states.
− Snapshots of file systems can be scheduled or manually created.
b. Snapshots of a file system can be created either as read-only or read/write, and are accessed in different manners.
− Read/write snapshots of a file system can be shared by creating an NFS or SMB share.
− The read-only file system snapshot is exposed to the client through a checkpoint virtual file system (CVFS) mechanism that Snapshots provides.
c. Dell Unity XT system operations on file system-based snapshots are Restore and Copy.
− The Restore operation rolls back the file system to the point-in-time data state captured in the read-only or read/write snapshot.
− The Copy operation makes a replica of an existing file system snapshot that is either read-only or read/write, shared or unshared.
4. Native vVol Snapshots
a. Dell Unity XT storage systems support creating snapshots of "data" (.vmdk) vVols in Unisphere, Unisphere CLI, or REST API.
− Snapshots of "Config" and "Swap" vVols are not supported.
− Snapshot schedules are not supported on vVols.


b. Dell Unity XT vVol snapshot restore operations are supported using Unisphere, Unisphere CLI, REST API, or the vCenter Server.
− vVol snapshots created in the vSphere environment can be restored using either vCenter Server or the Dell Unity XT management interfaces.
− vVol snapshots created using the Dell Unity XT management interfaces cannot be restored using vCenter Server.

For more information, see the Dell EMC Unity Family Configuring and Managing LUNs, Dell EMC Unity: NAS Capabilities, and Dell EMC Unity Family Configuring vVols documents on the Dell EMC Unity Family Technical Documentation portal at the Dell Technologies site.



Replication Overview

This topic provides an overview of the Dell Unity XT Replication feature. The
architectures of Asynchronous and Synchronous Replications are discussed and
the benefits and capabilities are listed.

Synchronized storage resource replica

• Replicates storage resource
− Synchronized redundant data
− Within same or remote system
• Synchronous replication
− Distance limited


o Recommended 100 km/60 mile limit
o Distance increases latency
o Recommend latency below 10 ms
− Provides zero data loss DR solution
− Available on physical systems
• Asynchronous replication
− Long-distance replication
o Does not impact latency
− DR solution using Recovery Point Objective
o Data amount measured in units of time
− Available on physical and virtual systems
Dell Unity XT Replication is a data protection feature that replicates storage
resources to create synchronized redundant data. With replication, it is possible to
replicate storage resources within the same system or to a remote system. The
replication feature is included in the Unity XT licensing at no additional cost.

There are two replication types available: synchronous and asynchronous. Synchronous replication is only available to replicate data to a remote system. Asynchronous replication can replicate data within the same system or to a remote system.


Asynchronous Local Replication Overview


• Replicates storage resource between pools on a system
− Balance capacity
− Change disk type
• Supported storage resources
− LUNs
− Consistency Groups
− Thin Clones
− VMware vStorage VMFS datastores
− VMware NFS datastores
− File systems
− NAS servers


Remote Replication Overview


• Replicates storage resource between systems
− Forms DR solution based on RPO (see the note following this list)
− Provides data access during planned downtime
• Supported storage resources for synchronous and asynchronous replication
− LUNs
− Consistency Groups
− Thin Clones
− VMware VMFS datastores
− VMware NFS datastores
− File systems
− NAS servers

Note: RPO is an amount of data, which is measured in units of time, to perform automatic data synchronization between the source and remote systems. The RPO for synchronous replication is set to zero. The RPO for asynchronous replication is configurable. The RPO value represents the acceptable amount of data that may be lost in a disaster situation. The remote data is consistent with the configured RPO value. The minimum and maximum RPO values are 5 minutes and 1440 minutes (24 hours).




Creating Replication Sessions

Replication sessions are created for block, file, and VMware storage resources. All are performed in a similar manner.

• For block, the replication is created on a LUN, a group of LUNs that make up a Consistency Group, or a Thin Clone.
• For file, the replication is configured on a NAS server and file systems.
• For VMware, the storage resource is either a LUN-based VMFS datastore or a file system-based NFS datastore.

Replication is configured when the resource is created, or manually from the resource properties page.

• Unisphere provides a wizard for the creation of the storage resources. Each wizard provides an option to automatically create the replication on the resource. Each resource replication creation is nearly identical to the other resources.
• For storage resources already created, replications can be created manually from their Properties page. As with the wizard, the replication creation from the resource Properties page is nearly identical to the other resources.


NAS Server and File System Remote Replication

Because file system access depends on a NAS server, to remotely replicate a file
system, the associated NAS server must be replicated first.

File systems depend on the NAS server:
1. NAS server replicated first
2. Associated file systems replicated next

When a NAS server is replicated, any file systems that are associated with the NAS
server are also replicated.

The system creates separate replication sessions: a session for the NAS server and a session for each associated file system.

Synchronous Replication Overview



Synchronous Replication Architecture

System connectivity:
− Data replicated on FC connections between source and destination SPs
− Management on Replication Interfaces and Replication Connection

Write Intent Logs (WIL):
− Small SPA and SPB structures hold persistent fracture log
− Track writes if destination resource is unavailable

The architecture for Dell Unity XT synchronous replication is shown here. Synchronous replication only supports remote replication and does not support local replication. Fundamental to remote replication is connectivity and communication between the source and destination systems. A data connection to carry the replicated data is required. The connection is formed using Fibre Channel connections between the replicating systems. A communication channel is also required to manage the replication session. For synchronous replication, part of the management is provided using Replication Interfaces. These IP-based interfaces are configured on SPA and SPB using specific Sync Replication Management Ports. The management communication between the replicating systems is established on a Replication Connection. It defines the management interfaces and credentials for the source and destination systems.

Synchronous replication architecture uses Write Intent Logs (WIL) on each of the
systems that are involved in the replication. These logs are internal structures and
are created automatically on each system. There is a WIL for SPA and one for SPB
on each system. During normal operations, these logs are used to maintain
synchronization of the source and the replica. The Write Intent Logs hold fracture
logs, designed to track changes to the source storage resource should the
destination storage resource become unreachable. When the destination becomes


reachable again, synchronization between the source and replica automatically recovers using the fracture log.


Synchronous Remote Replication Topologies


For synchronous replication, two topologies can be used: either One-Directional or Bi-Directional replication.

One-Directional replication is typically deployed when only one of the systems is used for production I/O. The second system is a replication target for all production
data and sits idle. If the need arises, the DR system can be placed into production
and provide production I/O. In this scenario, mirroring the production system
configuration, including the number of drives and pool layout, on the DR system is
suggested. Each system would have the same performance potential.

The Bi-Directional replication topology is typically used when production is spread across multiple systems or locations. With this replication topology, production I/O
from each system is mirrored to the peer system. If there is an outage, one of the
systems can be promoted as the primary production system, and all production I/O
can be sent to it. After the outage is resolved, the replication configuration can be
changed back to its original configuration. This replication topology ensures that
both systems are in use by production I/O simultaneously.

A storage resource can only be replicated to a single storage resource on a remote system. This is true regardless of how many replication connections are
configured on the system. It is not possible to synchronously replicate a single
storage resource to multiple destination resources. Only one replication connection
used for synchronous replication can be configured on a Dell Unity XT system.
Therefore, only a single source and destination pair can use synchronous
replication on Dell Unity XT.


Synchronous Replication Process – 7 Steps

Initial State

The synchronous replication of a storage resource has an initial process, and then
an ongoing synchronization process. The starting point is a data populated storage
resource on the source system that is available to production and has a constantly
changing data state.


1-Create Dest. Resource

The first step of the initial process for synchronous replication is to create a storage
resource of the exact same capacity on the destination system. The system creates
the destination storage resource automatically. The new destination resource
contains no data.



2-Create WILs

In the next step, SPA, and SPB Write Intent Logs are automatically created on the
source and destination systems.


3-Initial Copy

Initial synchronization of the source data is then performed. It copies all the existing
data from the source to the destination. The source resource is available to
production during the initial synchronization, but the destination is unusable until
the synchronization completes.



4-Host Writes to Source

After the initial synchronization is complete, the process to maintain synchronization begins. When a primary host writes to the source, the system delays the write acknowledgement back to the host.


5-Write Replicated to Dest.

The write is replicated to the destination system.



6-Dest. Ack.

After the destination system has verified the integrity of the data write, it sends an
acknowledgement back to the source system.


7-Source Ack. Host

At that point, the source system sends the acknowledgement of the write operation
back to the host. The data state is synchronized between the source and
destination. Should recovery be needed from the destination, its RPO is zero.


Should the destination become unreachable, the replication session is out of synchronization. The source Write Intent Log for the SP owning the resource tracks the changes. When the destination becomes available, the system automatically recovers synchronization using the WIL.

Initial process:
1. Create destination resource
2. Create Write Intent Logs
3. Initial copy of source to destination

Synchronization process:
4. Host writes to source
5. Write replicated to destination
6. Destination acknowledges source
7. Source acknowledges host


Animation - Synchronous Replication Process

This animation describes the process of Synchronous Replication between two storage systems.

Movie: The web version of this content contains a movie.


Synchronous Replication States

Session States: Active, Paused, Failed Over, Lost Sync Communications
Sync States: In Sync, Syncing, Out of Sync, Consistent, Inconsistent

Synchronous replications have states for describing the session and its associated
synchronization.

An Active session state indicates normal operations and the source and
destination are In Sync.

A Paused session state indicates that the replication has been stopped and has
the sync state of Consistent. This state indicates that the WIL is used to perform
synchronization of the destination.

A Failed Over session has one of two sync states. It can show an Inconsistent
state meaning the sync state was not In Sync or Consistent before the Failover. If
the sync state was In Sync before the Failover, it will be Out of Sync after session
Failover.

A Lost Sync Communications session state indicates that the destination is unreachable. It can have any of the following sync states: Out of Sync, Consistent, or Inconsistent.

A sync state of Syncing indicates a transition from Out of Sync, Consistent, or Inconsistent. The Syncing state is due to the session changing to an Active state


from one of its other states. For example, if the system has been recovered from
the Lost Sync Communications state.


Synchronous Replication of File Snapshots

• Synchronous replication of file snapshots
− Primary storage resource must already be synchronously replicated
o Supports file system and VMware NFS datastore snapshots
− Read-only file snapshots
o Scheduled or user created snapshots
− Replica retention policies can be customized
− Changes to source snapshot replicated to destination
• Snapshot Schedule Replication
− Consistent schedule for the resource
− Changes replicated to peer
− Schedule inactive on destination
Dell Unity XT Replication provides the ability to synchronously replicate read-only
file snapshots to a remote system. The replication of snapshots ensures that
consistent snapshots for the resource are present at the source and destination
sites. Synchronous replication of snapshots for file systems and VMware NFS
datastores are supported. NAS servers do not support snapshots and thus are not
replicated. Read-write snapshots are not replicated. Any read-only snapshot
created after the primary resource replication is automatically replicated. User
created, or schedule created snapshots are supported for replication. Snapshots
created prior to the primary resource replication are not replicated. A replicated
snapshot has the same properties as the source snapshot such as retention policy
and snapshot name. The destination snapshot can later be customized with a
different retention policy than the source as necessary. Any change to the retention


policy of the source snapshot automatically updates the retention policy of the destination snapshot, even if it was previously modified.

Synchronous replication also supports replication of snapshot schedules for the supported file resource. It ensures a consistent schedule for the resource on both
the source and destination systems. Snapshot schedule replication uses the same
replication management connection as synchronous replication. Making changes to
a replicated snapshot schedule, whether on the source or destination system,
automatically makes the same change to the peer system. Replicated snapshot
schedules are inactive on the destination. They become active if the parent
resource is failed over. In other words, the schedule becomes active when the
resource is no longer a destination of replication.


Synchronous Replication Capabilities

                                             Dell Unity   Dell Unity XT   Dell Unity XT   Dell Unity XT
                                             380/380F     480/480F        680/680F        880/880F

Max replication sessions                     1000         1000            1500            2000
(Synchronous + Asynchronous)

Max synchronous replication sessions         500          750             1000            2000

Max Consistency Group replication sessions   64           64              64              128

Max LUNs in a replicated Consistency Group   32           32              32              32

Max remote systems                           1            1               1               1

The table details the various maximum capabilities for synchronous replication that
are based on specific Dell Unity XT models. The maximum replication sessions
include all replication sessions on the system, which include both synchronous and
asynchronous replication sessions, local or remote. The replication destination
storage resources count towards the system maximums, even though they are not
host accessible as a destination image. In Dell Unity XT, only one replication
connection that is used for synchronous replication, or synchronous and
asynchronous replication, can be created. Only one pair of systems can replicate
synchronously with each other.

Synchronous Replication Configuration



Synchronous Replication Creation Process

1. Identify Synchronous FC Ports for establishing FC connectivity for data
2. Create Replication Interfaces using Sync Replication Management Ports on source and destination
3. Create Replication Connection on source or destination
4. Verify and Update on peer system
5. Create/select storage resource to replicate
6. Define replication settings
7. Create the session

The steps for creating remote replication sessions are different depending upon the
replication mode; either asynchronous or synchronous. Synchronous remote
replication steps are covered here. Before a synchronous replication session is
created, communications must be established between the replicating systems.

The first step is to identify the Synchronous FC Ports on the source and destination
systems for use to establish FC connectivity. This connectivity forms the
connections that carry the data between the two replicating systems.

Next, create Replication Interfaces on both the source and destination systems.
The interfaces must be created on the Sync Replication Management Ports and
form a portion of the management channel for replication.

A Replication Connection between the systems is created next. This step is performed from either the source or the destination. It establishes the management channel for replication.

After the connection is created, verify it can be seen on the peer system.

Communications are now in place to create a replication session for a storage resource. A storage resource can now be selected for replication. It can be


selected during the resource creation wizard. Or if the storage resource already
exists, it can be selected from the storage resource Properties page.

Next, configure the replication settings which define the replication mode and
destination system. The system automatically creates a destination resource and
the Write Intent Logs on both systems.

The replication session is established.


Synchronous Remote Replication Communication

Before you create a synchronous remote replication session, you must configure
active communications channels between the two systems. The active
communications channels are different for the two replication modes; the synchronous configuration is shown here.

1. Synchronous FC connections: FC-based connectivity between source and destination SPs; carries replicated data
2. Replication Interfaces [sync]: part of the management channel
3. Replication Connection: mode of replication; channel for management

1. The first communication configuration required for synchronous replication is the Fibre Channel connections between the corresponding SPs of the source and destination systems.
− The Fibre Channel connectivity can be zoned fabric or direct connections.
− This connectivity carries the replicated data between the systems.
2. The Replication Interfaces are configured next. These interfaces are IP-based connections configured on specific Sync Replication Management Ports on the SPs of each system.
− These interfaces are part of the replication management channel.
3. The Replication Connection is configured next.
− The connection defines the replication mode, the management interface, and credentials for both replicating systems.
− The Replication Connection completes the configuration of the management channel.


Determining Synchronous FC Ports

One of several Fibre Channel ports on each SP of the Dell Unity XT system is configured and used for synchronous replication. If available, the system uses Fibre Channel Port 4 of SPA and SPB. If not available, the system uses Fibre Channel Port 0 of I/O module 0. If that is not available, Port 0 of I/O module 1 is used.

After the Synchronous FC Ports on the replicating systems are verified, the Fibre
Channel connectivity is established between the corresponding SP ports on each
system. Direct connect or zoned fabric connectivity is supported.

Although the Synchronous FC ports can also support host connectivity, Dell
recommends that they be dedicated to synchronous replication.

Unisphere

In Unisphere, navigate to the System View page for the rear view of the DPE to
identify the Synchronous FC Ports. In the example, the SPA FC Port 4 is selected
and its Replication Capability lists Sync replication.


UEMCLI

The UEMCLI command /remote/sys show -detail is available to verify the Fibre Channel ports that the system has specified as the Synchronous FC Ports on the SPs. In the abbreviated example output, the system specifies Fibre Channel Port 4 as the Synchronous FC port for SPA and SPB.

Ordering of SPA/SPB Synchronous FC Ports:
1. FC Port 4
2. Module 0 FC Port 0
3. Module 1 FC Port 0

Verify Synchronous FC Ports using the CLI console:

Unisphere CLI> uemcli /remote/sys show -detail

Storage system address: 192.168.1.30
Storage system port: 443
HTTPS connection
~ Output abbreviated ~

Health details = "Communication with the replication host is established. No action is required."

Synchronous FC ports = spa_fc4, spb_fc4


Replication Interfaces – Synchronous

The Replication Interfaces for synchronous replication are created first.

1. To create replication interfaces, select Interfaces from the PROTECTION & MOBILITY section in Unisphere.
2. Select + (ADD) from the Interfaces page.
3. Select the physical ports for the replication interfaces on both the SPs.
− Synchronous replication requires that the Interfaces be created on the Sync Replication Management Port of each SP.
− The Sync Replication Management Port is a virtual device that uses the same physical network connection as the SP Management port.
− An IP address and subnet mask must be provided for both SPs. Gateway addressing is optional, and a VLAN ID configuration is also provided if needed.
− The interfaces are configured on the same network as the SP Management, or on a different network if VLANs are used.


Important: The Replication Interfaces must be configured on both the source and destination systems. This action must be repeated on the peer system.
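As a CLI illustration, an interface can be created on each SP. The addresses are placeholders, the port IDs are assumptions (list the actual ports with a port show command), and the -type replication switch should be verified in the Unisphere CLI User Guide for your release.

# Create a replication interface on each SP (IP addresses are examples);
# for synchronous replication the port must be the Sync Replication
# Management Port of each SP (port IDs here are placeholders)
uemcli -d 192.168.1.30 -u admin -p MyPassword! /net/if create -type replication -port spa_mgmt -addr 192.168.1.41 -netmask 255.255.255.0
uemcli -d 192.168.1.30 -u admin -p MyPassword! /net/if create -type replication -port spb_mgmt -addr 192.168.1.42 -netmask 255.255.255.0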


Replication Connections – Synchronous

After the Replication Interfaces are created, a Replication Connection is created between the two systems.

The Replication Connection creation is performed from either the source or destination system.

1. Select Replication from the PROTECTION & MOBILITY section in Unisphere.
2. Select the Connections tab.
3. Select + (ADD) from the Connections page.
4. The requirements of the connection include the remote system management IP address and its management credentials.
− The local system management password is also required.
5. Finally, a replication mode must be selected from the drop-down list. Choices are Asynchronous, Synchronous, or Both.
− For synchronous replication, the Mode must be set to Synchronous or Both.
− If Both is selected, synchronous and asynchronous sessions can be configured between the two systems.


Verifying Synchronous Replication Communications

After the Replication Connection between systems has been created, the
connection is verified from the peer system using the Verify and Update option.
This option is also used to update Replication Connections if anything has been
modified with the connection or the interfaces. The updated connection status is
displayed.
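For reference, the equivalent CLI operations are sketched below. Addresses, passwords, and the connection ID are placeholders, and the switch names (-connectionType in particular) should be confirmed in the Unisphere CLI User Guide for your release.

# Create the replication connection from the source system to the peer
uemcli -d 192.168.1.30 -u admin -p MyPassword! /remote/sys create -addr 192.168.2.30 -srcUsername admin -srcPassword MyPassword! -dstUsername admin -dstPassword PeerPassword! -connectionType sync

# Verify (and update) the connection from either system
uemcli -d 192.168.1.30 -u admin -p MyPassword! /remote/sys -id RS_1 verify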


Synchronous Session – Resource Creation Wizard

A synchronous replication session is created as part of the wizard that creates the
storage resource.

From the LUN creation wizard example, the Replication step within the wizard is shown.

1. Checking the Enable Replication option exposes the Replication Mode and Replicate To fields required to configure the session.
2. The mode must be set to Synchronous to create a synchronous replication session.
3. A Destination Configuration link is also exposed to provide information concerning the destination resources used for the session.


Synchronous Session – Resource Properties

1. A synchronous replication session is also created from an existing storage resource Properties page.
2. From the LUN Properties page example, the Configure Replication button is presented to create a replication session. It starts a wizard with several steps to configure the replication session.
3. The Replication Settings step requires the Replication Mode and Replicate To settings for the session. The mode must be set to Synchronous to create a synchronous replication session.


Important: The example shows the configuration of synchronous replication of a block storage resource. However, synchronous replication is also supported for NAS servers and associated file systems. In the case of file storage resources, the replication settings also include an option to automatically search for user snaps on the replication destination when the Reuse destination resource option is selected. The selection enables the replication to synchronize source and destination data using system or user snaps as the common base and avoid a full synchronization.


Synchronous Session – Destination Resources

The next step defines what resources on the destination system the replicated item
will use. The Name and Pool settings are required. More options are available
based on the destination system. In this example, the destination is a Hybrid model
that supports Data Reduction.


Synchronous Session – Summary

The wizard presents a Summary screen for the configured replication. In the
example, the session settings for the replication and destination are displayed.


Synchronous Session – Results

The creation Results page displays the progress of the destination resource
creation and the session creation. When it is complete, the created sessions can
be viewed from the Replications page by selecting the Sessions tab.


Synchronously Replicated Snapshot Schedules

During the creation of Snapshot Schedules, there is an option to synchronously replicate the schedule. Schedules can be created in Unisphere from the Snapshot Schedule page of the PROTECTION & MOBILITY section. When synchronous replication is configured, the Synchronize snapshot schedule to remote system option becomes available for selection. When selected, the created schedule is replicated synchronously to the peer system. When a schedule is selected to apply to a resource, it has (Sync Replicated) appended to its name. Synchronously replicated schedules are not active on replication destinations.



Asynchronous Replication Overview


Asynchronous Local Replication Architecture

The architecture for asynchronous local replication is shown here. The difference from the remote architecture seen previously is that the local architecture does not require communications with a remote peer. The management and data replication paths are all internal within the single system. Otherwise, local replication uses Snapshots in the same manner. Local replication uses source and destination objects on two different pools, similar to how remote replication uses source and destination objects on two different systems.

[Diagram: Asynchronous local replication — source and destination resources in two pools on the same system, each with Rep Snap 1 and Rep Snap 2 pairs; the delta is copied over an internal asynchronous session.]


Asynchronous Remote Replication Architecture

[Diagram: Asynchronous remote replication — a source resource at Site A and a destination resource at Site B, with corresponding Rep Snap 1/2 pairs that alternate to form a common base. Data is replicated on the Replication Interfaces based on the RPO; management runs on the Replication Connection.]

The architecture for Dell Unity XT asynchronous remote replication is shown here.
Fundamental to remote replication is connectivity and communication between the
source and destination systems. A data connection is needed for carrying the
replicated data, and it is formed from Replication Interfaces. They are IP-based
connections that are established on each system. A communication channel is also
needed for management of the replication session. The management channel is
established on Replication Connections. It defines the management interfaces and
credentials for the source and destination systems.

Asynchronous replication architecture uses Snapshots. The system creates two snapshots for the source storage resource and two corresponding snapshots for the destination storage resource. The system-created snapshots cannot be modified. Based on the replication RPO value, the source snapshots are updated in an alternating fashion to capture the source data state differences, which are known as deltas. The data delta for the RPO timeframe is replicated to the destination replica resource, and the corresponding destination snapshot is updated. The two corresponding snapshots capture a common data state, which is known as a common base. The common base can be used to restart a stopped or interrupted replication session.


Asynchronous Remote Replication Topologies

The supported topologies are One-Directional, Bi-Directional, One-to-Many, and Many-to-One.

The Dell Unity XT asynchronous replication feature is supported in many different topologies. While a system can replicate to multiple destination systems, an individual block storage resource can only replicate to a single destination block storage resource.

One-Directional replication is typically deployed when only one of the systems is used for production I/O. The second system is a replication target for all production data and sits idle. If the need arises, the DR system can be placed into production and provide production I/O. In this scenario, mirroring the production system configuration on the DR system is suggested, as each system would have the same performance potential. For physical systems, this configuration would mean mirroring the drive configurations and the pool layout. On Dell UnityVSA systems, this configuration would mean using similar virtual drives and pools.

The Bi-Directional replication topology is typically used when production I/O is spread across multiple systems or locations. When this replication topology is used, production I/O from each system is mirrored to the peer system. If there is an outage, one of the systems can be promoted as the primary production system, and all production I/O can be sent to it. After the outage is resolved, the replication configuration can be changed back to its original configuration. This replication topology ensures that both systems are in use by production I/O simultaneously.

The One-to-Many replication topology is deployed when production exists on a single system, but replication must occur to multiple remote systems. This replication topology can be used to replicate data from a production system to a remote location to provide local data access to a remote team. At the remote location, Dell Unity XT Snapshots can be used to provide host access to a local organization or test team.

The Many-to-One replication topology is deployed when multiple production systems exist, and replicating to a single system to consolidate the data is required. This topology is useful when multiple production data sites exist, and data must be replicated from these sites to a single DR data center. One example of this configuration is Remote Office Branch Office (ROBO) locations. A Dell UnityVSA may be deployed at each ROBO site, with all of them replicating back to a single All-Flash or Hybrid system. Using Dell UnityVSA at ROBO locations eliminates the need for a physical Dell Unity XT system at each site.

For the One-to-Many and Many-to-One replication topology examples, One-Directional replication is depicted. One-Directional replication is not a requirement when configuring the One-to-Many and Many-to-One replication topologies. Each individual Replication Connection can be used for Bi-Directional replication between systems, which enables more replication options than depicted here. Again, a single storage resource can only be replicated to a single destination storage resource.


Asynchronous Replication Process - 8 Steps

Initial State

The asynchronous replication process is the same for local and remote replication.
Shown here is remote replication. The asynchronous replication of a storage
resource has an initial process and an ongoing synchronization process. The
starting point is a data populated storage resource on the source system that is
available to production and has a constantly changing data state.


1-Create Dest. Resource

The first step of the initial process for asynchronous replication is to create a
storage resource of the exact same capacity on the destination system. The
system automatically creates the destination storage resource. The destination
storage resource contains no data.



2-Create Rep Snapshot Pairs

In the next step, corresponding snapshot pairs are created automatically on the
source and destination systems. They capture point-in-time data states of their
storage resource.



3-Source Rep Snap 1 Copied to Dest.

The first snapshot on the source system is used to perform an initial copy of its
point-in-time data state to the destination storage resource. This initial copy can
take a significant amount of time if the source storage resource contains a large
amount of existing data.


4-Dest. Rep Snap 1 Updates

After the initial copy is complete, the first snapshot on the destination system is
updated. The data states that are captured on the first snapshots are now identical
and create a common base.



5-Source Rep Snap 2 Updated

Because the source storage resource is constantly changing, its data state is no
longer consistent with the first snapshot point-in-time. In the synchronization
process, the second snapshot on the source system is updated, capturing the
current data state of the source.



6-Delta Copied to Dest.

A data difference or delta is calculated from the two source system snapshots. A
delta copy is made from the second snapshot to the destination storage resource.


7-Dest. Rep Snap 2 Updated

After the copy is complete, the second snapshot on the destination system is
updated to form a common base with its corresponding source system snapshot.



8-Cycle Alternates at RPO Between Rep Snaps

The cycles of delta copies continue for the session by alternating between the first
and second snapshot pairs that are based on the RPO value. The first source
snapshot is updated, the data delta is calculated and copied to the destination. The
first destination snapshot is then updated forming a new common base. The cycle
repeats using the second snapshot pair upon the next RPO synchronization time.


Initial process:
1. Create destination resource
2. Create rep snapshot pairs
3. Source Rep Snap 1 copied to destination
4. Destination Rep Snap 1 updates

Synchronization process:
5. Source Rep Snap 2 updated
6. Delta copied to destination
7. Destination Rep Snap 2 updated
8. Cycle alternates at RPO between Rep Snaps
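The alternating-pair logic can be made concrete with a small, self-contained Python model. This is a toy illustration of the cycle described above, not Dell code: a resource is modeled as a dict of blocks, and replicating a delta is just a dictionary update.

```python
# Toy model of the alternating snapshot cycle (steps 5-8 above).
# 'source' mutates between RPO intervals; snaps[0]/snaps[1] alternate as
# the common base, mirroring the Rep Snap 1 / Rep Snap 2 pairs.

def delta(old, new):
    """Blocks present or changed in 'new' relative to 'old'."""
    return {k: v for k, v in new.items() if old.get(k) != v}

source = {"b0": "AAAA", "b1": "BBBB"}   # production resource (block -> data)
dest = {}                               # destination replica

snaps = [dict(source), None]            # Rep Snap 1 captures the initial state
dest.update(source)                     # initial copy (steps 3-4)
active = 1                              # next snap pair to refresh

for rpo_tick in range(3):               # three RPO synchronization cycles
    source["b1"] = f"write-{rpo_tick}"  # production keeps changing
    snaps[active] = dict(source)        # step 5: refresh the alternate snap
    base = snaps[1 - active]            # latest common base
    changes = delta(base, snaps[active])  # step 6: compute the delta
    dest.update(changes)                # step 6: copy the delta to destination
    # step 7: destination snap refresh -> the pair forms the new common base
    active = 1 - active                 # step 8: alternate the pairs
    assert dest == source               # destination matches the captured state

print("replica in sync:", dest == {"b0": "AAAA", "b1": "write-2"})
```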


Animation - Asynchronous Replication Process

This animation describes the process of Asynchronous Replication between two storage systems.

Movie:

The web version of this content contains a movie.


Asynchronous Replication of Snapshots

• Asynchronous replication of snapshots
− Primary storage resource must also be replicated
− Unattached block and read-only file snapshots
− Scheduled or user-created snapshots
• Primary storage resource snapshots can be:
− LUNs
− Consistency Groups
− Thin Clones
− VMware VMFS datastores
− VMware NFS datastores
− File systems
• Replica retention policies can be customized
− Cost savings
− Compliance
With the Dell Unity XT replication feature, it is also possible to asynchronously
replicate a snapshot of a primary storage resource. They can be replicated either
locally or remotely. Also referred to as “Snapshot Shipping,” snapshot replication
requires that the primary storage resource is replicated. Block-based unattached
snapshots and file-based read-only snapshots are supported. The snapshots can
either be user created or created by a schedule.


Snapshot replication can be enabled on all resources that support asynchronous replication, including LUNs, Consistency Groups, Thin Clones, VMware VMFS datastores, VMware NFS datastores, and file systems.

The snapshot replicas can have retention policies applied to them that differ from the source. The feature has multiple use cases. One is cost savings: with snapshots replicated to a lower-end system on the destination site, the source snapshots can be deleted from the higher-end production source system, saving capacity and its associated costs on the production system. Another use case is compliance: the retention policy on the snapshot replicas can be tailored to any compliance needs, such as medical or governmental storage requirements.


Architecture for Asynchronous Replication of Snapshots

[Diagram: Asynchronous replication of snapshots — a user snap of the source resource at Site A is replicated over its own asynchronous session alongside the resource session, with the Rep Snap 1/2 pairs providing the common base.]

The architecture for snapshot asynchronous replication is shown here. This example illustrates remote replication. It uses the asynchronous remote replication architecture seen earlier, with the addition of a user snapshot of the storage resource. As shown, the storage resource itself is being replicated.

A snapshot of that resource is taken and is selected to be replicated. A data delta is then calculated between the user snapshot and the latest common base replication snapshot. That data delta is then replicated to the destination storage resource at the next RPO synchronization time. A snapshot of that resource is then created on the destination system, capturing the identical data state of the source snapshot.

Snapshot replication is similar to the replication of the storage resource itself. On the source side, the difference for a user snapshot is that the data delta calculation uses a replication snapshot and the user snapshot. On the destination side, the difference is that a user snapshot of the updated resource is created rather than refreshing a replication snapshot. A point to note is that there is an efficiency of data replication between the snapshot and the storage resource. The snapshot replication delta occurs first, and then the storage resource delta is processed as part of its ongoing RPO-based synchronization cycle. If there is common data in both deltas, that data is only replicated to the destination with the snapshot delta. It is not re-replicated with the storage resource delta. Another notable point is that since the snapshot data does not change, it is only replicated a single time. After that, it is not part of any RPO-based synchronization cycle needed for the replicated primary storage resource.


Asynchronous Replication Capabilities

The table details the various maximum capabilities for asynchronous replication
that are based on specific Dell Unity XT models. The maximum replication sessions
include all replication sessions on the system, which include both synchronous and
asynchronous replication sessions, local or remote. The replication destination
storage resources count towards the system maximums, even though they are not
host accessible as a destination image.

                              Dell       Dell Unity    Dell Unity    Dell Unity    Dell Unity
                              UnityVSA   XT 380/380F   XT 480/480F   XT 680/680F   XT 880/880F

Max replication sessions      16         1000          1000          1500          2000
(Synchronous + Asynchronous)

Max concurrent replication    8          256           256           256           256
sessions

Max LUNs per replicated       50         50            50            50            50
Consistency Group

Max replicated NAS servers    4          90            128           128           256

Max initial replication       4          32            32            32            32
syncs

Max target systems            16         16            16            16            16



Asynchronous Replication Configuration


Asynchronous Replication Creation Process

1. Create Replication Interfaces on source and destination
2. Create Replication Connection on source or destination
3. Verify and Update on peer system
4. Create/select storage resource to replicate
5. Define replication settings
6. Create the session

The steps for creating remote replication sessions differ depending upon the replication mode, either asynchronous or synchronous. Asynchronous remote replication steps are covered here. Before an asynchronous replication session can be created, communications must be established between the replicating systems.
1. Create Replication Interfaces on both the source and destination systems. The interfaces form the connection for replicating the data between the systems.
2. Create a Replication Connection between the systems. This step is performed on either the source or the destination. It establishes the management channel for replication.
3. Verify the connection from the peer system. Communications are now in place for the creation of a replication session for a storage resource.
4. Define a session for a storage resource during the resource creation, or select a source for replication if the storage resource already exists.
5. Define the replication settings, which include the replication mode, RPO, and the destination. The system automatically creates the destination resource and the Snapshot pairs on both systems.
6. Create the replication session.
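For scripting, the wizard's work corresponds roughly to a create on the replicationSession REST type. In the hedged sketch below, the resource IDs are illustrative, the destination resource is assumed to exist already, and maxTimeOutOfSync carries the RPO in minutes; verify the attribute names against the REST API reference. The unity_login() helper comes from the earlier example.

```python
# Create an asynchronous replication session between existing resources.
sess = unity_login()
payload = {
    "name": "lun01-async",           # illustrative session name
    "srcResourceId": "sv_1",         # source LUN (illustrative ID)
    "dstResourceId": "sv_2",         # destination replica (illustrative ID)
    "maxTimeOutOfSync": 60,          # RPO in minutes; -1 would mean manual sync
}
r = sess.post(f"{UNISPHERE}/api/types/replicationSession/instances", json=payload)
r.raise_for_status()
print("Session created:", r.json()["content"]["id"])
```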


Asynchronous Remote Replication Communication

Before you create an asynchronous remote replication session, you must configure active communications channels between the two systems. The active communication channels differ for the two replication modes; asynchronous is shown here.

[Diagram: 1. Replication Interface — IP-based connectivity between source and destination SPs; carries the replicated data. 2. Replication Connection — pairs the Replication Interfaces, defines the mode of replication, and provides the channel for management.]

1. The first communications configuration required for asynchronous replication is to create Replication Interfaces on the source and destination systems.
− The interfaces are dedicated IP-based connections between the systems that carry the replicated data.
− The interfaces are defined on each SP using IPv4 or IPv6 addressing.
− The interfaces form the required network connectivity between the corresponding SPs of the source and destination systems.
2. A Replication Connection is created between the systems next. The Replication Connection forms the management channel for the remote replication between the systems.

− The connection pairs together the Replication Interfaces between the source and destination systems.


− It also defines the replication mode between the systems: asynchronous, synchronous, or both.
− The connection is also configured with the management interface and credentials for both replicating systems.


Replication Interfaces – Asynchronous

The creation of Replication Interfaces for remote asynchronous replication is covered here. Replication Interfaces are not required for local asynchronous replication.
1. From Unisphere, the PROTECTION & MOBILITY section includes an Interfaces option.
2. From the Interfaces page, new Replication Interfaces are created.
3. On the creation screen, an available Ethernet Port from the system must be selected.

− An IP address and subnet mask must be provided for both SPs. Gateway addressing is optional, and a VLAN ID can also be configured if needed.

Replication Interfaces must be created on both of the replicating systems. The creation of Replication Interfaces must be repeated on the peer system.


Replication Connection – Asynchronous

After the Replication Interfaces are created, a Replication Connection is created between the two systems. The connection is required for remote asynchronous replication and is not required for local replication.

The Replication Connection is only created on one of the replicating systems.
1. Select Replication from the PROTECTION & MOBILITY section in Unisphere.
2. Select the Connections tab. Select + (ADD) from the Connections page.
3. The requirements of the connection include the remote system management IP address and its management credentials.
− The local system management password is also required.
4. Next, a replication mode must be selected from the drop-down list:
− Choices are Asynchronous, Synchronous, or Both.


− For asynchronous replication, the Mode must be set to Asynchronous or Both.
− If Both is selected, synchronous and asynchronous sessions can be configured between the two systems.
5. Throttling is set at the Replication Connection level. Asynchronous replication traffic can be throttled to reduce the rate at which data is copied to a destination system.

− A bandwidth schedule includes maximum bandwidth (KB/s), days of the week, and start and end hour.
− Setting a throttle only controls data being replicated to a remote system, not from a remote system.
− To throttle data received from a remote system, it must be throttled on that remote system.


Verifying Replication Communications

After the Replication Connection between systems has been created, the
connection is verified from the peer system using the Verify and Update option.
This option is also used to update Replication Connections if anything has been
modified with the connection or the interfaces. The updated connection status is
displayed.


Asynchronous Session – Resource Creation Wizard

Asynchronous replication sessions can be created as part of the wizard that creates any storage resource.

From the NAS Server creation wizard example, the Replication step within the wizard is shown.
1. Checking the Enable Asynchronous Replication option exposes the Replication Mode, RPO, and Replicate To fields required to configure the session.
− The mode must be set to Asynchronous to create an asynchronous replication session.


− To create a remote replication session, select the remote system from the Replicate To drop-down. Select Local if configuring a local replication session.
2. A Destination Configuration link is also exposed to provide information concerning the destination resources used for the session.

− Reuse destination resource automatically searches for a resource with the same name on the destination and replicates to it if found.
− If one does not exist, Unisphere creates a new destination resource.
− The replica name, the destination pool, and Storage Processor are configurable in the window.
− There is also a Used as backup only option, unique to file resources, available to select for the replica. If that option is selected, the replica can only be used for backup and cannot be used for replication failover.

As noted before, there is a dependency between a NAS server and a file system. The NAS server must be replicated before any associated file system.


Asynchronous Session – Resource Properties

1. An asynchronous replication session is also created from an existing storage resource Properties page.
2. From the NAS server Properties page example, the Configure Replication button is presented to create a replication session. It starts a wizard with several steps to configure the replication session.
3. The Replication Settings step requires the Replication Mode, RPO, and Replicate To settings for the session. The mode must be set to Asynchronous to create an asynchronous replication session.
4. If the existing storage resource includes snapshots, options are presented to replicate them as well.
− For this NAS server file system example, the Support Asynchronous Snap Replication option is presented.


− When selected, the Replicate all existing snapshots option and the Replicate scheduled snapshots option become available.
5. When the Reuse destination resource option is selected, the system automatically searches for a resource with the same name on the destination and replicates to it if found. If one does not exist, it creates a new destination resource.

− When Automatically search user snap as common base is selected, the system attempts to locate a snapshot in common on both resources to use as a common base.
− If matching snapshots are not found or this option is not enabled, the system performs a full sync.


Asynchronous Session – Destination Resources

The next step defines what resources on the destination system the replicated item
will use and how the replica is configured. For the NAS server example, the Name,
the Pool, and Storage Processor settings are required. By default, the system
configures the replica as close as possible to the source. The user can choose to
customize the replica configuration as needed.

In the NAS server example shown, the NAS server has an associated file system
and a separate replication session is created for it. The table details the destination
resources that are used for the file system. The user can select the file system and
edit its destination configuration to customize the resources that the replica uses.


Asynchronous Session – Summary

The wizard presents a Summary screen for the configured replication session. In
the example, sessions for the NAS server and its associated file system are
configured for creation.


Asynchronous Session – Results

The creation Results page displays the progress of the destination resource
creation and the session creation. When it is complete, the created sessions can
be viewed from the Replications page by selecting the Sessions tab.

Replication Operations


Unisphere Resource Filtering

When replication is in place on systems, resources that are being replicated are displayed on various Unisphere pages. Unisphere resource filtering provides a method for administrators to identify system resources as replication source or destination resources. The Source, Destination, and All resource filter buttons are on Unisphere pages for administrators to filter the displayed resources. The block storage LUNs and Consistency Groups pages include resource filter buttons. The file storage File Systems, NAS Servers, NFS Shares, and SMB Shares pages include resource filter buttons. The VMware storage Datastores and Datastore Shares pages include resource filter buttons. The Replication Sessions page includes resource filter buttons.

For resources being replicated locally, the Sessions page displays those sessions
in all three views. Click through each tab to see the filtered views. The
FASTVP_CG resource is being locally replicated and is seen in all views.

Source

With the Source button selected, the page displays only source resources.


Destination

With the Destination button selected, the page displays only destination
resources.


All

With the default All button selected, source, destination and any resources not
replicated are displayed.


System Level Replication Operations

System level replication operations are available for Pause, Resume, and Failover of replicated resources. The system level operations are run on replication sessions for specified replication connections to remote systems. If multiple replication sessions exist, the system level operation impacts all sessions for the specified remote systems to reduce discrete administrative tasks.

In Unisphere, the system level operations are available from the Replication
Connections page when a replication connection is selected.

Unisphere system level failover
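When many sessions target the same remote system, the system level operations can be approximated in a script: list the sessions, filter by their remote system, and pause each one. The sketch reuses unity_login() from the earlier examples; the remoteSystem field on replicationSession and the pause action name are assumptions to verify against the REST API reference.

```python
# Pause every replication session associated with one remote system.
sess = unity_login()
REMOTE_ID = "RS_1"   # illustrative remoteSystem instance ID

r = sess.get(f"{UNISPHERE}/api/types/replicationSession/instances",
             params={"fields": "name,remoteSystem"})
for entry in r.json().get("entries", []):
    s = entry["content"]
    if (s.get("remoteSystem") or {}).get("id") == REMOTE_ID:
        resp = sess.post(f"{UNISPHERE}/api/instances/replicationSession/"
                         f"{s['id']}/action/pause")
        print(f"paused {s['name']}: HTTP {resp.status_code}")
```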


System Level Pause and Resume

Replication Connection

System level Pause and Resume operations are performed from the Replication
Connections page.
• Select the remote system with replication sessions that you want to perform the
operation on.

− Single or multiple remote system selections are supported.


[Diagram: Replication Connections — UnityA-300F replicates a LUN and a NAS server with file systems over asynchronous and synchronous sessions to the remote systems UnityB-400 and UnityVSA-1.]

Pause Operation

More Actions lists the operations.
1. Select the Pause operation.
2. If a connection supports asynchronous and synchronous replication, select the session type to perform the operation on.

Pause operation


Pause Results

• Source and destination sessions for selected remote systems are paused.

− Supports block and file replication sessions.


Pause operation results

Resume Operation

The Resume operation is performed in the same manner as the Pause operation. Only paused sessions support the Resume operation.
1. From More Actions, select the Resume operation.
2. If a connection supports asynchronous and synchronous replication, select the session type to perform the operation on.

Resume operation


Resume Results

• Source and destination sessions for selected remote systems are resumed.

− Supports block and file replication sessions.


Resume operation results


System Level Failover

The system level Failover operation is designed for unplanned failover. It is run
from the destination system. The single operation fails over all file replication
sessions to the destination system.

Replication Connection

• The system level Failover operation is performed from the Replication Connections page.

− Only a single remote system connection is supported.

Replication Connection

Failover Operation

1. From More Actions, select Failover.

− Failover is not available when multiple remote connections are selected.

2. Select the Skip pre-checks on replication connection option to force a failover if the connection to the source system is not alive.


Failover operation

Failover Results

• Replication sessions for the selected remote system are failed over.

Failover results


Session Operations

Block Storage Resources

Replication session operations for block resources can be performed within Unisphere from two areas.
1. The Replications page Sessions tab provides a list of all the replication sessions.
2. When a specific session is selected, it can be deleted or edited, or various replication operations can be performed using the More Actions drop-down list.
3. Similar operations can also be performed from the Properties page of the replicated block resource. The Replication tab displays information about the session.
4. The Replication tab also provides certain editable fields and buttons to delete the session or perform various replication operations.


File Storage Resources

Replication session operations for file resources can be performed within Unisphere from two areas.
1. The Replications page Sessions tab provides a list of all the replication sessions.
2. When a specific session is selected, it can be deleted or edited, or various replication operations can be performed using the More Actions drop-down list.
3. Similar operations can also be performed from the Properties page of the replicated file resource. The Replication tab displays the replication session for the file resource.
4. Selecting the session enables the More Actions drop-down where operations can be selected.


Source and Destination Operations

Replication session operations can be performed from the source or the destination system. The possible operations differ between source and destination. The operations also differ based on the type of replication, asynchronous or synchronous, and the state of the session.

The example compares operations from healthy asynchronous and synchronous sessions on their source and destination systems.

Operations relating to failover can be performed either from the source or destination system.

Planned failover operations are performed from the source system when both
systems are available.
• Planned failover would be run to test the DR solution or if the source site was
scheduled to be down for maintenance.

Unplanned failover operations are performed from the destination system when
the source system is not available.
• For example, if the source site is affected by a power outage or natural disaster.
The Failover operation would make data available from the destination site.

From the source, it is also possible to perform session Pause or Sync operations.


From the destination, it is only possible to perform a session Failover operation.
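These per-session operations map onto REST actions against a replicationSession instance. The helper below is a hedged sketch reused by the following examples; the action names (pause, resume, sync, failover, failback) follow the Unity REST API reference but should be confirmed for your release, and the session IDs are illustrative.

```python
# Generic helper: invoke an action on a replication session instance.
# Assumes unity_login() and UNISPHERE from the earlier sketches.
def session_action(sess, session_id, action, body=None):
    url = (f"{UNISPHERE}/api/instances/replicationSession/"
           f"{session_id}/action/{action}")
    r = sess.post(url, json=body or {})
    r.raise_for_status()
    return r.status_code

sess = unity_login()
session_action(sess, "rep_sess_1", "sync")    # from the source: manual sync
session_action(sess, "rep_sess_1", "pause")   # from the source: pause
```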


Replication and NAS Server Interfaces

Because a NAS server has networking associated with it, when the server is
replicated its network configuration is also replicated.

During replication, the source NAS server interface is active and the destination NAS server interface is not active. Keeping the source and destination NAS server interfaces the same is fine for sites that share common networking.

For sites where the source and destination have different networking, it is important
to modify the network configuration of the destination NAS server. The modification
is needed to ensure correct NAS server operation in a failover event.


The modification is performed from the NAS server Properties page on the
destination system.
• Select the Override option and configure the destination NAS server for the
networking needs of the destination site.

Because the NAS server effectively changes its IP address when failed over,
clients may need to flush their DNS client cache. The client is then able to connect
to the NAS server when failed over.


Grouped NAS Server and File System Session Operations

Because of the dependence between a NAS server and its associated file systems,
certain NAS server replication session operations are also performed on its file
systems.

Operations grouped to NAS server level: Failover, Failover with sync, Failback, Pause, Resume.

Operations not grouped to NAS server level: Create, Sync, Delete, Modify.

In the example shown, if a failover operation is performed on the NAS server, its two associated file systems also fail over. Discrete file system replication session operations are still permitted.

Grouped operations are only available to sessions that are in a healthy state.
• Grouped operations are prohibited if a NAS server is in the paused or error
state.
• Grouped operations skip any file system sessions that are in paused, error, or
non-replicated states.

The operations capable of being grouped are: Failover, Failover with sync,
Failback, Pause, and Resume.

The Create, Sync, Delete, and Modify replication operations are not grouped and
are performed discretely per session. Discrete operations are also permitted on file
system replication sessions.


Replication Operations - Failover with Sync

Failover with sync is an operation available to asynchronous replication sessions. It is used for a planned event, either scheduled maintenance or disaster recovery testing, when both the primary and secondary sites are available. It provides data availability from the secondary site.

The next page illustrates the process of the operation.

Supported only on asynchronous replication sessions; primary and secondary sites are available. Planned event: maintenance/testing.


Replication Operations - Failover with Sync Process

Select Action

The process starts with issuing the Failover with sync operation from site A which
is the primary production site.


Remove Source Access

The operation removes access to the replicated object on site A.



Sync Storage Objects

A synchronization from the site A object to the site B object happens next.


Pause Replication

After the sync process completes, the replication session is then paused.


Enable Destination Access

The site B object is then made available for access to complete the operation.


Failover with sync process:
1. Issue Failover with sync from site A
2. Access to site A object removed
3. Sync site B object
4. Replication paused
5. Access to site B object allowed
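A minimal sketch of driving this operation programmatically, reusing the unity_login() and session_action() helpers from the earlier examples; the sync parameter on the failover action is an assumption to verify.

```python
# Planned 'failover with sync' of an asynchronous session, issued from the
# source (site A). The 'sync' parameter name is an assumption to verify.
sess = unity_login()
session_action(sess, "rep_sess_async_1", "failover", {"sync": True})
```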


Replication Operations - Planned Failover

Planned Failover is an operation available to synchronous replication sessions and is performed from the replication source. It is used for a planned event, either scheduled maintenance or disaster recovery testing, when both the primary and secondary sites are available. It provides data availability from the secondary site and restarts the replication session in the reverse direction. Before performing this operation, quiesce I/O to the source storage resource.

The next page illustrates the process of the operation.

Supported only on synchronous replication sessions; primary and secondary sites are available. Planned event: maintenance/testing.


Replication Operations - Planned Failover Process

Select Action

The process starts with issuing the Failover operation from site A which is the
primary production site.


Remove Site A Access

The operation removes access to the replicated object on site A.



Sync Storage Objects

The ongoing synchronous replication session synchronizes the data state of the
site B object to the site A object.


Reverse/Restart Session

The existing synchronous replication session reverses direction and restarts, replicating the site B object to the site A object.



Enable Site B Access

The site B object is then made available for access to complete the operation.

Planned Failover process:
1. Issue Failover from site A
2. Access to site A object removed
3. Sync site B object
4. Session reversed and restarted
5. Access to site B object allowed
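The same helpers cover a planned failover of a synchronous session, issued from the source after quiescing host I/O as described above; the session ID is illustrative.

```python
# Planned failover of a synchronous session from the source (site A).
# Reuses unity_login() and session_action() from the earlier sketches.
sess = unity_login()
session_action(sess, "rep_sess_sync_1", "failover")
```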


Replication Operations - Unplanned Failover

Unplanned Failover is an operation available to replication sessions of either mode, asynchronous or synchronous. It is used for an unplanned event when the primary production site is unavailable. It provides access to the replicated data from the secondary site.

The next page illustrates the process of the operation.

Supported on asynchronous or synchronous replication sessions. Unplanned event: primary production site unavailable.


Replication Operations - Unplanned Failover Process

Primary Site Unavailable

The primary production site becomes unavailable and all its operations cease. Data
is not available, and replication between the sites can no longer proceed.


Select Action

A Failover operation is issued from site B which is the secondary production site.



Pause Replication

The operation pauses the existing replication session so that the session does not
start again should site A become available.


Enable Secondary Site Access

The site B object is made available for production access to complete the
operation.

Unplanned Failover process:
1. Site A unavailable, operations cease
2. Issue Failover from site B
3. Session paused
4. Access to site B object allowed
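An unplanned failover is issued against the destination system instead, again reusing the earlier helpers. Whether the Unisphere Skip pre-checks option has a REST parameter equivalent is not assumed here; only the plain failover action is shown.

```python
# Unplanned failover issued from the destination (site B) while site A is
# down. Assumes UNISPHERE points at the DESTINATION system's management
# address when unity_login() is called.
dest_sess = unity_login()
session_action(dest_sess, "rep_sess_1", "failover")
```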


Replication Operations - Resume

Resume is an operation available to replication sessions of either mode, asynchronous or synchronous. It is used to restart a paused replication session. For sessions manually paused by the user, or paused by the system due to an error, the Resume operation is available from the replication source. For sessions paused from an unplanned failover operation, the Resume operation is available from the site that the object is failed over to. When a failed over session is resumed from its paused state, the direction of replication is reversed.

The next page illustrates the process of the operation.

Supported on asynchronous or synchronous replication sessions. Replication resumes in the reversed direction.


Replication Operations - Resume Process

Primary Site Available

The Site A replicated object must be available before the replication session can be
resumed.


Select Action

The Resume operation is issued from site B.



Reverse/Restart Session

The operation restarts the paused session in the reverse direction.

Resume process:
1. Site A becomes available
2. Resume issued from site B
3. Paused session is reversed and restarted

The operation updates the site A object with any changes that may have been
made to the site B object during the failover. The replication session then resumes
in the reverse direction and returns to a normal state. For asynchronous file
replication sessions, there is an option available to perform a synchronization of the
site A data to site B. The option overwrites any changes that are made to site B
during the failover. After the overwrite synchronization, replication is then restarted
in the reverse direction; from site B to site A in this example.

The Resume operation is preferred over Failback in situations where large amounts
of production change have accumulated due to long session pauses.

Resuming a failed over synchronous replication session starts a full synchronization to the original source. The Resume operation restarts the replication and returns synchronization to the sites while maintaining production I/O.

The Failback operation interrupts production I/O to perform the resynchronization of data. If a long resynchronization is needed, production is impacted proportionally to the resynchronization time. To return production to the site A object requires a session Failover operation, followed by another Resume operation.
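Resuming the failed-over session from site B can be sketched with the same helpers; after this call succeeds, replication runs in the reverse direction as described above.

```python
# Resume a failed-over session from site B; replication restarts reversed.
# Reuses unity_login() and session_action() from the earlier sketches.
sess = unity_login()   # against the system the object failed over to
session_action(sess, "rep_sess_1", "resume")
```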


Important: If the replication session was configured to reuse the destination resource and the storage resource has user snaps, a full synchronization can be avoided after an unplanned failover. When the original source system becomes available, the system can use system or user/scheduled snapshots to establish a common base image and only copy deltas.


Replication Operations - Failback

Failback is an operation available to replication sessions that have failed over, either asynchronous or synchronous. As its name implies, it is used to return a replication session to its state before the failover operation.

The next page illustrates the process of the operation.

Supported on asynchronous or synchronous replication sessions. Replication returns to the state before Failover.


Replication Operations - Failback Process

Primary Site Available

The site A replicated object must be available before the Failback operation can be
initiated on a session.


Select Action

The Failback operation is issued from site B.



Sync Storage Objects

The operation removes access to the site B object and synchronizes the site A
object to the data state of the site B object.

Failback process
1. Site A becomes available
2. Failback issued from site B
3. Sync from site B to site A


For asynchronous file replication sessions, an option is available to perform a
synchronization of the site A data to site B. The option overwrites any changes that
were made to site B during the failover.

Failing back a failed over synchronous replication session starts a full
synchronization to the original source. It is important to note that this operation may
be lengthy if the replication session must synchronize a large amount of data to the
original source object.

Important: If the replication session was configured to reuse the
destination resource, and the storage resource has user snapshots, a full
synchronization can be avoided after an unplanned failover. When
the original source system becomes available, the system can use
system or user/scheduled snapshots to establish a common base
image, and only copy the deltas.


Enable Access to Primary Site

The operation then enables access to the site A object for production.

Failback process
1. Site A becomes available
2. Failback issued from site B
3. Sync from site B to site A
4. Access to site A object allowed


Replication Restarts

Replication is restarted using the site A object as the source and the site B object
as the destination. This single operation returns the object's replication to the state
it was in before the failover.

Failback process
1. Site A becomes available
2. Failback issued from site B
3. Sync from site B to site A
4. Access to site A object allowed
5. Replication restarts from site A to site B

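A minimal sketch of issuing the Failback from the destination (site B) system with
UEMCLI follows. The address, credentials, and session identifier are placeholders,
and options such as forcing a full synchronization vary by OE release, so confirm
the syntax in the Unisphere CLI User Guide.

uemcli -d <site_B_mgmt_IP> -u Local/admin -p <password> /prot/rep/session -id <session_id> failback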


Demonstration: Synchronous Remote Replication

This demo covers the synchronous remote replication of a LUN, and the
synchronous remote replication of a NAS Server and file system.

Movie:

The web version of this content contains a movie.



Replica Access


Block Resource Remote Replica Data Access

Remote replica data access:
- No impact to production system resources
- No impact to replication session
- Data backup/recovery
- Application testing/data mining
- DR testing

Replica read/write access is restricted:
- Create a user snapshot of the replica
- Attach a host to the snapshot
- Host access is read-only or read/write

The figure shows a source block resource replicating to a destination replica
resource; host access is through a user snap of the replica.

When block resources are replicated remotely, there are many benefits of being
able to access the replica data. One benefit of replica data access is that there is
no impact on the source system resources that are used for production. Another
positive aspect of accessing replica data is to have no impact on the existing
remote replication session. With replica access to data, backup and recovery of
data are possible. Testing and data mining are also possible with replica data
access. Having access to the replica data is also a valid way to test and validate
the remote replication DR solution.

Access to the replica block data on the destination system is not performed directly
against the replicated block resource. The system marks the remote replica as a
destination image and blocks read/write access to it. Access to the data is
accomplished by taking a user snapshot of the resource and attaching a host to the
snapshot. The snapshot can be a replicated user snapshot from the source, or a
user-created snapshot of the replica resource on the destination system.
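As an illustrative sketch, the user snapshot could be created on the destination
system with UEMCLI and then attached for host access. The management
address, resource identifier, and snapshot name are placeholders, and the attach
step in particular is an assumption whose switches differ by resource type and OE
release; verify both commands against the Unisphere CLI User Guide.

uemcli -d <dest_mgmt_IP> -u Local/admin -p <password> /prot/snap create -source <replica_resource_id> -name lun01_dr_snap
uemcli -d <dest_mgmt_IP> -u Local/admin -p <password> /prot/snap -id <snap_id> attach

Host access to the attached snapshot is then configured so that the host can
mount it read-only or read/write.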


File Resource Remote Replica Data Access

Remote replica data access:
- No impact to production system resources
- No impact to replication session
- Data backup/recovery
- Application testing/data mining
- DR testing

Replica NAS Server production access is inactive:
- Create a user snapshot
- Create a replica NAS Server Backup & Test interface
- Create a Proxy NAS Server

The figure shows a source NAS server and file resource replicating to a destination
replica resource; replica data is accessed through a user snap, by way of the
replica NAS server or a Proxy NAS server.

When file resources are replicated remotely, there are many benefits of being able
to access the replica data. One benefit of replica data access is that there is no
impact on the source system resources that are used for production. Another
positive aspect of accessing replica data is to have no impact on the existing
remote replication session. With replica data access, backup and recovery of data
are possible. Testing and data mining are also possible with replica data access.
Having access to the replica data is also a valid way to test and validate the remote
replication DR solution.

Access to the replica file data is not achieved as directly as access to the source
file resource. File data is accessed through the NAS server that is associated with
the file system data, but a replicated NAS server has its production interfaces
inactive. The recommended method of accessing replica file data is through a user
snapshot of the file resource, which can be obtained in several ways. A user
snapshot of the resource can be made on the source system and replicated to the
destination system for access. A Read-only or Read/Write user snapshot can also
be made from the replica resource on the destination.

Once the user snapshot is on the destination system, there are several methods for
gaining access to its data. One way is to create a Backup and Test IP interface on
the replica NAS server. If a Read/Write snap is made, an NFS client can access a


share that is configured on the snapshot through the replica NAS server Backup
and Test IP interface. Another method for replica file data access is to create a
Proxy NAS server on the destination system. The Proxy NAS server is created in
the same manner as any NAS server. It becomes a Proxy NAS server by running a
CLI command that associates it with the replica NAS server. The Proxy NAS server
must be created on the same SP as the replica NAS server. If the replica NAS
server is configured for SMB, the Proxy NAS server must have the same SMB
configuration. Read-only administrative access to the replica data is provided
through the Proxy NAS server. SMB client Read/Write access to the replica data is
also supported with the Proxy NAS server. For Read/Write access, the user
snapshot must be Read/Write, and a share must be configured on the Proxy NAS
server with its share path set to the Read/Write user snapshot.
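As a minimal sketch, a Backup and Test interface could be added to the replica
NAS server with UEMCLI. The NAS server identifier, Ethernet port, and addresses
are placeholders, and the -role switch shown is an assumption; confirm the
interface creation syntax in the Unisphere CLI User Guide.

uemcli -d <dest_mgmt_IP> -u Local/admin -p <password> /net/nas/if create -server <replica_nas_server_id> -port <ethernet_port> -addr 192.168.1.120 -netmask 255.255.255.0 -role backup

An NFS client can then reach a share that is configured on a Read/Write snapshot
through this interface.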


Create a Proxy NAS Server for Replica File Data Access

Create a NAS Server on the destination system:
- On the same SP as the replicated NAS Server
- Supports multitenancy
- Supports one or multiple NAS servers' data access

Configure the proxy settings to the replicated NAS Server:
- Via SSH and the service command
- Associates the proxy NAS Server with the replicated NAS Server
- NFSRoot option used to define NFS root client access

Syntax:
svc_nas -proxy -help
svc_nas -proxy_share -help

spa:~> svc_nas nas02_Proxy -proxy -add nas02 -NFSRoot ip=192.168.1.115
nas02_Proxy : commands processed: 1
command(s) succeeded
output is complete
Command succeeded

The example configures a proxy NAS Server named "nas02_Proxy" for a
replicated NAS Server named "nas02" and gives root access to an NFS client.

To create a proxy NAS server for accessing replica file data, a new NAS server is
created on the destination system. The new NAS server that becomes the proxy
NAS server must be created on the same storage processor as the replicated NAS
server. It must be configured with the same access protocols that are used on the
replicated NAS server. A similar multitenancy configuration is needed for the proxy
NAS server as for the replicated NAS server. A proxy NAS server can support file
data access for multiple NAS servers.

The new NAS server is then configured with the proxy settings for the replicated
NAS server. This configuration is performed with the service command svc_nas
over a secure shell connection to the destination system. Use svc_nas -proxy
-help to view the command syntax to configure a proxy NAS server. Use
svc_nas -proxy_share -help to view the command syntax to configure an SMB
share for proxy access to a file system on another NAS server. The example
configures a proxy NAS server that is named nas02_Proxy for a NAS server that is
named nas02 and gives a specific NFS client root access.


Data Protection with Replication Key Points

337. Replication Overview

a. The Dell Unity XT Replication feature creates synchronized redundant data
replicas of storage resources within the same system or on a remote system.
b. Synchronous replication is a data protection solution for limited distances.
   - The operation ensures zero data loss between the local source and the remote
     replica.
   - Each block of data written is saved to the local and the remote system at the
     same time.
c. Asynchronous replication is primarily used to replicate data over long distances.
   - The write operations to the source are not instantly replicated to the remote
     replica.
   - All writes are tracked on the source, and the deltas (data differences) are
     replicated to the destination during the synchronization cycles.
d. The Dell Unity XT platform supports the asynchronous local replication of
supported storage resources between pools.
e. The Dell Unity XT platform supports asynchronous or synchronous remote
replication of supported storage resources between storage systems.
f. The supported synchronous remote replication topologies are One-Directional
and Bi-Directional. The supported asynchronous remote replication topologies are
One-Directional, Bi-Directional, One-to-Many, and Many-to-One.
g. The synchronous replication session states are Active, Paused, Failed Over, or
Lost Sync Communications. Depending on the session state, the sync states are
In Sync, Syncing, Out of Sync, Consistent, or Inconsistent.
338. Replication Configuration Process


a. Replication is configured when the supported storage resource is created, or
manually from the resource properties page.
   - Supported block storage resources are LUNs, or LUNs that are members of a
     Consistency Group.
   - Supported file storage resources are NAS servers and their associated file
     systems. (A session for the NAS server and a session for each associated file
     system are created.)
   - Supported VMware storage resources are VMFS and NFS datastores.
b. Before you create a remote replication session, you must configure active
communications channels between the two systems.
   - Asynchronous replications use IP-based connectivity between the source and
     destination SPs.
   - Synchronous replications use FC-based connectivity between the source and
     destination SPs.

339. Asynchronous Replication Configuration

a. The configuration of an asynchronous replication session between Dell Unity XT
systems involves the following procedures.
   - Configure replication interfaces on the source and destination systems.
   - Create a replication connection between the storage systems, and validate the
     connection from the peer system.
   - Define the source and the replication settings (mode, RPO, destination), and
     create the replication session.

340. Synchronous Replication Configuration

a. The configuration of a synchronous replication session between Dell Unity XT
systems involves the following procedures.
   - Identify the Sync Replication Management FC ports on the source and
     destination, and create replication interfaces.
   - Create a replication connection between the storage systems, and validate the
     connection from the peer system.
   - Select the source, define the replication settings, and create the replication
     session.


341. Replication Operations

a. Replication session operations can be performed within Unisphere from two
areas.
   - Select a session from the Replication > Sessions page and an option from the
     More Actions menu.
   - The same More Actions menu options are available from the Replication tab
     on the storage resource properties page.
b. Replication session operations can be performed from the source or destination
system, depending on the type of replication (asynchronous or synchronous) and
the state of the session.
   - Session planned Failover (Synchronous) and Failover with sync
     (Asynchronous) operations can be performed from the source system when
     both systems are available.
   - Session Pause or Sync (Asynchronous) operations are also possible from the
     replication source.
   - Session Resume operations are available from the source when the session is
     paused by the user or the system. The operation is available from the
     destination system for sessions paused by an unplanned failover operation.
   - Session unplanned Failover operations can be performed from the destination
     system when the source system is not available.
   - Session Failback operations are performed from the destination system when
     the original source system becomes available.
c. NAS server replicas on the destination might require network interface
reconfiguration to ensure correct operation after a session failover.
d. NAS server replication session operations are also performed on the associated
file systems.

342. Replica Access

a. Block storage resource replica read/write access is restricted.
   - Access to the data is accomplished through a user snapshot of the resource,
     and attaching a host to the snapshot.
b. Replica NAS server production access is inactive.


   - Access to the data is accomplished through a user snapshot, or by creating a
     proxy NAS server and associating it with the replica NAS server.

For more information, see the Dell Unity: Replication Technologies and the Dell
Unity: MetroSync white papers on the Dell Unity Info Hub site.


Appendix


Adding View Blocks

Select a widget (view block) on the left pane, then select Add View Block.


Health Score Categories

The CloudIQ Health Score engine breaks down into five categories, each of which
is monitored and contributes to the overall health of the system. For each category,
CloudIQ runs a check against a known set of rules and determines whether a
particular resource has issues.

The score can help a storage administrator spot where the most severe health
issues are, based on the five core factors (health categories). The area with the
highest risk to the system's health hurts its score until actions are taken towards
remediation.

These categories are not customer configurable but are built into the CloudIQ
software.

Category: Types of Health Checks
- Component (system health): Components with issues, OE/Firmware compliance
  issues
- Configuration: Non-HA hosts; drive issues: faults subject to use (hot spare,
  RAID 6, RAID 5)
- Capacity: Pools reaching full capacity
- Performance: Processor utilization, SP balance
- Data Protection: RPOs not being met, last snap not taken


Network Time Protocol (NTP) Synchronization

With the NTP synchronization method, the Unity storage system connects to an
NTP server and synchronizes the system clock with other applications and
services on the network.
• Time synchronization is key for Microsoft Windows environments, for both
  client and server systems.
• Time synchronization is necessary to join a NAS server to an Active Directory
  domain, to enable SMB and multiprotocol access.
• Microsoft Windows environments typically use an NTP service that is configured
  on one or more Domain Controllers.

Warning: If the storage system clock is not synchronized to the
same source as the host system clock, some applications do not
operate correctly.
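As a sketch only: NTP servers might be set from UEMCLI as shown below. The
/sys/ntp object path and the -addr switch are assumptions based on common
UEMCLI conventions, and the server names are placeholders; verify the exact
syntax in the Unisphere CLI User Guide for the installed OE version.

uemcli -d <mgmt_IP> -u Local/admin -p <password> /sys/ntp set -addr ntp1.example.com,ntp2.example.com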


Traditional RAID Limitations

Redundant Array of Inexpensive Disks is a technology that has been around since
the late eighties.
• The RAID technology was developed and implemented for redundancy and
  reliability that could exceed that of any large single drive.
• As the size of disk drives increases, current RAID capabilities have several
  limitations.

Long rebuild times from a drive failure
• RAID technology suffers from long rebuild times, which in turn contribute to
  increased exposure to a second drive failure (data loss).
• Pools are limited by the rebuild performance of a single drive.

Inefficient provisioning
• Storage is managed in RAID Group units.
  − If a storage administrator wants to add capacity, the administrator must add
    a RAID Group.
  − This process usually provides more capacity than needed, at a higher cost,
    since disk drives cost money.
• A RAID Group is limited to a maximum of 16 drives and is composed of drives
  of the same type.
• There is no mixing of drive types in a RAID Group.

Dedicated hot spares
• A hot spare is any “healthy” unused drive.
  − A hot spare swaps into a RAID Group containing drives of the same type
    upon the failure of a drive in the RAID Group.
  − The hot spare becomes a part of the storage pool, that is, it is no longer
    unused. The failed drive becomes an “unhealthy” unused drive.
• Replacing the failed drive results in the replacement drive being a “healthy”
  unused drive.


  − The hot spare that was swapped in at the time of failure remains in the
    storage pool even after the failed drive is replaced.
• Spare drives cannot be used to mitigate flash drive wear through
  overprovisioning or to improve pool performance.


Drive Partnership Group

A Drive Partnership Group is a collection of drives within a dynamic pool. Drive
partnership groups are automatically configured by the system.

The figure illustrates a single-tier RAID 5 (4+1) pool: DPG 1 holds 64 total drives,
including 2 drives of spare space. When DPG 1 reaches its maximum capacity of
64 drives, DPG 2 (also RAID 5 4+1, with 6 total drives) is created.

There may be one or more drive partnership groups per dynamic pool.
• Every dynamic pool contains at least one drive partnership group.
• Each drive is a member of only one drive partnership group.
• Drive partnership groups are built when a dynamic pool is created or expanded.
  − A drive partnership group only contains a single drive type.
  − Different sizes of a particular drive type can be mixed within the group.
• Each drive partnership group can contain a maximum of 64 drives of the same
  type.
  − The limit constrains the number of drives that RAID extents can cross.
• When a drive partnership group for a particular drive type is full, a new group is
  started.
• The new group must have the minimum number of drives for the stripe width
  plus hot spare capacity.


RAID Protection Levels and Drive Counts

Depending on the selected RAID protection level, different drive counts must be
selected. The drive count must fulfill the RAID stripe width plus the spare space
reservation set for a drive type.

RAID Type    Number of Drives    RAID Stripe Width
RAID 5       6 to 9              4+1
             10 to 13            8+1
             14 or more          12+1
RAID 6       7 or 8              4+2
             9 or 10             6+2
             11 or 12            8+2
             13 or 14            10+2
             15 or 16            12+2
             17 or more          14+2
RAID 1/0     3 or 4              1+1
             5 or 6              2+2
             7 or 8              3+3
             9 or more           4+4
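As a worked reading of the table: selecting 10 drives for RAID 5 satisfies the 8+1
stripe width and leaves one drive's worth of capacity as spare space. Likewise, the
six-drive RAID 5 minimum covers the 4+1 stripe plus one drive of spare space.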


iSCSI Qualified Name

The storage system automatically generates the IQN (iSCSI Qualified Name) and
the IQN alias. The IQN alias is the alias name that is associated with the IQN. Both
the IQN and the IQN alias are associated with the port, and not the iSCSI interface.

The IQN format is iqn.yyyy-mm.com.xyz.aabbccddeeffgghh where:
• iqn is the naming convention identifier
• yyyy-mm is the point in time when the .com domain was registered
• com.xyz is the domain of the node, reversed
• aabbccddeeffgghh is the device identifier, which can be a WWN, the system
  name, or any other vendor-implemented standard.
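As a hypothetical illustration of the format (all values are made up): in
iqn.2001-03.com.example.0123456789abcdef, 2001-03 is when example.com was
registered, com.example is the reversed domain, and 0123456789abcdef is the
device identifier.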


Supported Configurations for Data Reduction and Advanced Deduplication

Dell Unity OE 4.3 / 4.4
• Data Reduction on All Flash Pool (1): 300 | 400 | 500 | 600, 300F | 400F | 500F |
  600F, 350F | 450F | 550F | 650F

Dell Unity OE 4.5
• Data Reduction on All Flash Pool (1): 300 | 400 | 500 | 600, 300F | 400F | 500F |
  600F, 350F | 450F | 550F | 650F
• Data Reduction + Advanced Deduplication on All Flash Pool (2): 450F | 550F |
  650F

Dell Unity OE 5.0 / 5.1
• Data Reduction on All Flash Pool (1): 300 | 400 | 500 | 600, 300F | 400F | 500F |
  600F, 350F | 450F | 550F | 650F, 380 | 480 | 680 | 880, 380F | 480F | 680F | 880F
• Data Reduction + Advanced Deduplication on All Flash Pool (1): 450F | 550F |
  650F, 380 | 480 | 680 | 880, 380F | 480F | 680F | 880F

Dell Unity OE 5.2
• Data Reduction on All Flash Pool (1): 300 | 400 | 500 | 600, 300F | 400F | 500F |
  600F, 350F | 450F | 550F | 650F, 380 | 480 | 680 | 880, 380F | 480F | 680F | 880F
• Data Reduction on Hybrid Pool (1, 3): 380 | 480 | 680 | 880
• Data Reduction + Advanced Deduplication on All Flash Pool (1): 450F | 550F |
  650F, 380 | 480 | 680 | 880, 380F | 480F | 680F | 880F
• Data Reduction + Advanced Deduplication on Hybrid Pool (1, 3): 380 | 480 | 680 |
  880

(1) The resource can be created on either a Traditional or a Dynamic Pool (for
systems that support Dynamic Pools).
(2) The resource can be created on a Dynamic Pool only.
(3) The pool must contain a flash tier, and the total usable capacity of the flash tier
must meet or exceed 10% of the total pool capacity.



Storage Resource on Low Flash Capacity Pool
When the Flash Percent (%) value of a hybrid flash dynamic pool is below 10%,
data reduction is not supported for the storage resources.

In the example, the Flash Percent (%) of Pool 1 does not comply with the data
reduction requirements.

Pool properties General tab showing the Flash Percent (%).

Data Reduction is disabled and unavailable for any storage resource that is created
from the pool. The feature is grayed out, and a message explains the situation.

The example shows that data reduction is not available when creating a LUN on
Pool 1.


Create LUN wizard with the Data Reduction option unavailable for configuration


Glossary
Drive Extent
A drive extent is a portion of a drive in a dynamic pool. Drive extents are either
used as a single position of a RAID extent or can be used as spare space. The size
of a drive extent is consistent across drive technologies (drive types).

Drive Extent Pool
The management entity for drive extents. It tracks drive extent usage by RAID
extents and determines which drive extents are available as spares.

Dynamic Pool Private LUN
A single dynamic pool private LUN is created for each dynamic RAID Group. The
size of a private LUN is determined by the number of RAID extents that are
associated with the private LUN. A private LUN may be as small as the size of a
single drive.

Dynamic RAID Group
A dynamic pool RAID Group is a collection of RAID extents, and it can span more
than 16 drives. The RAID Group is based on dynamic RAID with a single
associated RAID type and RAID width. The number of RAID Groups and the size
of each RAID Group can vary within a pool, depending on the number of drives
and on how the pool was created and expanded.

iSCSI Node Name
An iSCSI name is not the IP address or the DNS name of an IP host. Names
enable the iSCSI storage resources to be managed regardless of address.

LUN or Logical Unit
A LUN is a single element of storage, while a Consistency Group is a container
with one or more LUNs.

PACO
The Proactive Copy feature, or PACO, enables disks to proactively copy their data
to the hot spare. The operation is triggered by the number of existing media errors
on the disk. PACO reduces the possibility of two simultaneously failed disks by
identifying whether a disk is about to go bad and proactively running a copy of the
disk.


RAID Extent
A collection of drive extents. The selected RAID type and the set RAID width
determine the number of drive extents within a RAID extent. Each RAID extent
contains a single drive extent from a specific number of drives equal to the RAID
width. RAID extents can only be part of a single RAID Group and can never span
drive partnership groups.

SCSI Device Name
The SCSI device name is the principal object that is used in the authentication of
targets to initiators and initiators to targets.

Spare Space
Spare space refers to drive extents in a drive extent pool that are not associated
with a RAID Group. Spare space is used to rebuild a failed drive in the drive extent
pool.

vVol
Virtual volumes (vVols) are VMware objects that correspond to a Virtual Machine
(VM) disk and its snapshots and clones. Virtual machine configuration, swap files,
and memory are also stored in virtual volumes.
