Dell Unity Implementation and Administration: Participant Guide
Module 1 Course Introduction and System Administration
Overview
Unisphere login
The Unisphere user interface is web-based software that is built on HTML5
technology, with support for a wide range of browsers1.
1 The supported browsers are: Google Chrome v33 or later, Microsoft Edge, Mozilla
Firefox v28 or later, and Apple Safari v6 or later.
The interface provides an overall view of what is happening in the environment plus
an intuitive way to manage the storage array.
Interface Navigation
Unisphere interface showing the Navigation Pane and the Main Page
The Unisphere interface has four main areas which are used for navigation and
visualization of the content:
• The Navigation Pane has the Unisphere options for storage provisioning, host
access, data protection and mobility, system operation monitoring, and support.
• The Main Page is where the pertinent information about options from the
navigation pane and a particular submenu is displayed. The page also shows
the available actions that can be performed for the selected object.
• The Top Menu has links for system alarms, job notifications, the help menu,
Unisphere preferences, global system settings, and CloudIQ.
• The Sub Menus provide various tabs, links, and more options for the selected
item from the navigation pane.
Dashboard
View Block
Unisphere Dashboard with menu and submenu options, and view blocks
The Unisphere main dashboard provides a quick view of the system health and
storage health status.
The customized dashboards can be renamed and deleted using the dashboard
submenu.
To add view blocks, open the selected dashboard submenu and select Customize.
Overview
Unisphere CLI enables you to run commands on a Dell Unity storage system from
a host with the Unisphere CLI client installed.
• Unisphere CLI supports provisioning and management of network block and
file-based storage.
• The Unisphere CLI client can be downloaded from the online support website
and installed on a Microsoft Windows or UNIX/Linux computer.
The application is intended for advanced users who want to use commands in
scripts for automating routine tasks.
Command Syntax
The command syntax begins with the executable uemcli, using switches and
qualifiers as described here.
Where:
• Switches: Used to access a system, upload files to the system, and manage
security certificates.
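As a reference, the general command form is sketched below; this is an approximation of the structure documented in the Unisphere CLI User Guide, and the exact object paths, qualifiers, and actions for each task should be taken from that guide.
# General form of a Unisphere CLI command (approximate)
uemcli [<switches>] <object path> [<object qualifier>] <action> [<action qualifiers>]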
Example
In the example, the Unisphere CLI command displays the general settings for a
physical Unity storage system.
• Access a Dell Unity 300F system using the management port with IP address
192.168.1.230.
− The first time a Unity system is accessed, the command displays the system
certificate. (Not shown here.)
− The storage administrator has the choice to accept it only for the session or to
accept and store it for all sessions.
• Log in to the system with the provided local admin user credentials.
• Retrieve the array's general settings and output the details on the screen.
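A sketch of what that command might look like is shown below, using the management IP address from this example and placeholder credentials (the password is not a real one).
# Accept and store the system certificate, log in as the local admin, and show general settings
uemcli -d 192.168.1.230 -u Local/admin -p Password123! -sslPolicy store /sys/general show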
REST API is a set of resources, operations, and attributes that interact with the
Unisphere management functionality.
A storage administrator can perform automated routines on the array using web
browsers and programming or scripting languages.
Deep Dive: For more details, read the Unisphere Management REST
API Programmer’s Guide available on the online support website.
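As an illustration only, a read request against the REST API could look like the sketch below; the IP address and credentials are placeholders, and the fields listed are examples of queryable attributes.
# The X-EMC-REST-CLIENT header is required; -k skips certificate validation for a lab system
curl -k -u admin:Password123! -H "X-EMC-REST-CLIENT: true" -H "Accept: application/json" "https://192.168.1.230/api/types/pool/instances?fields=name,sizeTotal,sizeFree"
# POST (modify) requests additionally require the EMC-CSRF-TOKEN header returned by a prior GET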
There are two user authentication scopes for Dell Unity storage systems: Local
User Accounts or Domain-mapped User Accounts.
Storage administrators can create local user accounts through the User
Management section of the Unisphere UI Settings window.
These user accounts are associated with distinct roles. Each account provides
user name and password authentication only for the system on which it was
created.
User accounts do not enable the management of multiple systems unless identical
credentials are created on each system.
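For reference, a local account can also be created from the Unisphere CLI; the sketch below assumes the /user/account object path and an operator role, with placeholder account names and passwords, and should be verified against the Unisphere CLI User Guide.
# Create a local user with the Operator role (account name and passwords are placeholders)
uemcli -d 192.168.1.230 -u Local/admin -p Password123! /user/account create -name monitor1 -type local -passwd Operator456! -role operator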
LDAP
Once the configuration is set, a storage administrator can create LDAP users or
LDAP Groups in the User Management section of the Unisphere Settings window.
These accounts use the user name and password that is specified on an LDAP
domain server. Integrating the system into an existing LDAP environment provides
a way to control user and user group access to the system through Unisphere CLI
or Unisphere.
The concept of a storage domain does not exist for Dell Unity systems.
There is no Global authentication scope.
The user authentication and system management operations are
performed over the network using industry standard protocols.
• Secure Socket Layer (SSL)
• Secure Shell (SSH)
Dell Unity storage systems have factory default management and service user
accounts. Use these accounts when initially accessing and configuring Unity.
These accounts can access both Unisphere and Unisphere CLI interfaces but have
distinct privileges of operations they can perform.
During the initial configuration process, it is mandatory to change the passwords for
the default admin and service accounts.
Tip: You can reset the storage system factory default account
passwords by pressing the password reset button on the storage
system chassis. Read the Unisphere Online Help and the Hardware
Information Guide for more information.
For environments with more than one person managing the Unity system, multiple
unique administrative accounts can be created.
• Different roles can be defined for those accounts to distribute administrative
tasks between users.
• Unisphere accounts combine a unique user name and password with a specific
role for each identity.
• The specified role determines the types of actions that the user can perform
after login.
This table describes each of the supported Dell Unity management user roles.
Storage Administrator: View the storage system data, edit the Unisphere settings, use the Unisphere tools, and create, modify, and delete storage resources.
Overview
Storage administrators can monitor Dell Unity systems (physical models and
UnityVSA) using Unisphere Central.
Storage administrators can remotely access the application from a client host, and
check their storage environment.
Interface Navigation
Unisphere Central interface showing the Navigation Pane and the Main Page
Administrators use a single interface to rapidly access the systems that need
attention or maintenance.
• Unisphere Central server obtains aggregated status, alerts and host details
from the monitored systems.
• The server also collects performance and capacity metrics, and storage usage
information.
The Navigation Pane on the left has the Unisphere Central options for filtering and
displaying information about the monitored storage systems.
The application displays all information for options selected from the navigation
pane on the Main Page. In the example, the Systems page shows the storage
systems the instance of Unisphere Central monitors.
The Unisphere Central user interface is built on HTML5 technology, with support
for a wide range of browsers2.
Configuration
To start monitoring a Dell Unity system, the storage administrator must configure
the storage array to communicate with Unisphere Central.
2 The compatible web browsers are: Google Chrome v33 or later, Microsoft Edge,
Mozilla Firefox v28 or later, and Apple Safari v6 or later.
Open the Unisphere Settings window, and then go to the Management section.
1. Select Unisphere Central.
2. Select Configure this storage system for Unisphere Central.
3. Enter the IP address of the Unisphere Central server.
− If the security policy on the Unisphere Central server was set to manual, no
further configuration is necessary.
4. Select the Use additional security information from Unisphere Central if the
security policy on the server is set to Automatic.
− Then retrieve the security information from the server.
5. Enter the security information configured on the Unisphere Central server, and
click Apply to save the changes.
Deep Dive: For more information, read the latest white paper on
Unisphere Central, available on the product support site.
Overview
The Dell EMC Unity system collects metrics at certain intervals and sends the data
to ESRS (for example, capacity metrics are collected every hour).
SRS infrastructure diagram: authorized, encrypted files are sent from the system
through external firewalls and the public Internet to the cloud.
Administrators can monitor supported systems and perform basic service actions.
• The feature enables the access to near real-time analytics from anywhere at
any time.
• The CloudIQ interface is accessible using a web browser from any location.
Interface Navigation
Navigation through the CloudIQ interface is done by selecting a menu option on the
left pane. The selected information is displayed on the right pane.
The Overview page widgets provide storage administrators with a quick check of
the overall status of the monitored systems.
• Default widgets include system health scores, cybersecurity risks, system alerts,
capacity approaching full, reclaimable storage, performance impacts, and
systems needing updates.
• The System Health widget provides a summary of the health scores of all the
monitored systems.
System Health
System Health page with filtered view of the monitored Unity systems
The System Health page is broken into the Storage, Networking, HCI, Service, and
Data Protection system categories.
The page uses the proactive health score feature to display the health of a single or
aggregated systems in the format of:
• A score that is shown as a number.
− Systems are given a score with 100 being the top score and 0 being the
lowest.
• A color that is associated with the score.
CloudIQ services running in the background collect data on each of the five
categories: components, configuration, capacity, performance, and data protection.
• A set of rules is used to determine the impact point of these core factors on the
system.
• The total score is based on the number of impact points across the five
categories.
Individual System
Selecting an individual system from the System Health view opens a summary
page with key system information divided into tabs: health, configuration, capacity,
and performance.
The summary landing page is the Health tab showing the system health score and
the total of issues that are found in each category.
From the Health tab, additional information about the cause can be retrieved by
selecting the affected category.
In this example, the Unity system Test_Dev is selected and the Health tab shows a
score of 60 (status = poor).
• The RED color is used to identify the system status and category with impact
points that are causing the condition.
• The problem is at the storage capacity level. There are three issues reported:
three storage pools are full.
Storage Pool
The Properties page shows detailed information including the system health status
at the capacity level, with the score impact points and the number of issues.
The Capacity tab displays the used and free capacity in the pool, and the time to
reach a full state.
The Performance tab enables a storage administrator to view the top performing
storage objects.
The STORAGE tab on the bottom of the page shows storage objects that are
created from the pool.
The VIRTUAL MACHINES tab shows information about the VMs using the
provisioned storage resource.
The DRIVES tab shows the number of drives, drive types, and capacity that is used
to create the storage pool.
Unisphere Settings
From the Unisphere Settings window, a storage administrator can configure the
global settings and parameters for a Dell Unity system.
Open the Settings configuration window by selecting its icon from the top menu.
Supported operations include:
• Monitor installed licenses.
• Manage users and groups that can access the system.
• Configure the network environment.
• Enable centralized management.
• Enable logging of system events to a remote log server.
• Start and pause FAST suite feature operations.
• Register support credentials, and enable Secure Remote Services.
• Create IP routes.
• Enable CHAP authentication for the iSCSI operations.
• Configure email and SNMP alerts.
The first screen of the Settings window is the License Information page.
1. Select a feature license from the list.
2. A description of the feature is displayed.
3. To obtain a product license, select Get License Online.
• Access the product page on the support site and download the license file.
• Transfer the license file to a computer with access to the storage system.
4. To unlock the Dell Unity features, select Install License.
The Dell Unity platform supports two methods for configuring the storage system
time:
• Manual setup
• Network Time Protocol (NTP) synchronization
4. Enter the IP address or the name of an NTP server and select Add on the
dialog box.
5. The NTP server is added to the list. Select Apply to save the changes.
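The same setting could be scripted from the Unisphere CLI; the sketch below assumes the /sys/ntp object path and the -addr option, which should be verified against the Unisphere CLI User Guide, and the server addresses are placeholders.
# Set two NTP servers (addresses are placeholders; object path and option are assumptions)
uemcli -d 192.168.1.230 -u Local/admin -p Password123! /sys/ntp set -addr "10.254.66.25,10.254.66.26"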
The Dell Unity platform supports a time zone configuration for snapshot schedules
and asynchronous replication throttling.
• The schedule time zone applies to system defined and user created snapshot
schedules.
• The selected schedule time zone reflects Coordinated Universal Time (UTC)3
adjusted by an offset for the local time zone.
3. Select Apply. A disclaimer message is displayed with a warning about the
impact of the change.
4. Select Yes to confirm the time zone change. The new schedule time zone is set
for the system.
3 Unity systems use Coordinated Universal Time (UTC) for the time zone setting
(operating system, logs, FAST VP, and so on).
Some Dell Unity features rely on network name resolution configuration to work.
For example, Unisphere alert settings.
To manually add the network address of DNS servers the storage system uses for
name resolution:
1. Select the DNS Server option under the Management section.
2. Select Configure DNS server address manually.
3. To open the Add DNS Server configuration window, select Add.
4. Enter the IP address of the DNS server and select Add.
5. The DNS server entry is added to the list. Select Apply to submit and save the
changes.
If running Unity on a network which includes DHCP and DNS servers, the system
can automatically retrieve one or more IP addresses of DNS servers.
• Select the Obtain DNS servers address automatically on the Manage
Domain Name Servers page.
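As a scripted alternative, the DNS servers could also be set from the Unisphere CLI; the object path and the -nameServer option in the sketch below are assumptions to verify against the CLI guide, and the addresses are placeholders.
# Manually configure two DNS servers (object path and option name are assumptions)
uemcli -d 192.168.1.230 -u Local/admin -p Password123! /net/dns/config set -nameServer "10.254.66.23,10.254.66.24"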
Administrators can view and modify the hostname, and the network addresses
assigned to the Dell Unity storage system.
The storage system supports both IPv4 and IPv6 addresses. Each IP version has
radio buttons to disable the configuration or to select a dynamic or static
configuration.
• If running the storage system on a dynamic network, the management IP
address can be assigned automatically by selecting the proper radio button.
• If enabling ESRS support for the Unity system, then Dell Technologies
recommends that a static IP address is assigned to the storage system.
To view or modify the network configuration of the Unity system management port:
1. Expand the Management section and select the Unisphere IPs option.
2. To manually configure an IPv4 network address, select Use a static IPv4
address (the default option).
3. Enter or modify the network address configuration: IP address, Subnet Mask,
Gateway.
4. Select Apply to submit and save the changes.
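The equivalent Unisphere CLI operation is sketched below; the /net/if/mgmt object path and its option names are assumptions to verify against the CLI guide, and all addresses are placeholders.
# Assign a static IPv4 management address (object path, options, and values are assumptions/placeholders)
uemcli -d 192.168.1.230 -u Local/admin -p Password123! /net/if/mgmt set -ipv4 static -addr 192.168.1.230 -netmask 255.255.255.0 -gw 192.168.1.1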
In Dell EMC Unity XT systems with dual SPs, when one of them has a problem or
is rebooting, the NAS servers that are hosted on the SP fail over to the other SP.
− When the option is disabled, you can manually fail back all NAS servers by
selecting Failback Now.
3. Select Apply to submit and save the changes. The Failback Policy is set for the
storage system.
Support Configuration
Proxy server configuration enables the exchange of service information for the Dell
Unity systems that cannot connect to the Internet directly.
To configure the Proxy Server settings, the user must open the Settings window
and perform the following:
1. Expand the Support Configuration section, and select Proxy Server.
2. The Connect through a proxy server checkbox must be selected.
3. Select the communication protocol: HTTP4 or SOCKS5.
4. Enter the IP address of the Proxy Server, and the credentials (username and
password) if the protocol requires user authentication.
• The SOCKS protocol requires user authentication.
5. Select Apply to save the changes.
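A CLI sketch of the same configuration is shown below; the /sys/support/proxy object path and option names are assumptions based on the settings described above, and the server address is a placeholder.
# Configure an HTTP proxy on the default port 3128 (object path and options are assumptions)
uemcli -d 192.168.1.230 -u Local/admin -p Password123! /sys/support/proxy set -enable yes -protocol http -addr 10.254.66.30 -port 3128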
Once configured, the storage administrator can perform the following service tasks
using the proxy server connection:
• Configure and save support credentials.
• Configure Secure Connect Gateway.
• Display the support contract status for the storage system.
• Receive notifications about support contract expiration, technical advisories for
known issues, software and firmware upgrade availability, and the Language
pack update availability.
4 The HTTP (nonsecure) protocol supports all service tasks including upgrade
notifications. This option uses port 3128 by default.
5 The SOCKS (secure) protocol should be selected for IT environments where
HTTP is not allowed. This option uses port 1080 by default and does not support
the delivery of notifications for technical advisories, software, and firmware
upgrades.
Support credentials are used to retrieve the customer's current support contract
information and keep it updated automatically.
• The data provides access to all the options to which the client is entitled on the
Unisphere Support page.
To configure the support credentials, the user must open the Settings page and
perform the following:
1. Expand the Support Configuration section, and select the Dell EMC Support
Credentials option.
2. Then enter the username and password in the proper fields. The credentials
must be associated with a support account.
3. Select Apply to commit the changes.
Up-to-date contact information ensures that Dell support has the most accurate
information for contacting the user in response to an issue.
Tip: The user receives system alert reminders to update the contact
information every six months.
Overview
SCG infrastructure diagram: Unity XT systems communicate with Dell support over
HTTPS through firewalls and the public Internet, using outbound and inbound
(two-way) communication.
Benefits:
• Dell support can remotely monitor configured systems by receiving
system-generated alerts.
• Support personnel can connect into the customer environment for remote
diagnosis and repair activities.
• Provides a high-bandwidth connection for large file transfers.
• Enables proactive Service Request (SR) generation and usage license
reporting.
• The service operates on a 24x7 basis.
Deployment Options
Secure Connect Gateway options available with the Dell Unity platform include an
embedded version and the Secure Connect Gateway virtual appliance edition.
The embedded version provides direct connectivity integrated into the Dell Unity XT
storage system.
• The Secure Connect Gateway software is embedded into the Dell Unity XT
operating environment (OE)6 as a managed service.
• The embedded version uses an on-array Docker container which enables only
the physical system to communicate with Support Center.
• The storage administrator can configure one way (outbound) or two way
(outbound/inbound) communication.
6 The Dell EMC Unity OE is responsible for persisting the configuration and the
certificates that are needed for Secure Connect Gateway to work.
Communication
There are two remote service connectivity options for the Integrated Secure
Connect Gateway version:
• Outbound/Inbound (default)
− This option enables remote service connectivity for remote transfers to and
from the Support Center and the Dell Unity XT system.
− Ports 443 and 8443 are required for outbound connections.
− Two-way Secure Connect Gateway is the recommended configuration.
• Outbound only
Comparison
Storage administrators can view the status and enable Secure Connect Gateway
from the Secure Remote Services page of the Unisphere settings.
− User contact information and specific credentials that are associated with
the site ID, which is associated with the system serial number.
− If there is a problem with the user Online Support account, support
personnel can help with the configuration using their RSA credentials.
Readiness Check
To verify if the storage system is ready for Secure Remote Services configuration,
select the Readiness Check option on the Secure Remote Services page.
In the Secure Connect Gateway Readiness Check window, select the Secure
Connect Gateway deployment option to configure.
Integrated
To verify if the Unity XT storage system is ready for an Integrated Secure Connect
Gateway deployment, select Integrated.
• Select the check box to configure two-way communication or uncheck it for
one-way communication, and advance to the next step.
• Before the readiness check runs, the end user license agreement (EULA) must
be accepted. Select Accept license agreement, and advance to the next step.
− If no errors are found, a successful message is displayed and you can select
Configure ESRS to close the check and advance to the configuration.
− However, if errors are displayed, a Check Again button is displayed and
you must resolve the issues before running a new check.
ESRS Readiness Check window showing the results of the readiness check
Centralized
After selecting the centralized option, enter the network address of the primary and
secondary Secure Connect Gateway servers.
− If no errors are found, a successful message is displayed and you can select
Configure ESRS to close the check and advance to the configuration.
− However, if errors are displayed, a Check Again button is displayed and
you must resolve the issues before running a new check.
ESRS Readiness Check window showing the results of the readiness check
Integrated SCG
To configure Integrated Secure Connect Gateway on the storage system, the user
must select Integrated on the Secure Remote Services page.
• Select the check box to configure two-way communication or uncheck it for
one-way communication, and advance to the next step.
Network Check
If a proxy server has been configured for the storage system, the server information
is displayed on this page.
• To add a proxy server or modify the configuration select the pencil icon beside
the Connect Through a Proxy Server option.
Configure ESRS wizard window showing the proxy configuration determined by the network check.
Contact Information
Verify the customer contact information and make any edits if required. Select
Next to advance to the next step.
Email Verification
In the email verification process, select the Send access code to start a request
for an access code.
• This option is unavailable if valid support credentials are not configured.
A message is sent to the contact email with a generated 8-digit PIN access code,
which is valid for 30 minutes from the time it is generated.
RSA Credentials
If there is a problem with the user Online Support account, Dell support can help
with the configuration by selecting the Alternative for Support Personnel only.
Then enter the RSA credentials and site ID in the proper fields to proceed with the
Secure Connect Gateway configuration. Select Next to continue.
The system starts initializing the Secure Connect Gateway. The Support Personnel
RSA credentials are requested once again to finish the configuration. A new token
code must be entered (only if the Alternative for Support personnel was invoked).
Results
The results page notifies that Secure Connect Gateway should be connected to the
Support Center in 15 minutes. The user can monitor the status of the Secure
Connect Gateway connectivity on the Service page, and configure Policy Manager
while waiting for Secure Connect Gateway to connect.
SCG Configured
The Secure Remote Services page shows the status of the connection and which
type of Secure Connect Gateway configuration is saved to the system.
Centralized SCG
Specify the Primary gateway network address of the Secure Connect Gateway
virtual appliance server that is used to connect to the Dell Enterprise.
• Ensure that port 9443 is open between the server and the storage system.
After you click Next, the system starts initializing Secure Connect Gateway.
Results
The results page indicates that Secure Connect Gateway should be connected to
the Support Center within 15 minutes. The user can monitor the status of the
Secure Connect Gateway connectivity on the Service page.
SCG Configured
The Secure Remote Services page shows the status of the connection and which
type of Secure Connect Gateway configuration is saved to the system.
Alerts are usually events that require attention from the system administrator.
Some alerts indicate that there is a problem with the Dell Unity system. For
example, you might receive an alert telling you that a disk has faulted, or that the
storage system is running out of space.
Unisphere UI with dashboard view and the three methods used for alerts monitoring
Alerts are registered to the System Alerts page in Unisphere. Access the page
using one of three methods:
• Select the link on the top menu bar.
• Select the option on the navigation pane.
• Select notification icons on the dashboard view block.
The view block on the dashboard shows an icon with the number of alerts for each
recorded severity category.
• The link on these icons opens the Alerts page, showing the records filtered by
the selected severity level.
System alerts with their severity levels are recorded on the System Alerts page.
Logging levels are not configurable.
The table provides an explanation about the alert severity levels from least to most
severe.
Information: An event has occurred that does not impact system functions. No
action is required.
Notice: An event has occurred that does not impact system functions. No action is
required.
Warning: An error has occurred that the user should be aware of but that does not
have a significant impact on the system. For example, a component is working, but
its performance may not be optimum.
Error: An error has occurred that has a minor impact on the system and should be
remedied, but it does not need to be fixed immediately. For example, a component
is failing and some or all of its functions may be degraded or not working.
Tip: Two of these severity levels are identified by the same icon and
refer to events that require no user intervention: Information and Notice.
Information alerts report the status of a system component or changes
to a storage resource condition. Notice alerts normally report the status
of a system process that is triggered by service commands.
There are multiple ways to review the health of a Dell Unity system. In the
Unisphere UI, the storage administrator can review the System Health view block
on the dashboard, and the System View page. The user can also check the Alerts
page for resolved issues.
The Alerts page shows the event log with all alerts that have occurred in the
system.
• Alert states are used to help the user determine which records are current, and
which records are resolved.
• An alert state changes when the software OE is upgraded, the error condition is
resolved, or the alert is repeating.
Active_Manual: Status when the alert is active and must be manually cleared. The
alert is still Active, and a user must deactivate the alert to mark it Inactive once the
condition is solved.
Active_Auto: Status when the alert is active but will be automatically cleared when
the issue is resolved. The alert is still Active and will be marked Inactive
automatically once the condition is cleared.
Inactive: Status when the alert is no longer active because it has been resolved.
The alert condition has been resolved.
Updating: Status when the alert is transitioning between the other states:
Active_Auto to Inactive, or Active_Manual to Inactive.
Manage Alerts
Filter
In Unisphere, the Alerts page is automatically filtered by default to show only the
records in Active and Updating states. Records in an Inactive state are hidden.
In the example, the records were also filtered to show only the log entries already
acknowledged by the user.
The dialog box is closed, and the record entry is marked as inactive. Because of
the page filtering, the record entry is not displayed in the list of entries.
To view detailed information about a system alert, select the alert from the list of
records of the Alerts Page.
Details about the selected alert record are displayed in the right pane. The
information includes:
• Time the event was logged.
• Severity level
• Alert message
• Description of the event
• Acknowledgement flag
• Component affected by the event
• Status of the component
The example shows the details of Alert_2721. Observe that the current status of
the alert is Degraded, and the state is Active_Auto. The alert will transition to
Inactive once the issue is resolved.
Email Address
In Unisphere, open the Settings configuration window and expand the Alerts
section:
1. Select Email and SMTP, under the Alerts section.
2. On the Specify Email Alerts and SMTP Configuration section, click the Add icon.
3. The Alert Email Configuration window opens. Enter the email address that
should receive the notification messages.
4. Select the severity level from the drop-down list, then select OK to save the
configuration.
• The dialog box closes, and the new email address is displayed in the list.
The example shows the configuration of the email address epratt@hmarine.test
as a recipient for notifications about issues with Notice and above severity levels.
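For scripted configuration, a Unisphere CLI sketch is shown below; the /event/alert/conf object path and the option names are assumptions based on the settings described above and should be verified against the CLI guide. The email address is the example recipient.
# Send alert notifications of Notice severity and above to the example address (options are assumptions)
uemcli -d 192.168.1.230 -u Local/admin -p Password123! /event/alert/conf set -emailToAddrs "epratt@hmarine.test" -emailSeverity notice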
SMTP Configuration
• Optionally select Send Test Email to verify that the SMTP server and
destination email addresses are valid.
• The Send Test Email button is only available after changes to the email
configuration.
SNMP Target
SNMP v3.0
Severity Level
Storage administrators can view information about all jobs, including the ones that
are active, complete, or failed. From the Jobs page the administrator can also
delete a completed or failed job, and cancel an active job (queued or running).
Select the Jobs icon on the top menu to quickly view the jobs in progress.
• The Jobs icon also helps determine the number of active jobs: queued or
running.
• The system polls for active jobs every 10 seconds and updates the count.
Inactive jobs older than seven days are automatically deleted from the list. Only the
most recent 256 jobs are listed. Inactive jobs have a status of completed or failed.
Administrators can also view information about the Dell Unity system logged events
by selecting Logs, under Events.
• Unisphere immediately displays real-time changes to the storage system.
• By default, the logged events are sorted by the time the event was posted, from
most recent to oldest.
The storage administrator can also customize the view and sort, filter, and export
the data. The event log list can be sorted by Date/Time: ascending or descending.
A link to the Remote Logging page in the Unisphere Settings window enables the
administrator to configure the logging of user/audit messages to a remote host.
Remote Logging
The Remote Logging setting enables a Dell Unity system to log user/audit
messages to a remote host. A remote host running syslog must be configured to
receive logging messages from the storage system before the user can enable this
feature in Unisphere.
To view and add a new remote logging configuration for another remote host,
perform the following steps:
1. Open the Unisphere Settings window, and select Remote Logging under the
Management section.
2. Select the Add icon. (A maximum of five remote logging configurations is
supported; if five configurations already exist, the Add icon is disabled.)
Add Configuration
4. Then select the protocol used to transfer log information (UDP or TCP).
5. Select OK to save the configuration.
a. Unisphere alerts are usually events that require attention from the system
administrator.
b. The alerts severity levels are categorized as Information, Notice, Warning,
Error, and Critical.
c. There are four states for alerts: Active_Manual, Active_Auto, Inactive, and
Updating.
d. Alert details provide time of the event, severity level, alert message,
description of the event, acknowledge flag, component affected by the
event, and status of the component.
e. Unisphere can be configured to send the system administrator alert
notifications via email or through an SNMP trap.
f. Users can monitor when jobs are active, complete, or failed. The jobs page
shows the number of active jobs: queued or running. The system polls for
active jobs every 10 seconds and updates the active jobs count.
g. Unisphere can be configured to send log message entries of a determined
severity level to a remote server.
The Dell Unity XT platform provides storage resources that are suited for the needs
of specific applications, host operating systems, and user requirements.
Diagram: Exchange mail servers, database servers, Linux/UNIX hosts, Windows
file servers, and VMware hosts access Dell Unity XT storage resources (LUNs,
VMFS and NFS datastores, and vVols provisioned from a storage pool and NAS
server) over iSCSI or FC and the NAS network, while Unisphere users connect
over the storage management network.
LUNs and Consistency Groups provide generic block-level storage to hosts and
applications.
• Hosts and applications use the Fibre Channel (FC) or the iSCSI protocol to
access storage in the form of virtual disks.
Other supported VMware datastore types are the vVol (Block) and vVol (File)
datastores. These storage containers store the virtual volumes, or vVols.
• vVol (File) datastores use NAS protocol endpoints and vVol (Block) datastores
use SCSI protocol endpoints for I/O communications from the host to the
storage system.
All storage resources on the Dell Unity XT platform are provisioned from unified
storage pools7.
Diagram: a storage pool provisioning a NAS server (root and config), NFS and
VMFS datastores, a vVol (Block) datastore, and LUNs.
7 A storage pool is a collection of drives that are arranged into an aggregate group,
with some form of RAID protection applied.
A storage administrator can modify the pool configuration to improve efficiency and
performance using the management interfaces. The administrator can also monitor
a pool's capacity usage, expand the pool, or delete a pool that is no longer in use.
Dell Unity XT HFA systems have multiple tiers of drives to select when building a
storage pool. Dell Unity XT AFA systems have a single SSD tier.
Tiers are a set of drives of similar performance. Each tier supports a single RAID
type and only certain drive types. The Dell Unity XT defined tiers are:
• Extreme Performance tier: SAS Flash drives
• Performance tier: SAS drives
• Capacity tier: NL-SAS drives
8 Fully Automated Storage Tiering for Virtual Pools or FAST VP is a feature that
relocates data to the most appropriate disk type depending on activity level. The
feature improves performance while reducing cost.
The Pools page shows the list of created pools with their allocated capacity,
utilization details, and free space.
Details about a pool are displayed on the right-pane whenever a pool is selected.
Dynamic pools are storage pools whose tiers are composed of Dynamic Pool
private RAID Groups.
All storage pools that are created on a Dell Unity XT physical system using
Unisphere are dynamic pools by default.
All Dell Unity storage resources and software features are supported on dynamic
pools.
Pool management operations in Unisphere, Unisphere CLI, and REST API are the
same for both dynamic and traditional pools.
Dynamic pools are not limited by traditional RAID technology, and provide
improved storage pool planning and provisioning, delivering a better cost per GB.
• Users can provision pools to a specific capacity without having to add drives in
specific multiples, for example, 4+1 or 8+1.
• Users can expand pools by a specific capacity, generally by a single drive,
unless crossing a drive partnership group.
• Different drive sizes can be mixed in dynamic pools.
Another major benefit is the time that it takes to rebuild a failed drive.
• Dynamic pools reduce rebuild times by having more drives engaged in the
rebuild process.
− Data is spread out to engage more drives.
• Multiple regions of a drive can be rebuilt in parallel.
Dynamic Pools can be created using any of the supported management interfaces:
Unisphere, UEMCLI, and REST API.
The user selects the RAID type for the dynamic pool, while creating it.
• Dynamic pools support RAID 5, RAID 6, and RAID 1/0.
The user may set the RAID width only when creating the pool with the UEMCLI or
REST API interfaces.
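A hedged sketch of that CLI path is shown below; the disk group and profile IDs are placeholders that would normally be discovered first, and the option names should be verified against the Unisphere CLI User Guide.
# List the storage profiles (a profile pairs a RAID type with a stripe width)
uemcli -d 192.168.1.230 -u Local/admin -p Password123! /stor/config/profile show
# Create a dynamic pool from seven drives of a disk group using the chosen profile (IDs are placeholders)
uemcli -d 192.168.1.230 -u Local/admin -p Password123! /stor/config/pool create -name "Pool_1" -descr "Dynamic pool" -diskGroup dg_16 -drivesNumber 7 -storProfile profile_22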
Dynamic pools also support the use of drives of the same type but different
capacities in the same drive partnership group.
• The rule applies to storage pool creation, storage pool expansion, and the
grabbing of unused drives if one of the pool drives fail.
• If the number of larger-capacity drives is not greater than the RAID width, the
larger drives' entire capacity is not reflected in the usable capacity.
Tip: Check the table with RAID width defined by the drive count
selected for each RAID level.
Diagram: each drive is split into drive extents (DE), with some extents reserved as
spare space. Drive extents are grouped into RAID extents (RE), RAID extents are
grouped into private RAID Groups, and a private LUN is created from each private
RAID Group.
The storage administrator must select the RAID type when creating a dynamic
pool.
The system automatically populates the RAID width which is based on the number
of drives in the system.
1. The example shows a RAID 5 (4+1) configuration in a drive partnership group.
2. At the physical disk level, the system splits the whole disk region into identical
portions of the drive called drive extents.
a. Drive extents hold a position of a RAID extent or are held in reserve as a
spare space.
b. The drive extents are grouped into a drive extent pool.
3. The drive extent pool is used to create a series of RAID extents. RAID extents
are then grouped into one or more RAID Groups.
4. The process creates a single private LUN for each created RAID Group by
concatenating pieces of all the RAID extents and striping them across the LUN.
a. The LUN is partitioned into 256 MB slices. The system distributes the slices
across many drives in the pool.
To create a dynamic pool using the Unisphere interface, the user must select
Pools under the Storage section on the navigation pane.
To launch the Create Pool wizard, select the + icon in the Pools page. Then the
user must follow the steps in the wizard window.
Enter the pool name and optionally a pool description, then advance the wizard
step.
Tiers
A storage administrator can change the RAID protection level for the selected tiers
and reserve up to two drives per 32 drives of hot spare capacity.
Drives
Select the number of drives from the selected tier to add to the pool (the step is
displayed here).
In the example, a minimum of 7 drives was selected to comply with the RAID width
(4+1) plus the two drives of hot spare capacity.
If the pool is used for the provisioning of vVol storage containers, the user can
create and associate a Capability Profile to the pool.
A Capability Profile is a set of storage capabilities for a vVol datastore. The feature
is discussed in VMware Datastores Provisioning.
Summary
The user can go back and change any of the selections that are made for the pool,
or click Finish to start the creation job.
Results
A green check mark with a 100% status indicates successful completion of the
task.
The properties page for both pool types includes the General, Drives, Usage, and
Snapshot Settings tabs. The properties page for a dynamic pool also includes a
RAID tab.
The General tab shows the type of pool, and the user can only change the pool
name and description.
The Drives tab displays the characteristics of the disks in the pool.
The Usage tab shows information about storage pool allocation. The information
includes the space storage resources use, and the free space. The Usage tab also
shows an alert threshold for notifications about the space remaining in the pool,
and a chart with the pool used capacity history.
• The space all snapshots and thin clones use in the pool is reported in the Non-
base Space field.
• The tab also displays the data reduction savings (in GB, percentage and
savings ratio) achieved by the deduplication and compression of supported
storage resources.
On the RAID tab (shown here), the user can view the drive types and number of
drives per drive type within the pool. The user can also check:
• The RAID protection level (RAID 5, RAID 6, or RAID 1/0)
• The stripe width of each drive type
• The hot spare capacity reserved based on the tier selection
A dynamic pool can expand up to the system limits by one or more drives under
most circumstances.
• Expansion is not allowed if the number of drives being added is more than
enough to fill a drive partnership group, but at the same time not enough to also
fulfill the minimum drive requirements to start another drive partnership group.
− The maximum size of a drive partnership group is 64 drives.
− The minimum number of drives to start a new partnership group is the RAID
width+1 requirement for spare space.
• When a new drive type is added, the pool must be expanded with a minimum
number of drives.
− The minimum of drives must satisfy the RAID width+1 requirement for spare
space.
When expanding with a drive count that is equal to or less than the stripe width, the
process is divided into two phases:
1. The dynamic pool is expanded by a single drive, and the free space is made
available to the user.
a. This process enables some of the additional capacity to be added to the
pool, but only if the single-drive expansion does not increase the amount of
spare space that the pool requires.
b. If the pool is running out of space, the new free space helps delay the pool
from becoming full.
2. The system automatically creates the private RAID Groups depending on the
number of drives added.
a. Space becomes available in the pool after the new RAID Group is ready.
b. Expanding by the RAID width+1 enables space to become available quickly.
For more information about expanding dynamic pools, refer to Dell EMC Unity:
Dynamic Pools white paper.
Adding Drives
When the user expands a dynamic pool, the number of added drives determines
the time in which the new space is made available. The reason is that the drive
extents are rebalanced across multiple drives.
If the number of drives that are added to the pool is equivalent to the existing drive
count (stripe width plus an extra drive):
• The drive count exceeds the Stripe width.
• The time for the space to be available matches the time that it takes to expand
a traditional pool.
RAID 5(4+1) + extra drive DE= Drive extent | Spare= Spare space | D1-12= Disks | RE= RAID extent
In this example, a RAID 5 (4+1) configuration with a RAID width of 5 is shown. The user then adds
the same number of drives to the current pool.
When extra drives increase the spare space requirement, a portion of the space
being added, equal to the size of one drive, is reserved. This space reservation can
occur when the spare space requirement for the drive type (one spare per 31
drives) is crossed.
Expansion Process
This expansion process creates extra drive extents. From the drive extents, the
system creates RAID extents and RAID Groups and makes the space available to
the pool as user space.
RAID 5(4+1) + extra drive DE= Drive extent | Spare= Spare space | D1-12= Disks | RE= RAID extent
The system runs a background process to rebalance the space across all the drives.
Adding capacity within a Drive Partnership Group causes the drive extents to
rebalance across the new space. This process includes rebalancing new, used,
and spare space extents across all drives.
The process runs in parallel with other processes and in the background.
• Balancing extents across multiple drives distributes workloads and wear across
multiple resources.
• This optimizes resource use, maximizes throughput, and minimizes response time.
RAID 5(4+1) + extra drive DE= Drive extent | Spare= Spare space | D1-12= Disks | RE= RAID extent
Observe that this graphic is an example and the actual algorithm is design-dependent.
When adding a single drive or fewer drives than the RAID width, the space
becomes available in about the same time that a proactive copy (PACO) operation
to the drive takes. If adding a single drive and the spare space boundary is
crossed, none of that drive's capacity is added to the pool's usable capacity.
The example shows the expansion of a dynamic pool with a single drive of the
same type. The process is the same as adding multiple drives.
With dynamic pools, the pool can be expanded based on a single drive capacity.
This method is more cost effective since pools can be expanded based on capacity
needs without the additional cost of drives. Example: 1 x 12 TB drive.
RAID 5 (4+1) + extra drive: extents moved to the new drive
In the example, the system first identifies the extents that must be moved off drives
to the new drive as part of the rebalance process. As the extents are moved, their
original space is freed up.
The expansion process continues to rebalance the extents to free space on the
new drive. The background process also creates free space within the pool.
RAID 5 (4+1)
This rule applies to storage pool creation and expansion, and the use of spare
space. However, different drive types, including SAS Flash drives with different
writes per day, cannot be in the same RAID Group.
The example displays a RAID 5 (4+1) configuration using mixed drive sizes.
Although Unisphere displays only 400 GB drives, users can select 800 GB drives to
add to the pool using the user interface.
In this configuration, only 400 GB of space is available on the 800 GB drive. The
remaining space is unavailable until the drive partnership group contains at least
the same number of 800 GB drives as the RAID width+1.
RAID 5 (4+1)
[5 x 400 GB] + [1 x 800 GB]
LEGEND
D1-6 = Drives DE = Drive extent
Depending on the number of drives of each capacity, dynamic Pools may or may
not use the entire capacity of the larger drives. All the space within the drives is
available only when the number of drives within a drive partnership group meet the
RAID width+1 requirement.
The example shows the expansion of the original RAID 5 (4+1) mixed drive
configuration by five drives. The operation reclaims the unused space within the
800 GB drive.
RAID 5 (4+1)
[5 x 400 GB] + [1 x 800 GB]
LEGEND
After adding the correct number of drives to satisfy the RAID width of (4+1) + 1, all
the space becomes available.
Observe that although these examples are possible scenarios, the best practice of
building pools with the same drive sizes and types should be followed whenever
possible.
In Unisphere, the Expand Pool wizard is launched by selecting the pool to expand
and clicking the Expand Pool button.
The wizard displays the tier that is used to build the pool and the tiers with available
drives to the pool.
• For Dynamic Pools, the available tiers are Extreme Performance tiers with
different SAS-Flash drives.
• The next step enables the user to select the number of drives from the tier with
available drives, to add to the Dynamic pool.
• The user can add a single drive or all the drives available in the tier.
In the example, one new disk is added to the pool, increasing the usable capacity by 280 GB.
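A CLI sketch of the same single-drive expansion is shown below; the pool and disk group IDs are placeholders, and the extend action options are assumptions to verify against the Unisphere CLI User Guide.
# Expand an existing dynamic pool by one drive from a disk group (IDs are placeholders)
uemcli -d 192.168.1.230 -u Local/admin -p Password123! /stor/config/pool -id pool_1 extend -diskGroup dg_16 -drivesNumber 1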
Dynamic pools use spare space to rebuild failed drives within the pool.
Spare space consists of drive extents that are not associated with a RAID Group,
used to rebuild a failed drive in the drive extent pool.
• Each drive extent pool reserves a specific percentage of extents on each disk
as the spare space.
• The percentage of reserved capacity varies based on drive type and the RAID
type that is applied to this drive type.
• If a drive within a dynamic pool fails, spare space within the pool is used.
Spare space is counted as part of the pool overhead, as with RAID overhead, and
therefore is not reported to the user.
• Spare space is also not part of the usable capacity within the pool for user data.
Drive Fail
When a pool drive fails, the spare space within the same Drive Partnership Group
as the failed drive is used to rebuild the failed drive.
A spare extent must be from a drive that is not already in the RAID extent that is
being rebuilt.
Rebuild Process
Drive extents are rebuilt using the spare space; spare drive space is consumed if
available.
The idea is to spread the extents across several drives and do so in parallel so
rebuild operations complete quicker.
• RAID extents are composed of drive extents from different drives. For that
reason, the drive extents being rebuilt target different drives which in turn,
engages more drives for the rebuild.
• The code ensures that multiple drive extents from the same RAID extents do
not end up on the same drive. This condition would cause a single point of
failure for the RAID extent.
This demo covers how to create and manage a Dynamic Pool on a Dell Unity XT
storage system.
Traditional pools are storage pools whose tiers are composed of Traditional RAID
Groups.
• Traditional RAID Groups are based on Traditional RAID with a single
associated RAID type and RAID width.
• Traditional RAID Groups are limited to 16 drives.
The configuration of a traditional storage pool involves defining the types and
capacities of the disks in the pool.
A storage administrator can define the RAID configuration (RAID types and stripe
widths) when selecting a tier to build a traditional storage pool.
• Each tier supports drives of a certain type and a single RAID level.
During the storage pool provisioning process, a storage administrator has the
option to select:
• More than one drive type to build a multitiered pool
• Just one drive type to create a single-tiered pool.
Administrators can also identify if the pool must use the FAST Cache feature, and
associate a Capability Profile for provisioning vVol datastores.
The FAST VP and Capability Profile features are discussed in more detail in the
Scalability and Performance section and the VMware Datastores section.
Tip: The Dell Unity XT platform uses dynamic pools by default, but
supports the configuration of traditional pools using UEMCLI or REST
API. UnityVSA only supports the configuration of traditional pools, using
the Unisphere interface.
Traditional pools consist of one or more traditional RAID Groups, which are built
from drives of a certain type.
• These RAID Groups are based on traditional RAID with a single associated
RAID type, RAID width, and are limited to 16 drives.
• These pools use dedicated hot spares and are only expanded by adding RAID
Groups to the pool.
1. A heterogeneous pool is created from a RAID 1/0 (2+2) group that is built from
the SAS Flash drives.
• The heterogeneous pool also includes a RAID 6 (4+2) group that is built with
HDDs.
2. A RAID Group Private LUN is created for each RAID Group.
3. These Private LUNs are split into contiguous array slices of 256 MB. Slices hold
user data and metadata. (FAST VP moves slices to the various tiers in the pool
using this granularity level.)
4. After the Private LUNs are partitioned out in 256 MB slices, they are
consolidated into a single pool that is known as a slice pool.
To create a new traditional storage pool using the Unisphere interface, select
Pools under the STORAGE section on the navigation pane.
Wizard Summary page with the configuration details for a homogeneous storage pool called
Performance Pool.
To launch the Create Pool wizard, select the add (+) icon in the Pools page.
Follow the steps in the wizard window:
• Enter the pool Name and the pool Description.
• The wizard displays the available storage tiers. The user can select the tier and
change the RAID configuration for the selected tier.
• Select whether the pool is supposed to use FAST Cache.
• Select the number of drives from the tier to add to the pool.
• If the pool is used for the provisioning of vVol storage containers, the user can
create and associate a Capability Profile to the pool. A Capability Profile is a
set of storage capabilities for a vVol datastore.
The properties for both a traditional or a dynamic pool include the General, Drives,
Usage, and the Snapshot Settings tabs. The properties page for traditional pools in
hybrid systems includes a tab for FAST VP.
Pool Properties window with numbered callouts 1 to 6, described below.
1: The General tab shows the type of pool (traditional and dynamic), and the user
can only change the pool name and description.
2: The Drives tab displays the characteristics of the disks in the pool.
3: On the FAST VP tab, it is possible to view the data relocation and tier
information. The FAST VP tab is displayed only for traditional pools on hybrid and
the virtual storage systems.
4: The Usage tab shows information about the storage pool allocation:
• The total amount of space that is allocated to existing storage resources and
metadata. This value does not include the space that is used for snapshots.
• The Used field displays the pool total space that is reserved by its associated
storage resources. This value includes the space the thin clones and snapshots
use. This value does not include preallocated space.
• The Non-base Space field displays the space that is used by all snapshots and
thin clones in the pool.
• The Preallocated Space field displays the amount of remaining space in the
pool that is reserved for, but not actively being used by, a storage resource.
• The Free field shows the amount of unallocated space that is available for
storage resources consumption that is measured in TB. The percentage of free
capacity is also displayed by hovering over the pool graphic of the current pool
capacity.
• Alert threshold, which is the percentage of storage allocation at which
Unisphere generates notifications about the amount of space remaining in the
pool. You can set the value between 50% and 84%.
• The Data Reduction Savings shows the amount of space that is saved when the
feature is enabled on the pool. The savings is displayed as a size, percentage,
and a proportion ratio. The value includes savings from compression and
deduplication.
• The Pool used capacity history is a chart graphic with pool consumption over
time. The user can verify the used capacity at a certain point in time by hovering
over different parts of the graphic.
5: The Storage Resources option on the Usage tab provides a list of the storage
resources in the pool, along with the applicable pool utilization metrics.
6: On the Snapshot Settings tab, the user can review and change the properties
for snapshot automatic deletion.
A storage pool that needs extra capacity can be expanded by adding more disks to
the storage tiers of the pool.
In the example, five new disks from a performance tier are added to the pool. Their additional 1.7
TB of capacity increases the pool's total usable capacity to 13.9 TB.
In Unisphere, select the pool to expand at the Pools page, and then select Expand
Pool. Follow the wizard steps:
• On the Storage Tiers step, select the tiers for the drives you want to add to the
pool.
− If you are adding another tier to the storage pool, you can select a different
RAID configuration for the disks in the tier.
• On the Drives step, select the number of drives to add to each tier selected on
the previous step.
• Review the pool configuration on the Summary page (the page is displayed
here). Click Finish to start the expansion job.
• The results page shows the status of the job. A green check mark with a 100%
status indicates successful completion of the task.
Block Storage
Block storage resources that are supported by the Dell Unity XT platform include
LUNs, consistency groups, and clones.
• A LUN or logical unit represents a quantity of block storage that is allocated for
a host. You can allocate a LUN to more than one host if you coordinate the
access through a set of clustered hosts.
• A Consistency Group is an addressable instance of LUN storage that can
contain one or more LUNs (up to 50). Consistency Groups are associated with
A storage administrator can provision LUNs to SAN hosts using any of the
supported management interfaces.
In Unisphere, select Block under the Storage section to provision and manage
LUNs.
The page shows the list of created LUNs with their size in GB, allocated capacity,
and the pool they were built from.
To see the details about a LUN, select it from the list. The LUN details are
displayed on the right pane.
From the LUNs page, you can create a LUN, view the LUN properties, modify
some settings, and delete an existing LUN.
• Before creating a LUN, at least one pool must exist in the storage system.
• The deletion of a LUN that has host access that is configured is not allowed.
• The administrator must manually remove host access before deleting the LUN.
Launch Wizard
To create LUNs select the add (+) icon from the LUNs page to launch the wizard.
Then follow the Create LUNs wizard steps to complete the configuration.
Define the characteristics of the LUN (or multiple LUNs) to create in the storage
system.
Access
The storage administrator can associate the LUNs with an existing host or host
group configuration.
To associate a host configuration select the add (+) icon and then choose the host
profile.
Snapshot
Local data protection can be configured for one or more LUNs at the time of the
creation.
Replication
A storage administrator can also configure remote data protection on a LUN at the
time of its creation or later.
The administrator can optionally enable the replication of the snapshot schedule to
the destination, and overwrite destination resource.
The Summary page shows all the selections made for the new LUN (or multiple
LUNs) for a quick review.
The storage administrator can go back to make changes or select Finish to start
the creation job.
In the example, a snapshot schedule and replication were not enabled for the single LUN being created, and no host configuration was assigned for LUN access.
Results
The Results of the process are displayed on the last page. Select OK to close the
wizard.
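LUN provisioning can also be scripted with the Unisphere CLI. The following hedged sketch assumes a pool with the ID pool_1 and uses illustrative name and size values; the object path and qualifiers should be confirmed in the Unisphere CLI User Guide.
uemcli -d <mgmt_ip_address> -u Local/admin -p <password> /stor/prov/luns/lun create -name LUN01 -pool pool_1 -size 100G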
List of LUNs
LUN Properties
The General tab of the LUN properties shows both the storage object capacity
utilization details and the free space.
− For LUNs built from a multi-tiered pool, the properties window also includes
a FAST VP tab.
The Consistency Groups page shows the list of created groups with their size in
GB. The page also shows the number of pools that are used to build each group
and the allocated capacity.
To see the details about a Consistency Group, select it from the list and its details
are displayed on the right-pane.
From this page, create a group, view its properties, modify some settings, or delete
an existing Consistency Group.
• The deletion of a Consistency Group that has configured host access is not
allowed.
• The storage administrator must manually remove host access before deleting
the Consistency Group.
Launch Wizard
To create a Consistency Group select the add (+) icon from the Consistency
Groups page to launch the wizard. Then follow the Create a Consistency Group
wizard steps.
Launching the Create a Consistency Group wizard from the Consistency Groups page
Enter a name and provide an optional description for the consistency group.
Select the add (+) icon to configure the Consistency Group member LUNs. There
are two options: selecting existing LUNs or creating new ones.
When adding existing LUNs, the group members can be from different pools.
The selected LUNs are included in the list and marked with the Add action.
The Create new LUNs option launches the Configure LUN wizard. Define the characteristics of the member LUNs: name, size, and pool to use.
• The LUN identity is a combination of the name and a sequenced number.
The storage administrator can associate the Consistency Group with an existing
host or host group configuration.
To associate a host configuration select the add (+) icon and then choose the host
profile.
Local data protection can be configured for the Consistency Group at the time of
the creation.
Snapshots are covered in more detail in the Data Protection with Snapshots section.
Replication
A storage administrator can also configure remote data protection for the
Consistency Group at the time of its creation or later.
Summary
The Summary page shows all the selections made for the new Consistency Group
for a quick review.
Summary showing LUNs set to be added to the Consistency Group and LUNs set to be created with the Consistency Group. Existing LUNs are added once the group is created; new LUNs are created and added once the group is created.
In the example, a snapshot schedule and replication were not enabled for the Consistency Group, and no host configuration was assigned for access.
Results
The Results of the process are displayed on the last page. Select OK to close the
wizard.
To view and modify the properties of a Consistency Group, select the edit (pencil)
icon.
The General tab of the Consistency Group properties page depicts its utilization
details and free space.
The LUNs tab shows the LUNs that are part of the Consistency Group. The
properties of each LUN can be viewed and a LUN can be removed from the group
or moved to another pool. LUNs can also be added to a Consistency Group. Host
access to the Consistency Group member LUN can also be removed to enable the
deletion of the storage resource.
The other tabs of the Consistency Group properties window enable the user to
configure and manage local and remote protection, and advanced storage features.
Host access to Dell Unity XT block storage and VMware datastores requires host
connectivity to the storage system with supported protocols.
Hosts connect over iSCSI or Fibre Channel to Dell Unity XT block storage resources: LUNs, Consistency Groups (CG), and VMware block datastores (VMFS and vVol (Block)).
Storage resources must be provisioned on the Dell Unity XT storage system for the
host.
Preparing the storage is accomplished by creating disk partitions and formatting the
partition.
Some requirements must be met before you configure hosts to access the storage
system.
To connect hosts to the storage system, ensure that these requirements are
fulfilled:
• Configure the storage system for block storage provisioning.
− Prepare supported front-end interfaces for host connectivity.
− Configure LUNs or consistency groups using any of the supported
administrative interfaces.
• The host must have an adapter to communicate over the storage protocol.
− In the Fibre Channel environments, a host has a host bus adapter (HBA).
− For iSCSI, a standard NIC can be used.
• Multipathing software is recommended to manage paths to the storage system.
− If one of the paths fails, it provides access to the storage.
• In a Fibre Channel environment, users must configure zoning on the FC
switches.
− In iSCSI environments, initiator and target relationships must be established.
• To achieve the best performance:
− In iSCSI environments, the host should be on a local subnet with each iSCSI
interface that provides storage for it.
− To achieve maximum throughput, the iSCSI interface and the hosts should
have their own private network.
− In a multipath environment, the SP physical interfaces must have two IP
addresses (one IP address assigned to the same port on each SP). The
interfaces should be on separate subnets.
• After connectivity has been configured, the hosts must be registered with the
Dell Unity XT storage array.
High-availability connectivity example: host connections on two subnets (192.124.1.100 and 192.124.2.100) and through redundant FC switches to ports 0-3 on SP A and SP B.
Directly attaching a host to a Dell Unity XT system is supported if the host connects
to both SPs and has the supported multipath software.
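As a hedged host-side illustration, on a Linux host that runs the native device-mapper multipathing software, the discovered paths to the storage system can be listed with the command below; the output format varies by distribution and configuration.
multipath -ll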
HBAs are initiators that are used to discover and connect to the storage system
target ports.
Depending on the type of HBA being used on the host (Emulex, QLogic, or
Brocade), users can install HBA utilities to view the parameters. Utilities are
downloaded from the respective vendor support pages and are used to verify
connectivity between HBAs, and the arrays they are attached to.
Fibre Channel HBAs should be attached to dual fabrics for high availability.
For the iSCSI connectivity, a software or hardware iSCSI initiator must be used.
The Dell Unity XT 380/380F models support two embedded ports per SP for connectivity over Fibre Channel, and Fibre Channel I/O modules.
• The FC embedded ports enable connectivity at 4 Gb/s, 8 Gb/s, or 16 Gb/s.
• The 4-port, 16 Gb FC expansion I/O module enables connectivity at 4 Gb/s, 8 Gb/s, or 16 Gb/s.
• The 4-port, 32 Gb FC expansion I/O module enables connectivity at 8 Gb/s, 16 Gb/s, or 32 Gb/s.
The Dell Unity XT 480/480F, and higher models support block storage connectivity
with Mezz Card configurations and expansion I/O modules.
Mezz Cards are only available on the Unity XT 480/480F, 680/680F, and 880/880F
storage arrays, with these configurations.
• The 4-port SFP Mezz Card serves Ethernet traffic and the iSCSI block protocol.
− The card provides SFP+ connection to a host or switch port with connectivity speeds of 1 Gb/s, 10 Gb/s, or 25 Gb/s.
• The 4-port 10 Gb BaseT Mezz Card serves Ethernet traffic and the iSCSI block protocol.
Fibre Channel interfaces are created automatically on the Dell Unity XT storage
systems.
Information about the SP A and SP B Fibre Channel I/O modules and a particular
Fibre Channel port can be verified using Unisphere or uemcli commands.
In Unisphere, verify the Fibre Channel interfaces by selecting the Fibre Channel
option under the Access section of the Settings configuration window.
The Fibre Channel Ports page shows details about I/O modules and ports. The
World Wide Names (WWNs) for the Dell Unity XT system Fibre Channel ports are
unique.
To display information about a particular FC port, select it from the list and select
the edit link.
Initiators are endpoints from which FC and iSCSI sessions originate. Each initiator
has a unique worldwide name (WWN) or iSCSI qualified name (IQN).
Any host bus adapter (HBA) can have one or more initiators that are registered on
it.
• Some HBAs or CNAs have multiple ports. Each HBA or CNA port that is zoned
to an SP port is one path to that SP and the storage system containing that SP.
• Each path consumes one initiator record. An HBA or CNA port can be zoned
through different switch ports to the same SP port or to different SP. The result
provides multiple paths between the HBA or CNA port and an SP.
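As a host-side cross-check of the identifiers that appear in Unisphere, a Linux host can display its initiator names with the commands below (a sketch; file locations can vary slightly by distribution).
cat /etc/iscsi/initiatorname.iscsi        # IQN of the software iSCSI initiator
cat /sys/class/fc_host/host*/port_name    # WWPNs of the Fibre Channel HBA ports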
Dell Unity XT storage systems support both Fibre Channel and iSCSI host
identifiers registration.
• A Unity XT system registers all paths that are associated with an initiator rather than managing individual paths.
• After registration, all paths from each registered initiator automatically have
access to any storage provisioned for the host.
• Multiple paths ensure a highly available connection between the host and array.
• You can manually register one or more initiators before you connect the host to
the storage system.
The number of connections between servers and a storage system is limited. Each Unity XT model supports a maximum number of initiator records per storage-system SP, and the limit is model-dependent.
Failover software running on the server may also limit the number of paths.
Registration
In Unisphere, verify the host initiators by selecting Initiators under the ACCESS
section. The Initiators page is displayed.
In the example, two FC host initiators display a green check with a blue icon, indicating that they are logged in but are not associated with a host. This icon is displayed when the connected FC ports are used for replication purposes. The other initiators are iSCSI, as shown by the IQN identifier.
Initiator Paths
The link between a host initiator and a target port on the storage system is called
the initiator path.
In Unisphere, verify the host initiator paths by selecting the Initiator Paths tab from
the Initiators page.
iSCSI Interfaces
iSCSI interfaces in the Dell Unity XT and UnityVSA systems enable hosts to access
the system block storage resources using the iSCSI protocol.
• A storage administrator can associate an iSCSI interface with one or both SPs.
• Multiple iSCSI interfaces can coexist on each Storage Processor (SP).
− These iSCSI interfaces become the available paths that hosts with the proper privileges can use to access the relevant storage resources.
To view and manage the iSCSI interfaces select Block under the STORAGE
section in Unisphere, and open the iSCSI Interfaces tab.
• The iSCSI interfaces page shows the list of interfaces, the SP, and Ethernet
ports where they were created, and the network settings.
• The page also shows the interfaces IQN (iSCSI Qualified Name) in the last
column.
From this page, it is possible to create, view and modify, and delete an iSCSI
interface.
The example shows iSCSI interfaces that are created for Ethernet Port 2 on SPA
and SPB and Ethernet Port 3 on SPA and SPB.
Creating Interface
To view and modify the properties of an iSCSI interface, select the interface and
the edit icon. The Edit iSCSI Network interface window is launched.
A storage administrator can change the network settings for the interface and
assign a VLAN.
To assign a VLAN, click the Edit link to enter a value (1 through 4094) to be
associated with the iSCSI interface.
On a Dell Unity XT system, you can require all hosts to use CHAP authentication
on one or more iSCSI interfaces.
To require CHAP authentication from all initiators that attempt access to the iSCSI
interface, open Settings and go to the Access section.
• Select Enable CHAP Setting.
− The system denies access to storage resources of this iSCSI interface from
all initiators that do not have CHAP configured.
• To implement the global CHAP authentication, select Use Global CHAP and
specify a Username and Global CHAP Secret.
− Mutual CHAP authentication occurs when the hosts on a network verify the
identity of the iSCSI interface by verifying the iSCSI interface mutual CHAP
secret.
− Any iSCSI initiator can be used to specify the "reverse" CHAP secret to
authenticate to the storage system.
− When mutual CHAP Secret is configured on the storage system, the CHAP
secrets are shared by all iSCSI interfaces that run on the system.
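On a Linux host that uses the open-iscsi software initiator, target discovery and login follow the pattern sketched below. The interface address is a placeholder, and if CHAP is required the CHAP credentials must be configured on the initiator before the login is attempted.
iscsiadm -m discovery -t sendtargets -p <iscsi_interface_ip>:3260
iscsiadm -m node --login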
Different host iSCSI initiator options are supported for connectivity with Dell Unity
XT storage systems.
Comparison of iSCSI initiator options and their protocol stacks: a software iSCSI initiator on a standard NIC, a NIC with a TCP/IP Offload Engine (TOE), and a hardware iSCSI HBA.
With a software iSCSI initiator, the only server hardware that is needed is a Gigabit
networking card. All processing for iSCSI communication is performed using server
resources, such as processor, and to a lesser extent, memory. The processor
handles the iSCSI traffic consuming resources that could be used for other
applications. Windows has an iSCSI initiator that is built into the operating system.
On the other hand, a hardware iSCSI initiator is an HBA that is displayed to the
operating system as a storage device. The iSCSI HBA handles the processing
instead of server resources minimizing resource use on the server hardware.
Hardware iSCSI HBAs also enable users to boot a server from the iSCSI storage,
something a software iSCSI initiator cannot do. The downside is that iSCSI HBAs
typically cost 10 times what a Gigabit NIC would cost, so you have a cost vs.
functionality and performance trade-off.
There is a middle ground, though. Some network cards offer TCP/IP Offload
Engines (TOE) that perform most of the IP processing that the server would
normally perform. TOEs lessen the resource overhead associated with software
iSCSI because the server only processes the iSCSI protocol workload.
Within iSCSI, a node is defined as a single initiator or target. iSCSI nodes identify
themselves by an iSCSI name. iSCSI names are assigned to all nodes and are
independent of the associated address. An iSCSI node name is also the SCSI
device name. These definitions map to the traditional SCSI target/initiator model.
− Each host connected to an iSCSI storage system must have a unique iSCSI initiator name for its initiators.
− Multiple hosts connected to the iSCSI interface must not use the same iSCSI initiator name.
An iSCSI initiator connects to an external iSCSI-based storage array through an Ethernet network.
Example:
• Host: iqn.1994-05.com.linux1:2246335fe96e
• Target Port: iqn.1992-04.com.emc:cx.virt1619dzgnh7.a0
Note: The Linux iSCSI driver gives the same name to all NICs in a
host. This name identifies the host, not the individual NICs. When
multiple NICs from the same host are connected to an iSCSI interface
on the same subnet, only one NIC is used. The other NICs are in
standby mode. The host uses one of the other NICs only if the first
NIC fails.
Registration
For the iSCSI initiators, an iSCSI target port can have both a physical port ID and a
VLAN ID. In this case, the initiator path is between the host initiator and the virtual
port.
The example shows two iSCSI initiators (a Windows host and an ESXi host).
• As with FC, any host with a green and white check icon is registered and associated with an initiator.
• The yellow triangle indicates that the initiator has no logged-in initiator paths, and the connections should be checked.
In Unisphere, verify the host initiator paths by selecting the Initiator Paths tab from
the Initiators page.
The example displays the initiator paths for host Win12b. The initiator shows two
paths, one to each SP. The Target Ports are the hardware Ethernet ports on the
array.
Host configurations are logical connections through which hosts or applications can
access storage resources. They provide storage systems with profiles of the hosts
that access storage resources using the Fibre Channel or iSCSI protocols (block).
Before a host can access block storage, you must define a configuration for the
host and associate it with the provisioned storage resource.
Hosts FC
iSCSI
Host Configuration
Host configurations define how the hosts connect to the Unity system
The relationship between a host and its initiators enables block storage resources on the Dell Unity XT to be assigned to the correct machine.
Tip: Host profiles are also used to identify NAS clients that access
storage resources using the NFS protocol (file). The configuration of
these profiles is discussed in the File Storage Provisioning section.
Management
To manage the hosts configuration profiles, select Hosts from the Access section
of Unisphere.
From the Hosts page, users can create, view, modify, and delete a host
configuration.
To see the details about a host configuration, select it from the list and the details
about the host profile are displayed on the right-pane.
To create a host configuration, click the + icon, and select the Host as the profile to
create.
The initiators are registered, and the host added to the list. In the example, the
Windows server WIN16B was added with two Fibre Channel initiators registered.
The properties of a host configuration can be invoked by selecting the host and the
edit icon.
• The General tab of the Host properties window enables changes to the host
profile.
• The LUNs tab displays the LUNs provisioned to the host.
• The Network Addresses tab shows the configured connection interfaces.
• The Initiators tab shows the registered host initiators.
• The Initiator Paths tab shows all the paths that are automatically created for
the host to access the storage.
Unisphere
Host access to provisioned block storage is specified individually for each storage
resource.
To configure host access, open the storage resource properties window and go to
the Host Access tab.
• The tab displays the host configurations that are currently associated with the storage resource.
Host
To access a block storage resource from the host, the storage resource must be
made available to the host.
Verify that each host is associated with an initiator IQN or WWN record by viewing
the Access > Initiators page.
Perform a scan of the bus from the host to discover the resource. Different utilities
are used depending on the operating system.
• With the Linux systems, run a SCSI scan function or reboot the system to force
the discovery of new devices.
• Use a Windows or Linux/UNIX disk management tool such as Microsoft Disk
Manager to initialize and set the drive letter or mount point for the host.
Format the LUN with an appropriate file system type, for example, FAT or NTFS. Alternatively, configure the applications on the host to use the LUN directly as a storage device.
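A hedged Linux walk-through of these host-side steps might look like the following; the multipath device name mpatha and the mount point are hypothetical placeholders.
rescan-scsi-bus.sh                 # or: echo "- - -" > /sys/class/scsi_host/host0/scan
mkfs.xfs /dev/mapper/mpatha        # create a file system on the new multipath device
mkdir -p /mnt/lun01
mount /dev/mapper/mpatha /mnt/lun01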
Host Group
A host group is a logical container which groups multiple hosts and block storage
resources.
• Hosts in a host group have access to the same LUNs or VMFS datastores. The
feature provides each host in the group the same type of access to the selected
block resources.
• A host can only be part of a single host group at a time.
• There are two types of host groups set at creation.
− General: Used to provide LUN access to multiple hosts.
− ESX: Used to provide VMFS datastore access to multiple ESXi hosts.
When creating a host group, a storage administrator has the option to choose
between the types of hosts to group: General and ESX.
Existing host configurations can be added to the host group, or new hosts created
within the wizard.
• Each host has a unique HLU for each LUN it accesses, whether or not that host
is part of a host group.
Host group example: LUNs 1-4 within the host group are accessible by all member hosts.
Host group example: host access to LUN 5 is granted only to Host 4; Hosts 1-3 have no access.
A host group does not restrict a resource from being attached to a subset of
hosts contained within a host group.
When adding the host access of a host group member host to a storage
resource:
• Access is granted only to the individual member host added.
• The other host group members have no access to the storage resource.
Management
To manage the hosts groups, select Hosts from the Access section of Unisphere.
From the Host Groups page, users can create, view, modify, and delete a host
group.
To see the details about a host group, select it from the list and the details about
the host group are displayed on the right-pane.
To create a host group, select the Add (+) icon from the page menu.
The Create a Host Group wizard is launched. Follow the wizard steps:
• Type a name for the host group, and select the type of host group: General or
ESX.
• On the Configure Hosts section, select the + icon:
− The Add Hosts to Host Group window is launched with the list of existing
host configuration profiles.
− Select the checkbox of the hosts to add or select More Actions and Create
New Host to create a new host configuration profile.
− Click OK to associate the host configuration profiles with the host group. The
Add Hosts to Host Group window is closed and the hosts are listed.
• Select the Merge existing LUNs of selected hosts into host group checkbox
if adding existing host configuration profiles. Advance the wizard step.
• The list of merged LUNs or VMFS datastores that are now accessible to all the members of the host group is displayed.
− You can also select + to create new storage resources and automatically
make them available to all the hosts in the group.
− In the example, a host group (type ESX) was created to aggregate ESXi
hosts esxi-1.hmarine.test and esxi-2.hmarine.test.
− The VMFS datastore datastore05 that was originally provisioned to esxi-
1.hmarine.test was merged into the host group.
− The VMFS datastore is accessible to both ESXi hosts.
View/Modify Hosts
The properties of a host group can be invoked by selecting the host group and
clicking the edit icon.
The General tab of the host group properties window shows the status and the type
of host group.
• The tab also enables the change of the host group name and description.
To view/modify the host members of a host group, select the Hosts tab.
• The page shows the list of hosts added to the host group.
• The list also shows the number of LUNs that each host has access to.
To view/modify the storage resources that are accessible by the member of the
host group, select the LUNs (shown here) or the Datastores tab.
Host Group LUNs tab showing the LUNs associated with the group
• The page shows the list of storage resources that the hosts within the host group have access to.
• The list also shows the level of access:
− All Access means that all the hosts in the group were granted access to the
storage resource.
− Some Access means that only specific hosts within the group were granted
access to the storage resource.
• The user can select a storage resource and remove all access that is assigned
to the host group.
File Storage
File storage in the Dell Unity platform is a set of storage resources that provide file-
level storage over an IP network.
SMB and NFS shares are created on the storage system and provided to Windows,
Linux, and UNIX clients as a file-based storage resource. Shares within the file
system draw from the total storage that is allocated to the file system.
File storage support includes NDMP backup, virus protection, event log publishing,
and file archiving to cloud storage using CTA as the policy engine.
The components in the Dell Unity platform that work together to provision file-level
storage include:
• NAS Server
− Virtual file server that provides file resources on the IP network (to which
NAS clients connect).
− Exportable access point to file system storage that network clients can use for file-based storage.
− The file-based storage is accessed through the SMB/CIFS or NFS file
access protocols.
• NAS servers are software components that provide file data transfer and
connection ports for users, clients, and applications that access the storage
system file systems.
− Communication is performed through TCP/IP using the storage system
Ethernet ports.
− The Ethernet ports can be configured with multiple network interfaces.
• Before provisioning a file system over SMB or NFS, an NFS datastore or a File
(vVol) datastore, a NAS server must be running on the system.
− The NAS server must be appropriate for managing the storage type.
− NAS servers can provide multiprotocol access for both UNIX/Linux and
Windows clients simultaneously.
• NAS servers retrieve data from available disks over the SAS backend, and
make it available over the network using the SMB or NFS protocols.
To manage NAS servers in Unisphere, select Storage > File > NAS Servers.
The NAS Servers page shows the list of created NAS servers, the SP providing
the Ethernet port for communication, and the Replication type, if any are
configured.
From the NAS Servers page, a storage administrator can create a NAS server,
view its properties, modify some settings, and delete an existing NAS server.
To see the details about a NAS server, select it from the list and its details are
displayed on the right-pane.
To create a NAS server, select Add (+) from the NAS Servers page, and follow the
Create a NAS Server wizard steps.
Enter a name for the NAS server. Select the storage pool to supply file storage.
Select the Storage Processor (SP) where you want the server to run. It is also
possible to select a Tenant to associate with the NAS Server. IP multitenancy is a
feature that is supported by Dell Unity systems.
In the next step, configure the IP interfaces used to access the NAS server. Select
the SP Ethernet port that you want to use and specify the IP address, subnet
mask, and gateway. If applicable select a VLAN ID to associate with the NAS
server. VLAN ID should be configured only if the switch port supports VLAN
tagging. If you associate a tenant with the NAS server, you must select a VLAN ID.
In the Configure Sharing Protocols page select the protocols the NAS server
must support:
• Windows shares: If you configure the NAS server to support Windows shares (SMB, CIFS), specify an SMB hostname and a Windows domain. You can join the NAS server to an Active Directory domain or configure it as a standalone SMB server.
The NAS server DNS can be enabled on the next page. For Windows shares, enable the DNS service, add at least one DNS server for the domain, and enter the domain suffix.
The configuration of remote replication for the NAS Server is also available from
the wizard.
Review the configuration from the Summary page and click Finish to start the
creation job.
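Once created, the NAS server can also be listed from the Unisphere CLI. The sketch below assumes the /net/nas/server object path; verify the exact syntax in the Unisphere CLI User Guide.
uemcli -d <mgmt_ip_address> -u Local/admin -p <password> /net/nas/server show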
To view and modify the properties of a NAS Server, select the server and the edit
icon.
From the General tab of the properties window, you can view the associated pool,
SP, and supported protocols. It is also possible to change the name of the NAS
Server and change the SP ownership. Selecting which NAS servers run on each
SP balances the performance load on the Storage Processors.
The General tab also displays the configured interfaces and their associated roles.
Possible roles are Production or Backup and DR Test.
From the Network tab, you can view and modify the properties of the associated
network interfaces. New interfaces can be added, and roles defined. Existing
interfaces can be deleted. From this tab, it is also possible to change the preferred
interface, view and define network routes, and enable advanced networking
features such as Packet Reflect.
If you have configured multiple interfaces for a NAS server, the system automatically selects the interface that the default route uses for outgoing communication. This interface is identified as the preferred interface.
When a NAS server starts outbound traffic to an external service, it compiles a list of all the available network interfaces on the proper subnet. It then performs one of the following actions if a preferred interface of the appropriate type (IPv4 or IPv6) is in the compiled list:
• If the preferred production interface is active, the system uses the preferred
production interface.
• If the preferred production interface is not active, and there is a preferred active
backup interface, the system uses the preferred backup interface.
• If the preferred production interface is not active (NAS server failover), and
there is no preferred backup interface, the system does nothing.
The Naming Services tab enables the user to define the Naming services to be
used: DNS, LDAP and/or NIS.
The Sharing Protocols tab enables the user to manage settings for file system
storage access. For Windows shares (SMB, CIFS) it provides the Active Directory
or Standalone options. For Linux/UNIX shares, it provides the NFS v3 and/or NFS
v4 options. The user can also enable the support for File Transfer Protocol and
Secure File Transfer Protocol. If a UNIX Directory Service is enabled in the Naming
Services tab, multiprotocol access to the file system may also be provided.
The other tabs of the NAS Server properties window enable the user to configure NDMP Backup, DHSM support, and Event Publishing. Extra configurable features include antivirus protection, Kerberos authentication, and remote protection.
You can verify the configuration of the network ports the NAS Server interfaces use
in the Settings configuration window.
From the Settings window, select the Ethernet option under the Access section.
From the Ethernet Ports page, settings such as link transmission can be verified
and changed.
To display information about a particular Ethernet port, select it from the list and
click the edit link.
The properties window shows details about the port, including the speed and MTU
size. The user can change both these fields if required.
The port speed can be set to 100 Mbps or 1 Gbps. The user can also set the port
to Auto Negotiate with the switch it is connected to.
The MTU for the NAS Server, Replication, and Import interfaces can be set to any value from 1280 to 9216 bytes; the default value is 1500 bytes. If you change the value, you must also change the MTU on all components of the network path, such as the switch ports and the host network interfaces.
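For example, on a Linux host, an interface can be set to a jumbo MTU to match the array and switch ports. This is a generic sketch in which eth0 is a placeholder interface name.
ip link set dev eth0 mtu 9000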
The File Systems page shows the list of created file systems with their sizes in
GB. Information includes the allocated capacity, the NAS server that is used to
share each one, and the pool it was built from.
From the File Systems page, you can create a file system, view its properties,
modify some of its settings, and delete it.
To see the details about a file system, select it from the list. The details about the
file system are displayed on the right pane.
To create a file system, click the “add” link from the File Systems page to launch
the Create a File System wizard.
To set the parameters for creating the file system, follow the steps of the wizard:
• Provisioning a file system involves selecting the NAS Server to associate with it.
So, before provisioning the file system a NAS server must be created. The
protocols the file system supports depend on the selected NAS server.
• On the next step of the wizard, you must enter a name and optionally enter a
description for the file system.
• The wizard enables the configuration of file-level retention for the file system. This feature is covered in more detail in the Scalability, Performance, and Compliance Features module.
• The storage administrator also defines the size of the file system and the pool to build it from. The capacity of the file system can be expanded after its creation.
− If not defined otherwise, a file system is thin provisioned by default. The only
time a user can define a file system as thick provisioned is at the moment it
is created. This setting cannot be changed later.
The Review section of the wizard shows the configuration, and the user can click
Finish to start the creation job.
The Results of the process are displayed on the last page, and the user can click
OK to close the wizard.
To view and modify the properties of a file system, click the edit icon.
The General tab of the properties window depicts the details about file system
utilization and free space. Also, the size of the file system can be expanded and
shrunk from this tab. The Capacity Alarm Setting link enables you to change the
settings for info, warning, and error alerts when a threshold for used space is
exceeded.
The other tabs of the file system properties window enable the user to:
• Configure and manage local and remote protection.
• For file systems built from traditional pools, the FAST VP tab is available. If a multitiered pool was used to build the file system, the tiering policy can be changed.
• Configure file system quotas.
You can also enable and disable the Data Reduction feature for the storage
resource. When the feature is enabled, the tab displays the achieved Data
Reduction savings that are measured in GB, percentage, and ratio. The Data
Reduction feature is discussed in the Storage Efficiency section.
The File System Quotas feature is discussed in the Storage Efficiency Features
section. The File Level Retention feature is discussed in the Scalability,
Performance, and Compliance Features section.
Access to File storage on the Dell Unity platform requires a NAS client having
connectivity to a NAS Server from the storage system.
NAS clients access file system shares over the SMB/CIFS or NFS protocols through a NAS server; NFS and vVol (File) VMware datastores are also provided from the file storage system.
Connectivity between the NAS clients and NAS servers is over the IP network
using a combination of switches, physical cabling, and logical networking. The Dell
Unity XT front-end ports can be shared, and redundant connectivity can also be
created by networking multiple switches together.
Storage must be provisioned on the storage system for the NAS client.
• NAS servers and file systems are created from storage pools.
• File system shares are configured based on supported sharing protocols
configured on the NAS Server.
The NAS client must then mount the shared file system. The mounting of file
systems is completed differently depending on the operating system.
NFS clients have a host configuration profile on the storage system with the
network address and operating systems defined. An NFS share can be created and
associated with host configuration. The shared file system can be mounted in the
Linux/UNIX system.
SMB clients do not need a host configuration to access the file system share. The
shared file system can be mounted to the Windows system.
ESXi hosts must be configured on the storage system by adding the vCenter
Server and selecting the discovered ESXi host. The host configuration can then be
associated with a VMware datastore in the Host Access tab of the datastore
properties. VAAI enables the volume to be mounted automatically to the ESXi host
after it is presented.
For Linux/UNIX NFS file system access, the user can configure host access using
a registered host (NAS client), or a list of NAS clients without registration. To create
a configuration profile and register a host, click the + icon and select the profile
type: Host, Subnet, or Netgroup.
Selecting the profile launches the wizard and steps the user through the process to
configure the profile.
• Users must enter the hostname at a minimum.
• While the host operating system information is not needed, providing the information enables a more specific setup.
• To customize access to NFS shares, the Network Address is required. The
Network Address is a name or IP address. No port information is allowed.
• Tenant information is not needed. Tenants are configured at the file system
level.
To manage file system shares created for Linux/UNIX hosts access, select File
from the Storage section. From the NFS Shares page, it is possible to create a
share, view the share properties, modify settings, or delete an existing NFS share.
To manage NFS shares, select Storage > File > NFS Shares
In the example, the vol/fs01 share is selected and the details are shown on the
right.
• The share is on the NAS Server nas01 with a file system fs01.
• The local path to the share is /fs01/ and the exported path to access the share
is: 192.168.64.182:/vol/fs01
• The share name can be a virtual name that is different from the real pathname.
• Access to the share is set to No Access, which is the default. The other options are Read-Only; Read/Write; Read/Write, allow Root; and Read-Only, allow Root.
To create an NFS share for a file system in Unisphere, go to the File option under
the Storage section and select the NFS Shares tab.
Select the Add (+) icon from the NFS Shares page. To create a share, follow the
Create an NFS Share wizard steps:
• Select the source file system for the new share.
• Provide an NFS share name and path. The user can also customize the anon
UID (and have it mapped to the uid of a user that has admin rights).
• Configure access to an existing host.
• Review the Summary page selections and Finish the configuration.
The example shows the wizard summary for a file system named fs02. The share
is the virtual name vol/fs02 with the local path of /fs02/. The default host access is
No Access and no customized host access is configured. The export path that is
used is the IP address of the NAS Server followed by the share.
To view and modify the properties of an NFS share, select it from the File > NFS
Shares page and click the pencil icon.
Based on the type of storage resource or share you set up, you may choose to
configure default access for all hosts. Users can also customize access to
individual hosts.
Unregistered hosts (NAS clients) can also be associated with an NFS share.
− Select the first option to enter a comma-separated list of unregistered hosts to add.
− Select the second option if you want to pick each one of the NAS clients.
The graphic displays the selection of Host access using a list of unregistered NAS
clients.
The default access permissions that are set for the share apply to any NAS client that does not have customized access configured.
The table shows the permissions that can be granted to a host when accessing a
file system shared over NFS.
Access Level: Description
• Read-Only: Hosts have permission to view the contents of the share, but not to write to it.
• Read/Write: Permission to view and write to the file system, but not to set permissions for it.
• Read/Write, allow Root: Permission to read and write to the file system, and to grant and revoke access permissions. For example, enables permission to read, modify, and execute specific files and directories for other login accounts that access the file system.
• Read-Only, allow Root: Hosts have permission to view the contents of the share, but not write to it. The root of the NFS client has root access to the share.
• When mounting the share, specify the network address of the NAS server and
the export path to the target share.
− Share address is a combination of NAS server network address and the
export path to the target share.
• To connect the shared NFS file system to the Linux/UNIX host, use the mount command (a concrete example follows this list).
− Linux command: mount -t nfs NAS_server:/<share_name> <directory>
• After mounting the share to the host, set the directory and file structure of the
share.
− Set the user and group permissions on the share's directories and files.
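Using the export shown earlier (192.168.64.182:/vol/fs01) as a hedged example, the mount command and a persistent /etc/fstab entry could look like the following; the local mount point /mnt/fs01 is a placeholder.
− Mount command: mount -t nfs 192.168.64.182:/vol/fs01 /mnt/fs01
− /etc/fstab entry: 192.168.64.182:/vol/fs01 /mnt/fs01 nfs defaults 0 0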
To manage file systems shares created for Windows host access, select File from
the STORAGE section in Unisphere.
Select the SMB Shares tab. The SMB Shares page shows the list of created
shares, with the NAS server, its file system, and its local path.
• From the SMB Shares page, create a share, view its properties, modify some
settings, or delete an existing SMB share.
• To view the details about a share, select the share from the list. The details are
shown on the right.
New SMB shares for a file system can be created from the SMB Shares page.
To launch the Create an SMB Share wizard, select the Add (+) icon.
− The wizard includes optional advanced SMB share features, which are explained on the next page.
Access level permissions for the SMB shares are controlled by the network access
controls set on the shares. No host configuration is necessary.
View and modify the properties of an SMB share by selecting the share and the
edit icon.
The General tab of the properties window provides details about the Share name
and location of the share: NAS Server, file system, Local Path, and the Export path.
The Advanced tab enables the configuration of advanced SMB share properties:
• Continuous availability gives host applications transparent, continuous access
to a share following a failover of the NAS server.
• Protocol encryption enables SMB encryption of the network traffic through the
share.
• Access-Based Enumeration filters the list of available files on the share to
include only the ones to which the requesting user has read access.
• Branch Cache Enabled copies content from the share and caches it at branch offices, enabling client systems at the branch offices to access the content locally rather than over the WAN.
• Map the share using the host user interface or CLI commands (see the example after this list).
− Specify the full Universal Naming Convention (UNC) path of the SMB share on a NAS server.
− In Windows Explorer, select Tools and Map Network Drive.
o Provide a drive letter and the UNC path to the file system share on the NAS server.
− From a command prompt:
o net use [device]: \\NAS_server\share_export_path
• Authentication and authorization settings are maintained on the Active Directory
server for NAS servers in the Windows domain.
− Settings are applied to files and folders on the SMB file systems
• Dell recommends the installation of the Dell CIFS Management snap-in on a
Windows system.
− The snap-ins are used to manage home directories, security settings, and
virus-checking on a NAS Server.
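For example, a share could be mapped from the Windows command prompt as sketched below; the NAS server name nas01 comes from the earlier NFS example, while the share name share01 and the drive letter are hypothetical.
net use Z: \\nas01\share01 /persistent:yes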
VMware Datastores
Datastore types supported: VMFS, NFS, and vVol.
Specialized VMware storage resources that are called datastores are provisioned from the Dell Unity XT platform. Unisphere supports the configuration of host profiles
that discover ESXi hosts that are managed by a vCenter server.
A VMware datastore is a storage resource that provides storage for one or more
VMware vSphere ESXi hosts.
• The datastore represents a specific quantity of storage capacity made available
from a particular Dell Unity XT LUN (Block) or NAS file system (File).
• The storage system supports storage APIs that discover and mount datastores
that are assigned to ESXi hosts within the VMware environment.
The provisioning of the VMware datastores using the GUI or CLI interfaces involves
defining the datastore type:
• VMFS (Block)
• NFS (File)
From the Datastores tab of the VMware page, the storage administrator can create
a datastore, view its properties, modify some of its settings, and delete it. The
Datastores tab shows the list of created VMware datastores with their sizes in GB, the
allocated and used capacity, and the type of datastore. The page also shows the
storage pool that is associated with the datastore, and the NAS server used for
NFS and vVol (File) datastores.
To create a datastore in Unisphere, you must click the “add” link from the
Datastores page to launch the Create VMware Datastore wizard.
Create VMware Datastore wizard with the tiering policy set to Auto-Tier.
Review the configuration settings for the datastore on the Summary step and select
Finish to start the job creation.
In a similar way, if creating an NFS datastore in Unisphere, the user must launch
the Create VMware Datastore wizard.
Create VMware Datastore wizard for an NFS datastore with the tiering policy set to Auto-Tier.
Review the configuration settings for the datastore on the Summary step and select
Finish to start the job creation.
The Results page shows the conclusion of the process with a green check mark for
a successful operation.
To view and modify the properties of a VMware datastore, select the View/Edit (pencil) icon.
The General tab of the properties window depicts the details about the datastore,
including its capacity utilization and free space. You can expand the size of the
datastore from this page.
For an NFS datastore, the General tab also shows the Host I/O size, the file
system format, the NAS server used, and the NFS export path.
For a VMFS or vVol (Block) datastore, the user can modify the datastore name and
change the SP ownership to balance the workload between SPs.
Before an ESXi host can access the provisioned storage, some pre-configuration
must be done to establish host connectivity to the storage system.
The ESXi host must have an adapter to communicate over the storage protocol.
• In the Fibre Channel environments, use a host bus adapter (HBA).
• For iSCSI, and NFS protocols a standard NIC can be used.
Having completed the connectivity between the host and the array, you are in a
position to provision datastores to the ESXi host.
The Summary page enables the storage administrator to review the ESXi hosts
profile, and conclude the configuration.
Host access to the VMware datastores over FC, iSCSI, or NFS protocol is defined
when selecting the host configuration to associate with the provisioned datastore.
In Unisphere, this operation can be accomplished when creating the datastore, or
later from the storage resource properties or the ESXi host properties window.
From the datastore properties window in Unisphere go to Host Access tab, and
follow these steps:
1. Select the Add icon to open the Select Host Access window.
2. Select one or more wanted ESXi hosts from the filtered list of host configuration profiles and select OK.
3. The newly added ESXi host is displayed in the list of hosts with granted access to the datastore. For VMFS datastores, the host is given an automatic Host LUN ID (HLU).
4. Optionally change the HLU assigned to the VMFS datastore.
To configure customized host access for the NFS datastore, set one of these
permission levels:
• No access: No access is permitted to the storage resource.
• Read-only: Permission to view the contents of the storage resource or share,
but not to write to it.
• Read/write: Permission to read and write to the NFS datastore or share. Only
hosts with "Read/Write" access are allowed to mount the NFS datastore using
NFSv4 with Kerberos NFS owner authentication.
• Read/write, enable Root: Permission to read and write to the file system, and
grant and revoke access permissions. For example, permission to read, modify
and execute specific files and directories for other login accounts that access
the file system. Only hosts with "Read/Write, enable Root" access are allowed
to mount the NFS datastore, using NFSv4 when NFS owner is set to root
authentication.
After a VMware datastore is created and associated with an ESXi host profile,
check to see if the block storage resource is discovered in the vSphere server.
New storage devices are displayed on the list as attached to the host.
The device Details section of the page displays the device properties and all the
created paths for the provisioned block storage.
From the vSphere Web Client, select the Datastores option under the Configure
section or the Datastores tab.
Verify that the datastores provisioned in Unisphere were automatically created and
presented to the ESXi host.
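If a manual check or mount is needed from the ESXi command line, the following hedged sketch can be used; the NFS export path and datastore name are hypothetical placeholders.
esxcli storage core adapter rescan --all
esxcli storage nfs add -H 192.168.64.182 -s /nfs-ds01 -v nfs-ds01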
vVols architecture: VMware vSphere and the vCenter Server use the VASA Provider for management communication, while the vVols data path runs through protocol endpoints to the storage container that holds the virtual volumes (vVols).
VMware Virtual Volumes (vVols) are storage objects that are provisioned
automatically by a VMware framework to store Virtual Machine (VM) data.
The Dell Unity XT platform supports the creation of storage containers for both
Block and File VMware vVol datastores deployments.
Virtual volumes support enables storage policy based management in the vSphere
environment.
• Use of storage profiles that are aligned with published capabilities to provision
virtual machines.
• The VMware administrator can build storage profiles using services levels,
usage tags, and storage properties.
Compatible arrays such as the Dell Unity XT storage systems can communicate with the ESXi server through VASA APIs based on the VASA 2.0 protocol. The communication is established through the management network using HTTPS.
Virtual volumes are storage objects that are provisioned automatically on a vVol
datastore and store VM data.
These objects are different than LUNs and file systems and are subject to their own
set of limits.
Virtual Volume (vVol) types:
• Config: Stores standard VM-level configuration data such as .vmx files, logs, NVRAM, and so on. At least one Config vVol must be created per VM to store its .vmx configuration file.
Storage administrator workflow: create storage pools, configure capability profiles, and create storage containers.
Provisioning Virtual Volumes (vVols) involves tasks that are performed on the storage system and others that are performed in the vSphere environment.
First, the storage administrator must create the storage pools to associate with the
VMware Capability Profiles.
Then the storage containers can be created by selecting the storage pool and
associated Capability Profile.
Workflow Step
Configure the Capability Profile. The graphic relates the storage policy characteristics (capacity, performance, availability, data protection, and security) to the vVol (Block) or vVol (File) datastore, its vDisks, and the associated capability profile.
Capability profiles define storage properties such as drive type, RAID level, FAST
Cache, FAST VP, and space efficiency (thin, thick). Also, service levels are
associated with the profile depending on the storage pool characteristics. The user
can add tags to identify how the vVol datastores that are associated with the
Capability Profile should be used.
To manage a capability profile, select VMware from the Storage section, and then
select Capability Profiles from the top submenu.
From the Capability Profiles page, it is possible to create a capability profile, view
its properties, modify some settings, and delete an existing capability profile.
The Capability Profile page shows the list of created VMware Capability Profiles and the pools they are associated with.
To see details about a capability profile, select it from the list and its details are
displayed on the right-pane.
To create a capability profile, click the “add” link from the Capability Profiles page to
launch the Create VMware Capability Profile wizard.
Follow the steps of the wizard to set the parameters for creating the capability
profile:
• Enter the capability profile name and description.
• Select the storage pool to associate the capability profile with.
• Enter any Usage Tags to use to identify how the associated vVol datastore
should be used.
• Then review the capability profile configuration, and click Finish to start the
operation. The results of the process are displayed.
Only after a capability profile is associated with a storage pool can you create a vVol datastore.
View/Modify Properties
To view and modify the properties of a capability profile, select it from the list, and
click the edit icon.
The Details tab of the properties window enables you to change the name of the
capability profile. Also, the Universally Unique Identifier (UUID) associated with the
VMware object is displayed here for reference.
The Constraints tab shows the space efficiency, service level, and storage
properties that are associated with the profile. The user can add and remove user
tags.
Workflow Step
Virtual volumes reside in the vVol datastores, also known as storage containers.
A vVol datastore is associated with one or more capability profiles.
Create the storage container. The vVol (Block) or vVol (File) datastore is built from one or more capability profiles, which reflect the characteristics of the pool (drive type, RAID, FAST Cache, FAST VP, space efficiency) and its service levels (Gold, Silver, Bronze). Hosts reach the datastore through NAS and SCSI protocol endpoints.
There are two types of vVol datastores: vVol (File) and vVol (Block).
• vVol (File) are virtual volume datastores that use NAS protocol endpoints for
I/O communication from the host to the storage system. Communications are
established using the NFS protocol.
• vVol (Block) are virtual volume datastores that use SCSI protocol endpoints for
I/O communication from the host to the storage system. Communications are
established using either the iSCSI or the FC protocols.
To create a vVol datastore, the user must launch the Create VMware Datastore
wizard, and follow the steps.
The user must define the type of vVol datastore to create: vVol (File) or vVol
(Block). If provisioning a vVol (File), a NAS server must have been created in the
system. The NAS server must be configured to support the NFS protocol and
vVols. Also, a capability profile must have been created and associated with a pool.
A name must be set for the datastore, and a description to identify it.
The user can then define the capability profiles to use for the vVol datastore.
The user can also determine how much space to consume from each of the pools
associated with the capability profiles. The capacity can be defined from the
Datastore Size (GB) column.
The datastore can also be associated with an ESXi host on another step of the
wizard.
The Summary page enables the user to review the configuration of the vVol
datastore before clicking Finish to start the creation job. The results of the process
are displayed on the final page of the wizard.
In the data path, virtual machines (VM) on the ESXi host reach their virtual volumes through protocol endpoints (PE), while the VASA Provider handles the management path and the storage policy characteristics (capacity, performance, availability, and data protection).
Protocol Endpoints or PEs establish a data path between the ESXi hosts and the
respective vVol datastores. The I/O from Virtual Machines is communicated
through the PE to the vVol datastore on the storage system.
A single protocol endpoint can multiplex I/O requests from many virtual machine
clients to their virtual volumes.
The Protocol Endpoints are automatically created when a host is granted access to
a vVol datastore.
• NAS protocol endpoints are created and managed on the storage system and correspond to a specific NFS-based NAS server.
• SCSI protocol endpoints simulate LUN mount points that enable I/O access to vVols from the ESXi host to the storage system.
− The block vVol is bound to the associated SCSI PE every time that the VM is powered on. When the VM is powered off, the PE is unbound.
End-to-end vVols provisioning workflow: the storage administrator creates storage pools, configures capability profiles, and creates storage containers. The VM administrator adds the vendor provider and vVol datastores, creates storage policies, and provisions VMs to those policies. Compliant VMs complete provisioning; non-compliant VMs alert the administrator.
The Dell Unity system must be registered as a storage provider on the vCenter
Server to use the vVol datastores. The VM administrator performs this task using
the IP address or FQDN of the VASA provider.
After the virtual machines are created using the storage policies, users can view
the volumes that are presented on the Virtual Volumes page in Unisphere.
Workflow Step
The Dell Unity XT storage system or the Dell UnityVSA must be added as a storage provider to the vSphere environment. The storage provider enables access to the vVols provisioned storage for the creation of storage policy-based virtual machines.
The vSphere administrator must launch a vSphere Web Client session to the
vCenter Server and open the Hosts and Clusters view.
Select the vCenter Server on the left pane, and from the top menu select the
Configure option and the Storage Providers option from the More submenu.
To add the Unity XT or UnityVSA system as a VASA vendor, open the New
Storage Provider window by clicking the Add sign.
• Enter a name to identify the entity.
• Type the IP address or FQDN of the VASA provider (the Dell Unity system) on
the URL field. The URL is a combination of the Dell Unity XT or UnityVSA
management port IP address, the network port, and the VASA version XML path. Ensure that you use the full URL format described in the slide.
• Next, type the credentials to log in to the storage system.
The first time the array is registered, a warning message may be displayed for the certificate. Click Yes to proceed and validate the certificate.
Workflow Step
The next step is the creation of datastores in the vSphere environment using the vVol
datastores that were created on the storage system.
The vVol (Block) or vVol (File) datastore is backed by a capability profile that reflects the characteristics of the pool (drive type, RAID, FAST Cache, FAST VP, space efficiency) and its service levels (Gold, Silver, Bronze). Storage policies (capacity, performance, availability, data protection, security) drive the provisioning, and hosts access the vDisks through NAS and SCSI protocol endpoints.
When vVol datastores (containers) are associated with an ESXi host in Unisphere,
they are seamlessly attached and mounted in the vSphere environment.
vVol datastores that are only created but not associated with an ESXi host still
show as available for use in the vSphere environment. The VMware administrator
must manually associate these storage containers with a datastore as explained
here.
Open the Hosts and Clusters view, and select the ESXi host from the left pane from
the vSphere Web Client.
The Datastores page is available by selecting the Datastores tab. The same page
can also be opened by selecting the Configure tab and the Datastores option on
the Storage section.
Workflow Step
Create the storage policies for virtual machines. These policies map to the
capability profiles associated with the pool that was used for vVol datastore
creation.
The VM storage policy maps to the vVol datastore through the VASA provider.
Launch the Create New VM Storage Policy wizard from the VM Storage Policies
page.
• Enter a name for the policy.
• Select the EMC.UNITY.VVOL data services rule type.
• Add the different tags that a datastore must comply with, such as usage tags,
service levels, and storage properties. In the example, only the usage tag was used.
• The next step shows all the available mounted datastores that are categorized
as compatible and incompatible. The administrator must select the datastore
that complies with the rules that were selected on the previous step.
After the wizard is complete, the new policy is added to the list.
Workflow Step
After the storage policies are created, the vSphere administrator can create new
virtual machines that use these policies.
Virtual machine vDisks are provisioned to the vVol (Block) or vVol (File) datastore through the VASA provider, according to the storage policy (capacity, performance, availability, data protection, security) and the associated capability profile.
To create a Virtual Machine from the storage policies, open the Hosts and Clusters
tab. Then from the vSphere Web Client session, select the ESXi host from the left
pane.
From the drop-down list of the Actions top menu, select New Virtual Machine,
and then the New Virtual Machine... option.
The wizard is launched, and the administrator can select to create a new virtual
machine.
• Enter a name, select the folder, and select the ESXi host on which the virtual
machine is created.
• Then on the storage section of the wizard, the administrator must select the VM
Storage Policy that was previously created from the drop-down list.
• The available datastores are presented as compatible and incompatible. The
administrator must select a compatible datastore to continue.
• The remaining wizard steps instruct administrators to select the following
parameters:
To manage virtual machine vVols, select VMware from the Storage section, and
then select Virtual Volumes from the top submenu.
From the Virtual Volumes page, it is possible to view the whole list of vVols stored
in the Dell Unity XT storage containers. The list shows the virtual machine each
vVol relates to, the storage container (datastore) used for VM provisioning, and the
capability profile used for storage policy driven provisioning.
The Dell Unity XT OE algorithm evenly balances the number of vVols across the SPs
during virtual machine provisioning. When new virtual volumes are created, the
number of resources on each SP is analyzed and the new vVols are used to
balance the counts. To view this information displayed on the list, you must
customize the query to include the SP owner column (shown in the example with a
yellow box).
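The balancing can be pictured as a simple count-based selection. The following Python sketch only illustrates the idea described above; the actual OE algorithm is internal to the system, and the function and data names are made up for the example.

def choose_sp_owner(vvol_counts):
    """Return the SP that currently owns the fewest vVols (illustrative heuristic)."""
    return min(vvol_counts, key=vvol_counts.get)

counts = {"SPA": 120, "SPB": 118}
owner = choose_sp_owner(counts)   # "SPB" is chosen because it owns fewer vVols
counts[owner] += 1
print(owner, counts)              # SPB {'SPA': 120, 'SPB': 119}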
The Virtual Volumes page also allows the user to view the properties of, or delete,
an individual virtual volume.
To see details about a virtual volume, select it from the list and its details are
displayed on the right-pane.
To view the properties of a virtual volume, select it from the list, and click the edit
icon.
The Binding Details tab is available on the properties page of all types of virtual
volumes. The tab displays information about the VMware protocol endpoints that
are associated with the access to the provisioned storage.
The Snapshots tab is displayed only on the properties page of a data virtual
volume. Native snapshots of individual VMDK (data) vVols are supported on Dell EMC
Unity XT storage systems. The Snapshots tab displays the list of vVol snapshots
that are created either in vSphere or Unisphere. The user can create manual
snapshots of the VMDK vVol and restore them.
The Host I/O Limit tab is displayed only on the properties page of a data virtual
volume. The tab collects information about the bandwidth and throughput that is
consumed by ESXi host access to the storage object.
This demo covers how to provision vVol (File) and vVol (Block) datastores on a
Dell Unity XT system or a Dell UnityVSA.
The video also demonstrates how to check details of the datastore properties
and how to expand the datastore.
The demo includes setting the storage system as a VASA provider to use the
provisioned storage container for storage policy-based provisioning of virtual
machines.
Movie:
5. Storage Resources
d. Dell Unity XT storage resources are categorized in storage pools, block
storage, file storage, and VMware datastores.
e. Dell Unity XT systems share pools with all the resource types: file systems,
LUNs, and the VMware datastores.
f. Storage pools are created using the SAS Flash, SAS, and NL-SAS drives.
6. Dynamic Pools
g. Dynamic Pools are supported on Dell Unity XT physical hardware only.
h. All pools that are created with Unisphere on Dell Unity XT AFA and HFA
systems are dynamic pools by default.
i. Dynamic pools reduce rebuild times by having more drives engaged in the
rebuild process.
- Data is spread out through the drive extent pool.
- The system uses spare space within the pool to rebuild failed drives.
j. A dynamic pool can be expanded by one or more drives to the system
limits.
7. Traditional Pools
a. The Dell UnityVSA uses traditional storage pools by default.
b. Traditional pools can be created on Dell Unity XT systems using the
UEMCLI, or REST API interfaces.
c. Traditional pools can be homogeneous (built from single tier) or
heterogeneous (multi-tiered).
8. Block Storage Provisioning
a. Block storage resources that are supported by the Dell Unity XT platform
include LUNs, Consistency Groups, and Thin Clones.
b. LUNs created from traditional heterogeneous pools use FAST VP tiering
policies for data relocation: Start High and Then Auto-Tier, Auto-Tier,
Highest Available Tier, and Lowest Available Tier.
c. Capability profiles must be associated with storage pools used for vVol
datastores, in order to enable the storage policy-based provisioning of
Virtual Machines.
d. The ESXi host must have an adapter to communicate over the storage
protocol: HBA (Fibre Channel) or standard NIC (iSCSI or NFS).
e. Host configurations for ESXi hosts are configured by adding the vCenter
Server and selecting the discovered hypervisor. The host configuration can
then be associated with a VMware datastore.
f. Protocol Endpoints establish a data path between the ESXi hosts and the
respective vVol datastores.
For more information, see the Dell EMC Unity Family Configuring
Hosts to Access Fibre Channel (FC) or iSCSI Storage, Dell EMC
Unity Family Configuring SMB File Sharing, Dell EMC Unity
Family Configuring NFS File Sharing, Dell EMC Unity Family
Configuring vVols, and Dell EMC Unity Family Configuring
Hosts to Access VMware Datastores on the Dell Technologies
Support site.
FAST Cache
• FAST Cache is a performance feature for Hybrid Unity XT systems that extends
the existing caching capacity.
• FAST Cache can scale up to a larger capacity than the maximum DRAM Cache
capacity.
• FAST Cache consists of one or more RAID 1 pairs [1+1] of SAS Flash 2 drives.
− FAST Cache reduces the load on the disks that the LUN is built from, which
ultimately contain the data.
− The data is flushed out of cache when it is no longer accessed as frequently
as other data.
− Subsets of the storage capacity are copied to FAST Cache in chunks of 64 KB
granularity.
Host I/O flows through the Policy Engine and Multicore Cache; the Memory Map tracks which 64 KB chunks reside in FAST Cache (SSDs) rather than on the HDDs.
Policy Engine - The FAST Cache Policy Engine is the software which monitors
and manages the I/O flow through FAST Cache. The Policy Engine keeps
statistical information about blocks on the system and determines what data is a
candidate for promotion. A chunk is marked for promotion when an eligible block is
accessed from spinning drives three times within a short amount of time. The block
is then copied to FAST Cache, and the Memory Map is updated. The policies that
are defined in the Policy Engine are system-defined and cannot be modified by the
user.
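The promotion rule can be sketched in a few lines of Python. The 60-second window, the tracking structures, and the function names below are assumptions made only for illustration; the actual policies are system-defined and cannot be modified.

from collections import defaultdict, deque

CHUNK_SIZE = 64 * 1024      # FAST Cache tracks data in 64 KB chunks
WINDOW_SECONDS = 60.0       # assumed "short amount of time"; the real value is system-defined
PROMOTE_AFTER = 3           # third access from spinning drives marks the chunk for promotion

recent_hits = defaultdict(deque)   # chunk number -> timestamps of recent HDD accesses
promoted_chunks = set()            # chunks whose data has been copied to FAST Cache

def record_hdd_access(offset_bytes, now):
    """Return True when this access makes the 64 KB chunk a promotion candidate."""
    chunk = offset_bytes // CHUNK_SIZE
    if chunk in promoted_chunks:
        return False                       # already in FAST Cache; served from SSD
    hits = recent_hits[chunk]
    hits.append(now)
    while hits and now - hits[0] > WINDOW_SECONDS:
        hits.popleft()                     # forget accesses outside the window
    if len(hits) >= PROMOTE_AFTER:
        promoted_chunks.add(chunk)         # copy the chunk and update the Memory Map
        return True
    return False

for t in (0, 10, 25):                      # three accesses to the same chunk in 25 seconds
    print(record_hdd_access(130_000, t))   # False, False, True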
Memory Map - The FAST Cache Memory Map contains information of all 64 KB
blocks of data currently residing in FAST Cache. Each time a promotion occurs, or
a block is replaced in FAST Cache, the Memory Map is updated. The Memory Map
resides in DRAM memory and on the system drives to maintain high availability.
When FAST Cache is enabled, SP memory is dynamically allocated to the FAST
Cache Memory Map. When an I/O reaches FAST Cache to be completed, the
Memory Map is checked. The I/O is either redirected to a location in FAST Cache
or to the pool to be serviced.
FAST Cache is only supported on the Dell Unity XT hybrid models. This is because
the data is already on flash drives on the All-Flash models. Dell Unity hybrid
models support 200 GB, 400 GB, or 800 GB SAS Flash 2 drives in FAST Cache,
dependent on the model. The Dell Unity XT hybrid models support 400 GB SAS
Flash 2 drives only. See the Dell Unity Drive Support Matrix documentation for
more information.
The table shows each Unity XT hybrid model, the SAS Flash 2 drives supported for
that model, the maximum FAST Cache capacities and the total Cache.
FAST Cache can only be created on physical Dell Unity XT hybrid systems with
available SAS Flash 2 drives. In Unisphere, FAST Cache is created from the Initial
Configuration Wizard, or from the system Settings page. In this example, there is
no existing FAST Cache configuration on the system and it is being created from
the system Settings page in the Storage Configuration section. From the FAST
Cache page, the Create button is selected. The Create FAST Cache wizard is
launched to configure FAST Cache. The system has 400 GB SAS FLASH 2 drives
available for creating FAST Cache. The drop-down list shows the total number of
eligible drives for the FAST Cache configuration. In this example, two drives are
selected for the FAST Cache configuration. The Enable FAST Cache for existing
pools option is checked in this example. Thus, FAST Cache will be enabled on all
existing pools on the system. Leave the option unchecked if you want to customize
which pools to have FAST Cache enabled and disabled on. The wizard continues
the FAST Cache creation process, creating the RAID group for the FAST Cache
configuration, then enables FAST Cache on the existing storage pools. The status
of the used disks can be seen from the FAST Cache Drives page.
Although FAST Cache is a global resource, it is enabled on a per pool basis. You
can enable a pool to use FAST Cache during pool creation. The Create Pool
wizard Tiers step has a checkbox option Use FAST Cache to enable FAST Cache
on the pool being created. The option is disabled if FAST Cache is not created on
the system. If FAST Cache is created on the system, the Use FAST Cache option
is checked by default.
Pool Properties
If FAST Cache was created on the system without the Enable FAST Cache on
existing pools option checked, it can be selectively enabled on a per-pool basis.
Select a specific pool to enable FAST Cache on and go to its Properties page.
From the General tab, check the Use FAST Cache option checkbox to enable
FAST Cache on the pool.
FAST Cache can be expanded online with the Dell Unity XT system. The
expansion is used to increase the configured size of FAST Cache online, without
impacting FAST Cache operations on the system. The online expansion provides
an element of system scalability, enabling a minimal FAST Cache configuration to
service initial demands. FAST Cache can later be expanded online, growing the
configuration as demands on the system are increased. Each RAID 1 pair is
considered a FAST Cache object. In the example shown, the system is configured
with a single RAID 1 pair providing the FAST Cache configuration.
FAST Cache configured with a single RAID 1 pair of SAS Flash 2 drives, holding empty, dirty, and clean pages, before the expansion starts.
To expand FAST Cache, free drives of the same size and type currently used in
FAST Cache must exist within the system. FAST Cache is expanded in pairs of
drives and can be expanded up to the system maximum. In the example shown, an
extra pair of SSD drives is being added to the existing FAST Cache configuration.
The FAST Cache expansion adds a second RAID 1 pair to the configuration.
The example shows the completion of the FAST Cache expansion. The
reconfiguration provides the new space to FAST Cache, and the space is available
for its operations.
When FAST Cache is enabled on the Dell Unity XT system, FAST Cache can be
expanded up to the system maximum. To expand FAST Cache from Unisphere, go
to the FAST Cache page found under Storage Configuration in the Settings
window. From this window, select Expand to start the Expand FAST Cache wizard.
Only free drives of the same size and type currently configured in FAST Cache are
used to expand FAST Cache. In this example, only 400 GB SAS Flash 2 drives are
available to be selected because FAST Cache is currently configured with those
drives. From the drop-down list, you can select pairs of drives to expand the
capacity of FAST Cache up to the system maximum. In this example, two drives
are being added to the current two drive FAST Cache configuration. After the
expansion, FAST Cache is configured with four drives arranged in two RAID 1 drive
pairs.
FAST Cache can be shrunk online with the Dell Unity XT system. Shrinking FAST
Cache is performed by removing drives from the FAST Cache configuration and
can be performed while FAST Cache is servicing I/O. In the following series of
examples, FAST Cache is shrunk by removing an existing pair of drives from the
FAST Cache configuration.
A FAST Cache shrink operation can be initiated at any time and is issued in pairs
of drives. A shrink operation allows the removal of all but two drives from FAST
Cache. Removing drives from FAST Cache can be a lengthy operation and can
impact system performance.
The FAST Cache shrink operation starts; one of the two RAID 1 pairs is selected for removal.
When the shrink operation starts, new promotions are blocked to each pair of drives
selected for removal from FAST Cache. Next, the FAST Cache dirty pages within the
drives being removed are cleaned. The dirty page cleaning ensures that data is
flushed to the LUN back-end disks.
The FAST Cache shrink completes.
After all dirty pages are cleaned within a set of drives, the capacity of the set is
removed from the FAST Cache configuration. For this example, the FAST Cache
configuration has been shrunk from two drive pairs down to a single drive pair.
Data which existed on FAST Cache drives that were removed may be promoted to
FAST Cache again through the normal promotion mechanism.
FAST Cache supports online shrink by removing drives from its configuration. It is
possible to remove all but one RAID 1 pair – each RAID 1 pair is considered a
FAST Cache object.
1: To shrink the FAST Cache, select the system Settings option in Unisphere and
navigate to the Storage Configuration section.
Select the Shrink option and the Shrink FAST Cache window opens.
2: In the drop-down list, select the number of drives to remove from the
configuration. In this example, the current FAST Cache configuration includes four
drives and two drives are being removed.
3: A message is displayed stating that removing the drives from FAST Cache
requires the flushing of dirty data from each set being removed to disk.
To remove all drives from FAST Cache, the Delete operation is used. FAST Cache
delete is often used when drives must be repurposed to a pool for expanded
capacity. The delete operation is similar to a shrink operation in that any existing
dirty pages must be flushed from FAST Cache to back-end disks. Then the disks
are removed from FAST Cache. The delete operation can consume a significant
amount of time, and system performance is impacted.
1: To Delete FAST Cache, select the system Settings option in Unisphere and go
to the Storage Configuration section. Select the Delete option and the Delete
message window opens.
2: The message states that deleting FAST Cache requires flushing all data
from the FAST Cache drives. Click Yes to confirm the delete operation.
Demonstration
Movie:
Dell Unity XT Host I/O Limits, also referred to as Quality of Service [QoS], is a feature
that limits I/O to storage resources: LUNs, attached snapshots, VMFS, and vVol
[Block] datastores. Host I/O Limits can be configured on physical or virtual
deployments of Dell Unity XT systems. Limiting I/O throughput and bandwidth
provides more predictable performance in system workloads between hosts,
applications, and storage resources.
Host I/O Limits are Active when the global feature is enabled, policies are created,
and assigned to a storage resource. Host I/O Limits provides a system-wide or a
specific host pause and resume control feature. Limits can be set by throughput, in
IOs per second [IOPS], or bandwidth, defined by Kilobytes or Megabytes per
second [KBPS or MBPS], or a combination of both types of limits. If both thresholds
are set, the system limits traffic according to the threshold that is reached first.
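The "whichever threshold is reached first" behavior can be sketched as follows in Python. This is a simplified illustration of the rule, not the array's internal throttling implementation; the per-second counters and names are assumptions for the example.

def io_allowed(iops_this_second, kb_this_second, io_size_kb, max_iops=None, max_kbps=None):
    """Allow an I/O only while both per-second counters stay under their limits."""
    if max_iops is not None and iops_this_second + 1 > max_iops:
        return False          # IOPS threshold reached first
    if max_kbps is not None and kb_this_second + io_size_kb > max_kbps:
        return False          # bandwidth threshold reached first
    return True

# A policy with 1000 IOPS and 4000 KBPS limits; 8 KB I/Os hit the bandwidth limit first.
print(io_allowed(499, 3996, 8, max_iops=1000, max_kbps=4000))   # False
print(io_allowed(999, 100, 8, max_iops=1000, max_kbps=4000))    # True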
Only one I/O limit policy can be applied to a storage resource. For example, an I/O
limit policy can be applied to an individual LUN or to a group of LUNs. When an I/O
limit policy is applied to a group of LUNs, it can also be shared. When a policy is
shared, the limit applies to the combined activity from all LUNs in the group. When
a policy is not shared, the same limit applies to each LUN in the group.
Host I/O Limits are based on user-created policies. Limits can be set by throughput (IOPS) or bandwidth, with both system-wide and individual policy pause/resume control.
The Host I/O Limit feature is useful for service providers to control service level
agreements.
• Absolute:
− An absolute limit applies a maximum threshold to a storage resource
regardless of its size.
• It can be configured to limit the amount of I/O traffic up to a threshold
amount based on IOPS, bandwidth or both. If both thresholds are set, the
storage system limits traffic according to the threshold that is reached
first. The limit is also shared across resources.
• Burst configuration supported
• Density-based:
Absolute
In the example, there are three LUNs under the same policy. Setting an absolute
policy for the LUNs would limit each LUN to 1000 IOPS regardless of LUN size.
Density-Based
The density-based Host I/O Limit configuration is calculated by multiplying the
Resource Size by the Density Limit that is set by the storage administrator.
After it is set, the Host I/O Limits driver throttles the IOPS based on the calculation.
Shared Policies
Host I/O Limit allows administrators to implement a shared policy when the initial
Host I/O policy is created. The policy is in effect for the life of that policy and cannot
be changed. Administrators must create another policy with the Shared check box
cleared if they want to disable the setting. When the Shared check box is cleared,
each individual resource is assigned a specific limit or limits. When the Shared
check box is selected, the resources are treated as a group, and all resources
share the limits that are applied in the policy.
In the example, a Host I/O Limit policy has been created to limit the number of
hosts IOPS to 100. In this case, both LUN 1 and LUN 2 share this limit. Shared
limits do not guarantee the limits are distributed evenly. From the example with a
shared limit of 100 IOPS, LUN 1 can service I/O at 75 IOPS and LUN 2 can service
25 IOPS. Also, if limits are shared across Storage Processors, it does not matter if
the LUNs are owned by different SPs. The policy applies to both.
SPA LUN 1
SPB LUN 2
The Density-based Shared Host I/O Limit calculation takes the combined size of all
resources sharing the policy multiplied by the Density Limit set by the Storage
Administrator. After it is set, the Host I/O Limits driver throttles the IOPS based on
the calculation. In the example, LUN A is a 100 GB LUN and LUN B is a 500 GB LUN,
so the calculation is [100 + 500] (combined resource size) x 10 (Density Limit).
This sets the maximum number of IOPS to 6,000.
Multiple resources can be added to a single density-based Host I/O Limit policy.
Each resource in that policy can have a different limit that is based on the capacity
of the resource. If a Storage Administrator decides to change the capacity of a
given resource, the new capacity is now used in the calculation when configuring
the IOPS.
In this example, a LUN resource is configured at 500 GB at initial sizing and the
density-based limit is configured at 10. The resulting limit is 5000 IOPS based on the
calculation [500 x 10 = 5000].
For attached snapshots, the maximum IOPS is determined by the size of the
resource [LUN] at the point in time that the snapshot was taken. In the example, a
snapshot was created for LUN A. Using the same density limit of 10, the maximum
number of IOPS would be 1000 for the snapshot [100 x 10 = 1000 IOPS].
For the attached snapshot of the 100 GB LUN A, the maximum applies to the size of the resource at the point in time of the snapshot.
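The density-based arithmetic from the examples above can be expressed in a short Python sketch; the helper names are illustrative only.

def density_limit_iops(size_gb, density_iops_per_gb):
    """Limit for one resource: resource size multiplied by the density limit."""
    return size_gb * density_iops_per_gb

def shared_density_limit_iops(sizes_gb, density_iops_per_gb):
    """Shared policy: combined size of all resources multiplied by the density limit."""
    return sum(sizes_gb) * density_iops_per_gb

print(density_limit_iops(500, 10))                # 5000 IOPS for the 500 GB LUN
print(shared_density_limit_iops([100, 500], 10))  # 6000 IOPS shared by LUN A and LUN B
print(density_limit_iops(100, 10))                # 1000 IOPS for the snapshot of the
                                                  # 100 GB LUN at the time it was taken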
When configuring density-based limits, there are minimum and maximum values
the user interface accepts. The values are shown below. If a user tries to configure
a value outside the limits, the box is highlighted in Red to indicate that the value is
incorrect. An example is shown below. Hover over the box to view the max allowed
value.
The Burst feature typically allows for one-time exceptions that are set at some
user-defined frequency. This allows circumstances such as boot storms to occur
periodically. For example, if a limit setting was configured to limit IOPS in the
morning, you may set up an I/O Burst policy for some period to account for possible
increased login traffic. The Burst feature provides Service Providers with an
opportunity to upsell an existing SLA. Service Providers can afford end users the
opportunity to use more IOPS than the original SLA called for. If applications are
constantly exceeding the SLA, they can go back to the end user and sell additional
usage of the extra I/Os allowed.
− Provides insight into how much more I/O the end user is consuming
o Usage of extra I/Os allowed may warrant a higher limit
Burst Creation
Users can select the Optional Burst Settings from the Configuration page of the
wizard when creating a Host I/O Limit policy. Also, if there is an existing policy in
place, users can edit that policy anytime to create a Burst configuration. Users can
configure the duration and frequency of when the policy runs. This timing starts
from when the Burst policy is created or edited. It is not tied in any way to the
system or NTP server time. Having the timing run in this manner prevents several
policies from running simultaneously, say at the top of the hour. Burst settings can
be changed or disabled at any time by clearing the Burst setting in Unisphere.
Burst Configuration
The For option is the duration in minutes to allow burst to run. This setting is not a
hard limit and is used only to calculate the extra I/O operations that are allocated
for bursting. The actual burst time depends on I/O activity and can be longer than
defined when activity is lower than the allowed burst rate. The For option
configurable values are 1 to 60 minutes.
The Every option is the frequency to allow the burst to occur. The configurable
setting is 1 hour to 24 hours. The example shows a policy that is configured to
allow a 10% increase in IOPS and Bandwidth. The duration of the window is 5
minutes, and the policy will run every 1 hour.
Burst settings: Burst % (percentage increase) can be set from 1% to 100%; Every (how often, in hours) can be set from 1 hour to 24 hours.
The example shows how an I/O burst calculation is configured. The policy allows a
number of extra I/O operations to occur based on the percentage and duration that
the user inputs.
In this case, the absolute limit is 1000 IOPS with a burst percentage of 20%. The
policy is allowed for a five-minute period and will reset at 1-hour intervals. The
number of extra I/O operations in this case is calculated as: 1000 x 0.20 x 5 x 60 =
60,000. The policy will never allow the IOPS to go above this 20% limit, 1200 IOPS
in this case. After the additional I/O operations allocated for bursting are depleted,
the limit returns to 1000 IOPS. The policy cannot burst again until the 1-hour
interval ends.
Note that the extra number of burst I/O operations are not allowed to happen all at
once. The system will only allow the 20% increase to the configured I/O limit of
1000 IOPS for the burst. In this case, the system would allow a maximum of 1200
IOPS for the burst duration of 5 minutes.
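The burst arithmetic described above can be written as a small Python sketch (limit x burst percentage x For minutes x 60 seconds, with the rate capped at the limit plus the burst percentage). The function name is illustrative.

def burst_budget(limit_iops, burst_fraction, for_minutes):
    """Return the extra I/O operations per burst window and the burst IOPS ceiling."""
    extra_ios = int(limit_iops * burst_fraction * for_minutes * 60)
    ceiling_iops = int(limit_iops * (1 + burst_fraction))
    return extra_ios, ceiling_iops

extra, ceiling = burst_budget(1000, 0.20, 5)
print(extra)     # 60000 extra I/O operations available per "Every" interval
print(ceiling)   # 1200 IOPS maximum rate during the burst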
Burst Scenarios
Shown here are two scenarios that may be encountered when configuring a Burst
limit. In the first case, the Host target I/O is always above the Host I/O limit and
Burst limit. Both a Host I/O Limit and a Burst Limit are configured, but the incoming
Host target I/O continually exceeds these values.
In the second scenario, the Host target I/O is above Host I/O Limit, but below the
Burst Limit. The Host IOPS generated are somewhere in between these two limits.
Burst Scenario 1
Host target I/O is always above the Host I/O limit and Burst limit. Both a Host I/O
Limit and a Burst Limit are configured, but the incoming Host target I/O continually
exceeds these values.
Host target I/O stays above the Host I/O Limit and Burst Limit
• IOPS will never go above the Burst Limit ceiling
− If Burst is 20%, then only 20% more IOPS allowed at any point in time
• Duration of the extra IOPS matches the “For” setting
− Note: This is not a set window
• Once all extra IOPS have been consumed, the burst allowance ends
For this scenario, the duration of the extra IOPS matches the “For” setting. For the
scenario where the host target I/O is below the burst limit, the burst duration
window will be longer. Once all the extra I/O operations have been consumed, the
burst allowance ends and only the Host I/O Limit is applied for the remainder of the
defined burst limit policy period. Extra burst I/O will be available again in the next
burst limit policy period.
Here is a graph showing the Total incoming IOPS on the “Y” axis and the time in
minutes (60 min) on the “X” axis. The Host I/O Limits are configured to be a
maximum of 1000 IOPS with a burst percentage of 20 (1200 IOPS). The duration of
the burst is 5 minutes and will refresh every hour. The graphics show that the Host
target IOPS is around 1500, well above the Host I/O and Burst Limit settings. This
is the I/O that the host is performing. The blue line is what the Host I/O limit is, so
we will try to keep the I/O rate at this limit of 1000 IOPS. The Burst Limit is the limit
that was calculated from the user input and is at 1200 IOPS. The policy will never
allow the IOPS to go above the burst limit. It also means that you will match the
“For” window for the duration period since the Host I/O is always above the other
limits. The I/O comes in and is throttled by the Host I/O Limit of 1000 IOPS. I/O
continues up until the 42-minute mark where it comes to the 5-minute window.
During this period, the I/O is allowed to burst to 1200 IOPS.
Graph of the total incoming IOPS over time, showing the Host I/O Limit (1000 IOPS) and the Burst Limit (1200 IOPS). Burst: 20%, For: 5 minutes, Every: 1 Hour.
Let us look a bit closer at how the Burst feature throttles the I/Os. The calculations
are the same as the previous scenario where the total number of extra I/O
operations was based on the Limit x % x For x Frequency. So, we calculated
60,000. The I/O burst period starts, and a calculation is taken between minute 39
and 40 (60 seconds). In that 60 seconds, an extra 200 IOPS is allowed (1200 –
1000), so 200 x 60 produces the value of 12,000 I/O operations. So, every 60-
second sample period allows 12,000 I/O operations.
Minute 1
Our “For” value is 5 minutes, so in a 5-minute period we should use our 60,000
extra I/O operations. (12000 * 5 = 60,000). The 12,000 is subtracted from our total
of 60,000 for each 60 sec. period (60,000 – 12,000 = 48,000). This continues for
the frequency of the burst. Every 60-second period subtracts an additional 12,000
I/O operations until the allotted extra I/O operations value is depleted.
Minute 2
Again, this continues for the frequency of the burst. Every 60-second period
subtracts an additional 12,000 I/O operations until the allotted extra I/O value is
depleted. Here, another 12,000 is subtracted from our total of 60,000 for this 60
sec. period (60,000 – 12,000 – 12,000 = 36,000).
Minute 3
The burst is continuing therefore another 12,000 is subtracted from our total of
60,000 (60,000 – 12,000 – 12,000 -12,000 = 24,000).
Minute 4
Since the burst continues, another 12,000 is subtracted from our total of 60,000
(60,000 – 12,000 – 12,000 -12,000 – 12,000 = 12,000). This happens as long as
the Host I/O rate is above our calculated values during the period. The extra I/O
operations are used within the 5-minute window.
Minute 5
Since the burst is still continuing, an additional and final 12,000 I/O operations are
subtracted and now the allotted extra I/O value is depleted. During the burst, since
the Host I/O rate was always above our calculated values during this period, the
extra I/O operations were used within the 5-minute window. Once the burst
frequency ends, it will start again in 1 hour as determined by the “Every” parameter.
In this scenario, a Host I/O Limit and Burst Limit are configured, and the incoming
Host target I/O continually exceeds these values.
Movie:
Burst Scenario 2
In this case, the Host target I/O is above the Host I/O Limit but below the Burst
Limit. The Host IOPS generated are somewhere in between these two limits.
In this second scenario, the same calculations are used as in the previous slides;
however, the Host I/O being generated is around 1100 IOPS, right between our two
limits of 1000 and 1200. As Host I/O continues, we see at the 39-minute mark the
start of the I/O burst that in this case is a 10-minute period. The thing to note is the
I/O does not cross the 1100 IOPS since this is all the I/O the host was attempting to
do. Also, since the number of IOPS is smaller, it continues to run for a longer
period before the total Extra I/O count is reached.
Graph of the total incoming IOPS over time for the second scenario, where the burst lasts 10 minutes.
Look at the calculations for this scenario. The Host I/O is between the two limits
and is only generating 1100 IOPS. The difference between the Host I/O Limit of
1000 and the actual Host I/O is 100 IOPS. So, the calculation is based on 100 x 60
= 6,000 I/O operations.
The total number of I/O operations calculated based on the original numbers is
60,000 I/O operations. So, for each 60-second period 6,000 I/O operations get
subtracted from the 60,000 I/O operation total. Effectively, this doubles the “For”
time since it will take 10 minutes to deplete the 60,000 I/O operations that the burst
limit allows. So even though the “For” period was 5 minutes, the number of IOPS
allowed were smaller, thus allowing for a longer period of the burst than the
configured time.
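The per-minute draw-down in both scenarios can be reproduced with the following Python sketch. It only models the arithmetic walked through in this lesson, under the assumption that the host offers a constant I/O rate; the names are illustrative.

def burst_duration_minutes(limit_iops, ceiling_iops, host_iops, budget_ios):
    """Simulate how many 60-second samples it takes to deplete the burst budget."""
    extra_per_second = min(host_iops, ceiling_iops) - limit_iops
    if extra_per_second <= 0:
        return None                           # the host never exceeds the base limit
    minutes = 0
    while budget_ios > 0:
        budget_ios -= extra_per_second * 60   # extra I/Os consumed in this sample
        minutes += 1
    return minutes

print(burst_duration_minutes(1000, 1200, 1500, 60000))   # 5  (scenario 1: 200 extra IOPS)
print(burst_duration_minutes(1000, 1200, 1100, 60000))   # 10 (scenario 2: 100 extra IOPS)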
Minute 1
Since the “For” period is set for 5 minutes, and that the number of IOPS allowed
were smaller this allows for a longer period of 10 minutes of burst than the
configured 5 minutes.
Therefore, for a 10-minute period we should use our 60,000 extra I/O operations
(6,000 x 10 = 60,000). Now only 6,000 is subtracted from our total of 60,000 for
each 60-second period (60,000 – 6,000 = 54,000). This continues for the frequency
of the burst. Every 60-second period will subtract an additional 6,000 I/O operations
until the allotted extra I/O value is depleted.
Minute 2
Here, another 6,000 is subtracted from our total of 60,000 for this 60-second period
[60,000 – 6,000 – 6,000 = 48,000].
Minute 3
Another 6,000 is subtracted from our total of 60,000 for this 60-second period
(60,000 – 6,000 – 6,000 – 6000 = 42,000).
Minute 4
Another 6,000 is subtracted from our total of 60,000 for this 60-second period
(60,000 – 6,000 – 6,000 – 6000 – 6000 = 36,000).
Minutes 5 through 10
This continues until the extra I/O operations for the burst are depleted. As you can
see, even though the “For” period was 5 minutes, the number of I/O operations per
60 seconds were smaller and allowed for a longer period of burst than the
configured time.
In this scenario, the Host target I/O is above the Host I/O Limit, but below the Burst
Limit.
Movie:
Here are the available policy level controls and status conditions that are displayed
in Unisphere.
Host I/O Limits provides the ability to pause and resume a specific host I/O limit.
This feature allows each configured policy to be paused or resumed independently
of the others, whether or not the policy is shared. Pausing the policy stops the
enforcement of that policy. Resuming the policy immediately starts the enforcement
of that policy and throttles the I/O accordingly. There are three status conditions for
Host I/O Limit policies: Active, Paused, or Global Paused.
Pause stops the enforcement of the policy, and Resume starts the enforcement of the policy. The policy Status can be Active, Paused, or Global Paused.
The table looks at the policies and their relationship for both System Settings and
Policy Status. When a policy is created, the policy is displayed as Active by default.
System Settings are global settings and are displayed as either Active or Paused.
When the System Settings are displayed as Active, the Policy Status is displayed as
Active or Paused depending on the status of the policy when the System Settings
were changed.
System Settings Active: Policy Status displays Active or Paused (depending on the individual policy setting).
System Settings Paused: Policy Status displays Global Paused for policies that were Active.
System Settings Paused: Policy Status displays Paused for policies that were individually Paused.
For example, if the System Setting was “Active” and the user had configured three
policies A, B and C. A user could pause A, and the system would update the status
of “A” to “Paused.” The other two policies B and C would still display an “Active”
status. At this point, if the user decided to change the System settings to "Pause,"
the Policy Status would be displayed as "Global Paused" on policies B and C but
"Paused" on A.
When both the System setting and Policy Setting are “Paused,” the Policy Status
will be shown as “Paused.”
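The relationship in the table can be summarized with a short Python sketch; the function is purely illustrative of how the displayed status is derived.

def displayed_status(system_setting_active, policy_paused):
    """Return the status shown in Unisphere for one policy."""
    if policy_paused:
        return "Paused"
    return "Active" if system_setting_active else "Global Paused"

# System setting Active: policies display Active or Paused.
print(displayed_status(True, False), displayed_status(True, True))
# System setting Paused: active policies display Global Paused; paused policies stay Paused.
print(displayed_status(False, False), displayed_status(False, True))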
To set a Host I/O Limit at the System level, click the Settings icon then go to
Management > Performance. The Host I/O Limits Status is Active. All host I/O
limits are being enforced now. The limits can be temporarily lifted by clicking
Pause.
If there are “Active” policies, you can pause the policies on a system-wide basis.
Once you select “Pause,” you will be prompted to confirm the operation. (Not
shown)
The Performance > Host I/O Limits page shows the policies that are affected by
the Pause. In the example, three policies display a Status of “Global Paused”
indicating a System-wide enforcement of those policies.
The Host I/O Limits Status now displays a “Paused” Status, and users can
“Resume” the policy. Select Resume to allow the system to continue with the
throttling of the I/O according to the parameters in the policies.
1 - Select Policy
The example displays the Host I/O Limits policies from Unisphere under System >
Performance > Host I/O Limits window. There are several policies that are
created, three of which show a default status of Active. The Density_Limit_1 policy
is selected.
2 - Pause Policy
From the More Actions tab, users have the option to Pause an Active session
(Resume is unavailable).
3 - Confirm Pause
Once the Pause option is selected, a warning message is issued to the user to
confirm the Pause operation.
4 - Verify Pause
Selecting Pause starts a background job and, after a few seconds, causes the
Status of the policy to be displayed as Paused. All other policies are still Active since
the pause was done at the Policy level, not the System level.
Demonstration
These demos show how to set up different types of host I/O limit policies. Click the
associated links to view the videos.
Topics Link
In Dell Unity XT systems, the UFS64 architecture allows users to extend file
systems. UFS64 file system extend operations are transparent to the client,
meaning the array can still service I/O to a client during extend operations.
• On physical Dell Unity XT systems, the maximum size a file system can be
extended to is 256 TB.
− The maximum file system size on Dell UnityVSA is defined by its license.
• The capacity of thin and thick file systems can be extended by manually
increasing their total size.
• Auto-extension works only on thin file systems
− Thin file systems are automatically extended by the system based on the
ratio of used-to-allocated space.
− File systems automatically extend when used space exceeds 75% of the
allocated space.
− Auto-extension operation happens without user intervention and does not
change the advertised capacity.
For thin-provisioned file systems, the manual extend operation increases visible
or virtual size without increasing the actual size allocated to the file system from the
storage pool.
For thick file systems, the manual extend operation increases the actual space
allocated to the file system from the storage pool.
Manually extending a thin UFS64 file system increases the advertised size from the old size to the new size; extending a thick UFS64 file system increases the space allocated from the pool.
The system automatically allocates space for a thin UFS64 file system as space is
consumed. Auto extend happens when the space consumption threshold
is reached. The threshold is the percentage of used space in the file system
allocated space (system default value is 75%). It cannot exceed the file system
visible size. Only allocated space increases, not the file system provisioned size.
The file system cannot auto-extend past the provisioned size.
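The auto-extend decision can be sketched in Python as follows. The 75% threshold comes from the text; the extension step size and the function itself are assumptions for illustration only, not the OE implementation.

def auto_extend(used_gb, allocated_gb, provisioned_gb, threshold=0.75, step_gb=10):
    """Grow the allocation when used space exceeds the threshold, never past the provisioned size."""
    if allocated_gb and used_gb / allocated_gb > threshold:
        return min(allocated_gb + step_gb, provisioned_gb)
    return allocated_gb

print(auto_extend(80, 100, 1000))   # 110 -> used space crossed 75% of the allocation
print(auto_extend(50, 100, 1000))   # 100 -> below the threshold, no extension
print(auto_extend(98, 100, 100))    # 100 -> cannot extend past the provisioned size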
In Dell Unity XT, the UFS64 architecture enables the reduction of the space the file
system uses from a storage pool.
• UFS64 architecture allows the underlying released storage of the file system to
be reclaimed.
• The storage space reclamation is triggered by the UFS64 file system shrink
operations.
• UFS64 shrink operations can be:
− Manually initiated by user for both thin and thick file systems.
− Automatically initiated only on thin file systems when the storage system
identifies allocated, but unused, storage space that can be reclaimed back to
the storage pool.
A storage administrator can manually shrink the provisioned size of a thin or thick-
provisioned file system into, or within, the allocated space.
The thin-provisioned file system currently has 450 GB of space allocated from the
storage pool. The allocated space consists of 250 GB of Used Space and 200 GB
of Free Space. The system performs any evacuation that is necessary to allow the
shrinking process on the contiguous free space.
A 1 TB thin-provisioned UFS64 file system with 450 GB of allocated space in a 1 TB pool. Step 2, Evacuation: portions of the file system block address space are evacuated to create contiguous free space within the 450 GB allocation.
In the example, the provisioned space for the thin-provisioned file system is
reduced by 700 GB. The total storage pool free space is increased by 150 GB. The
file system Allocated Space and Pool Used Space are decreased. The Allocated
space after the shrink drops below the original allocated space, enabling the
storage pool to reclaim the space. Observe that the only space that is reclaimed is
the portion of the shrink that was in the original allocated space of 450 GB. This is
because the remaining 550 GB of the original thin file system was virtual space that
is advertised to the client.
Step 3, Truncate File System: the provisioned space is reduced by 700 GB to 300 GB, and the pool allocation drops to 300 GB.
Thin-provisioned file systems are automatically shrunk by the system when certain
conditions are met. Automatic shrink improves space allocation by releasing any
unused space back to the storage pool. The file system is automatically shrunk
when the used space is less than 70% [system default value] of the allocated
space after a period of 7.5 hours. The file system provisioned size does not shrink,
only the allocated space decreases.
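A companion Python sketch for the automatic shrink condition is shown below. The 70% threshold and 7.5-hour period come from the text; how far the allocation is reduced is an assumption made only to illustrate the decision.

def auto_shrink(used_gb, allocated_gb, hours_below_threshold, threshold=0.70, period_hours=7.5):
    """Release unused allocation when usage stays below the threshold for the full period."""
    if allocated_gb and used_gb / allocated_gb < threshold and hours_below_threshold >= period_hours:
        return max(used_gb, int(allocated_gb * threshold))   # illustrative shrink target
    return allocated_gb

print(auto_shrink(200, 450, 8.0))   # 315 -> allocation shrinks; pool space is reclaimed
print(auto_shrink(400, 450, 8.0))   # 450 -> usage above 70%, nothing is released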
To change the size of a file system, select the File page under the Storage section
in Unisphere. Then select the File System tab from the top menu. The properties of
the File System can be launched by double-clicking the File System from the list or
by clicking the pencil icon from the menu on the top of the File Systems list. From
the General tab, the size of the file system can be extended by increasing the Size
field. To shrink the file system, you must decrease the Size field. The Apply button
must be selected to commit the changes. The change to the file system
configuration [size and percentage of allocation space] will be displayed in the list.
In this example, the fs02 file system size is manually set to 110 GB.
FLR Overview
A NAS server file system with FLR enabled; files can be in the Not locked, Append-only, Locked (WORM), or Expired state.
File-level Retention (FLR) protects files from modification or deletion through SMB,
NFS, or FTP access based on a specified retention date and time. The retention
period can be increased but cannot be reduced. The FLR use case is for file data
content archival and compliance needs. FLR is also beneficial in preventing users
from accidental file modification and deletion.
For full details of the FLR feature, reference the Dell EMC Unity: File-Level
Retention (FLR) white paper available on Dell EMC Online Support.
FLR can only be enabled during the creation of a file system. Once FLR is enabled
for a file system, it cannot be disabled after the file system is created. Therefore, it
is critical to know if FLR is required at file system creation time. When a file system
is enabled for FLR, a nonmodifiable FLR clock is started on the file system. The
FLR clock is used to track the retention date. An FLR activity log is also created on
the file system when it is FLR enabled. The activity log provides an audit record for
files stored on the file system.
There are two different types of FLR: FLR-E (Enterprise) and FLR-C (Compliance).
FLR-E protects file data that is locked from content changes that are made by SMB
and NFS users regardless of their administrative rights and privileges. An
appropriately authorized Dell EMC Unity administrator (with the Unisphere
Administrator or Storage Administrator role) can delete an FLR-E enabled file
system, even if it contains locked files.
FLR-C protects file data that is locked from content changes that are made by SMB
and NFS users regardless of their administrative rights and privileges. File systems
containing locked files cannot be deleted by any authorized Dell EMC Unity
administrative role. FLR-C enabled file systems are compliant the Securities and
Exchange Commission (SEC) rule 17a-4(f) for digital storage. FLR-C also includes
a data integrity check for files that are written to an FLR-C enabled file system. The
data integrity check affects write performance to an FLR-C enabled file system.
Files within an FLR enabled file system have different states: Not Locked, Append-
only, Locked, and Expired.
Not Locked: All files start as not locked. A not locked file is an unprotected file that
is treated as a regular file in a file system. In an FLR file system, the state of an
unprotected file can change to Locked or remain as not locked.
Append-only: Users cannot delete, rename, or modify the data in an append-
only file, but users can add data to it. A use case for an append-only file is to
archive logfiles that grow over time. The file can remain in the append-only state
forever. However, a user can transition it back to the Locked state by setting the file
status to read-only with a retention date.
Locked: Also known as “Write Once, Read Many” (WORM). A user cannot modify,
extend, or delete a locked file. The path to locked files is protected from
modification. That means a user cannot delete or rename a directory containing
locked files. The file remains locked until its retention period expires. An
administrator can perform two actions on a locked file: 1. Increase the file retention
date to extend the existing retention period. 2. If the locked file is initially empty,
move the file to the append-only state.
Expired: When the retention period ends, the file transitions from the locked state
to the expired state. Users cannot modify or rename a file in the expired state, but
can delete the file. An expired file can have its retention period extended such that
the file transitions back to the locked state. An empty expired file can also transition
to the append-only state.
The FLR feature is supported on the entire Dell Unity family of storage systems. It
is available on all physical Dell Unity XT models and the Dell UnityVSA.
The Dell Unity Replication feature supports FLR. When replicating an FLR enabled
file system, the destination file system FLR type must match the source. If the
replication session is created with the Unisphere GUI, the system automatically
creates the destination file system to match the source file system FLR type. If the
replication session is being created with UEMCLI, the destination file system
provisioning and FLR type selection are done manually.
FLR enabled file systems are supported with NDMP backup and restore
operations. The retention period and permissions of files are captured in the
backup but the file lock status is not. When an FLR enabled file system is restored
with NDMP, read-only files are restored as locked files. Append-only files are
restored as normal files.
FLR supports the Dell EMC File Import feature. If the source VNX file system
imported is FLR enabled, the target Dell Unity file system is migrated as a type
matched FLR enabled file system. The source VNX must be DHSM enabled. The
DHSM credentials are used when the import session is created on the Dell Unity
system.
FLR supports the Dell Unity Snapshots feature. FLR-C file systems support read-
only snapshots but do not support snapshot restore operations. FLR-E file systems
support read-only and R/W snapshots, and support snapshot restores. When an
FLR-E file system is restored from a snapshot, the FLR file system clock is set back
in time, corresponding to the snapshot time. Note that the change to the FLR clock
effectively extends the retention period of locked files.
The first step in the process is to enable FLR on the file system. It must be done at
file system creation time. The file system creation wizard includes a step to enable
FLR where either the FLR-E or FLR-C type can be selected. If FLR-C is selected,
there is a separate step to enable its data integrity check. The data integrity check
is controlled by the writeverify NAS Server parameter.
The next step is to define retention period limits for the file system. This is done
within the FLR step of the file system creation wizard. The retention period limits can
also be defined after the file system is created from the FLR tab of the file system
Properties. A minimum limit, a default limit, and a maximum limit are defined for the
FLR enabled file system.
The next step is to set a lock or append-only state for files on the file system. There
is a process to set a file to the lock state and a process to set a file to the append-
only state. For NFS files, setting the file state is done from an NFS client. For SMB
files, setting the file state is done using the FLR Toolkit application. A retention time
can also be placed on files in an automated fashion by the system. This is enabled
from the FLR tab of the file system Properties.
Enabling a file system for FLR is only done during the creation of the file system in
the Create File System wizard. The FLR step of the wizard by default has the FLR
option Off. Select either Enterprise to enable FLR-E or select Compliance to enable
FLR-C. The example illustrates FLR-C being enabled for the file system.
When the user confirms to enable FLR, options are exposed for defining the
minimum, default, and maximum retention periods for FLR. Shown in the example
are the default retention periods for FLR-C. The retention period values can also be
defined after the file system creation from the FLR tab of the file system Properties.
The retention periods for the file system are covered on a following slide.
~ writeverify disabled ~
When FLR-C is enabled on a file system, the user must also turn on the data
integrity check. It is required for compliance before files are locked on the file
system. The NAS Server FLRCompliance.writeverify parameter controls the
data integrity check. The parameter is set using the svc_nas CLI command from
an SSH session to the system. When the parameter is enabled, all write operations
on all FLR Compliance file systems mounted on the NAS Server are read back and
verified. The integrity check ensures that the data has been written correctly. The
system performance may degrade during this procedure due to the amount of work
being performed.
In the example, the first svc_nas command is used to check if the parameter is
enabled. From its output, the current value is set to 0 indicating that writeverify is
disabled.
The second svc_nas command sets the value of the parameter to 1, to enable
writeverify.
This example illustrates the information and retention period configuration available
from the FLR tab of the file system Properties.
The FLR Type for the file system is shown. In this example, the file system has
been enabled for Compliance. Also displayed is the number of protected files. In
this example, the file system has no protected files. The FLR clock time is displayed.
The tab also displays the date when the last protected file expires.
An FLR enabled file system has retention period limits that can be customized to
user needs. Retention periods define how short or long a user can define a file to
be locked. The retention periods can be set within the FLR step of the File System
Creation wizard as seen previously. The Retention periods can also be configured
any time after the file system is created. This example illustrates the retention
periods that are defined for the file system. The respective tables show the default,
minimum and maximum values for each of the retention periods.
The Minimum Retention Period value specifies the shortest time period a user can
specifically lock files for. The value of the Minimum Retention Period must be less
than or equal to the Maximum Retention Period value.
The Default Retention Period specifies the time period a file is locked for when the
user does not explicitly set a retention time for files. The Default Retention Period is
also used when automation is configured to lock files on the file system. The
Default Retention Period value must be greater than or equal to the Minimum
Retention Period value. It must also be less than or equal to the Maximum
Retention Period value.
The Maximum Retention Period specifies the longest time period that files can be
locked for. The value must be greater than or equal to the Minimum Retention
Period value.
Note: The FLR retention periods can be modified at any time. The modification only
affects the retention times for newly locked files by the user or automation.
Previously locked files remain unchanged.
The file state of locked or append-only is set using an NFS client that is mounted to
the exported file system.
A file lock state is achieved by setting the last access time of the file to the wanted
file retention date and time, and then changing the file permission bits to read-only.
To set the file last access date and time, use the touch command with the –at
option and the wanted retention date and time. In the example, a file that is named
lockedfile has its last access time set to 23:59, Dec 31, 2024 as shown in the ls
output for the file. Then the file is set to read-only using the chmod command with
the –w option to remove the write permission.
When setting the locked file retention date and time, it must be equal to or less than
the Maximum Retention Period defined on the file system. Any attempt to set a file
retention date and time greater than the Maximum Retention Period results in the
retention date and time setting equal to the Maximum Retention Period setting. In a
similar manner, any attempt to set a file retention date and time less than the
Minimum Retention Period results in the retention date and time setting equal to
the Minimum Retention Period setting. Files that are locked without specifying a
retention date and time results in the retention date and time setting equal to the
Default Retention Period setting.
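The same steps, including the clamping behavior described above, can be expressed in Python instead of touch and chmod. As with those commands, this only has the intended effect when run from an NFS client against an FLR enabled file system; the retention limits and file name in this sketch are examples only.

import os, stat, time
from datetime import datetime

def lock_file(path, requested_epoch, minimum_epoch, default_epoch, maximum_epoch):
    """Clamp the retention time, set it as the last access time, and remove write permissions."""
    if requested_epoch is None:
        retention = default_epoch                                   # no time given: use the default
    else:
        retention = min(max(requested_epoch, minimum_epoch), maximum_epoch)
    st = os.stat(path)
    os.utime(path, (retention, st.st_mtime))                        # atime = retention date and time
    os.chmod(path, st.st_mode & ~(stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH))  # read-only
    return datetime.fromtimestamp(retention)

open("lockedfile", "a").close()                                     # example file on the NFS mount
now = time.time()
# Example limits: minimum 1 day, default 1 year, maximum 10 years from now.
print(lock_file("lockedfile", now + 20 * 365 * 86400,
                now + 86400, now + 365 * 86400, now + 10 * 365 * 86400))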
CLI: flrapply
Properties
Windows does not have a native UI/CLI to set retention date and time to lock files.
The Dell FLR Toolkit is an application available for download from Dell Online
Support. Install the application on a Windows client in the same domain as the FLR
enabled file system to be accessed. The application uses the Windows API
SetFileTime function for setting retention date and time to lock files on FLR enabled
file systems. The toolkit includes a CLI function called flrapply. Another aspect of
the FLR toolkit is an enhancement to Windows Explorer. An FLR Attributes tab is
available in Windows Explorer file Properties. The FLR toolkit also has an FLR
Explorer which has FLR related reporting and retention time capabilities. FLR
Explorer is not shown in this training.
FLR Toolkit requires that DHSM be enabled on the NAS Server that is associated
with the FLR enabled file system. Do not check Enforce HTTP Secure when
enabling DHSM on the NAS Server.
The examples illustrate setting retention date and time on a file to set its lock state.
In the flrapply CLI example, an SMB file is set to the lock state with a retention date
and time of 12:00 PM May 8, 2024. The second example illustrates the Windows
Explorer FLR Attributes tab enhancement in the file properties window. The tab
displays the FLR expiration date of the file. The example illustrates the retention
date and time being extended on the file to 12:00 PM Aug 8, 2024. As with NFS,
when specifying file retention dates and times, they must be within the Minimum
and Maximum Retention Period values. If not, the settings defined for the Retention
Period are used to lock the file.
Files can be locked through automation on FLR enabled file systems using options
available on the FLR tab of the file system Properties. The automation options are
disabled by default.
When the Auto-lock New Files option is enabled, the Auto-lock Policy Interval
configuration is exposed. The system automatically locks files if they are not
modified for a user specified time period, defined by the Auto-lock Policy Interval.
Automatically locked files use the Default Retention Period setting. Files in append-
only mode are also subject to automatic locking.
When enabled, the Auto-delete Files When Retention Ends option automatically
deletes locked files after their retention date and time have expired. The auto-
delete happens at 7-day intervals. Its timer starts when the auto-delete option is
enabled.
m. In Dell Unity XT, the UFS64 architecture enables the reduction of the space
the file system uses from a storage pool.
- A storage administrator can manually shrink the size of a provisioned file
system.
- Thin-provisioned file systems are automatically shrunk by the system
when the used space is less than 70% of the allocated space.
57. File-level Retention (FLR)
For more information, see the Dell EMC Unity Family Configuring
Pools, Dell EMC Unity: NAS Capabilities, and Dell EMC Unity:
File-Level Retention (FLR) on the Dell Technologies Support site.
Data Reduction
In general, Data Reduction reduces the amount of physical storage that is required
to save a dataset. Data Reduction helps reduce the Total Cost of Ownership (TCO)
of a Dell Unity XT storage system.
Data Reduction is supported on thin storage resources, including:
• Thin LUNs within Consistency Groups
• Thin VMware VMFS and NFS Datastores
Data Reduction can also be enabled on Block and File storage resources
participating in replication sessions. The source and destination storage resources
in a replication session are independent. Data Reduction with or without the
Advanced Deduplication option can be enabled or disabled separately on the
source and destination resource.
Considerations
• Hybrid pools created on Unity XT model systems support Data Reduction with
and without Advanced Deduplication enabled on Traditional or Dynamic pools.
• To support Data Reduction, the pool must contain a flash tier and the total
usable capacity of the flash tier must meet or exceed 10% of the total pool
capacity.
− Data Reduction can be enabled on an existing resource if the flash capacity
requirement is met.
• The Advanced Deduplication switch is available only on:
For Data Reduction enabled storage resources, the Data Reduction process occurs
during the System Cache proactive cleaning operations or when System Cache is
flushing cache pages to the drives within a Pool. The data in this scenario may be
new to the storage resource, or the data may be an update to existing blocks of
data currently residing on disk.
In either case, the Data Reduction algorithm occurs before the data is written to the
drives within the Pool. During the Data Reduction process, multiple blocks are
aggregated together and sent through the algorithm. After determining if savings
can be achieved or data must be written to disk, space within the Pool is allocated
if needed, and the data is written to the drives.
Process:
72. System write cache sends data to the Data Reduction algorithm during
proactive cleaning or flushing.
73. Data Reduction logic determines any savings.
74. Space is allocated in the storage resource for the dataset if needed, and the
data is sent to the disk.
Data is sent to the Data Reduction algorithm during proactive cleaning or flushing
of write path data.
In the example, an 8 KB block enters the Data Reduction algorithm and Advanced
Deduplication is disabled.
• The 8 KB block is first passed through the deduplication algorithm. Within this
algorithm, the system determines if the block consists entirely of zeros, or
matches a known pattern within the system.
• If a pattern is detected, the private space metadata of the storage resource is
updated to include information about the pattern, along with information about
how to re-create the data block if it is accessed in the future.
• Also, when deduplication finds a pattern match, the remainder of the Data
Reduction feature is skipped for those blocks, which saves system resources.
None of the 8 KB block of data is written to the Pool at this time.
• If a block was allocated previously, then the block can be freed for reuse. When
a read for the block of data is received, the metadata is reviewed, and the block
is re-created and sent to the host.
• If a pattern is not found, the data is passed through the Compression Algorithm.
If savings are achieved, space is allocated on the Pool to accommodate the
data.
• If the data is an overwrite, it may be written to the original location if it is the
same size as before.
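The decision flow for a single 8 KB block (with Advanced Deduplication disabled) can be
summarized with the following illustrative sketch; zlib stands in for the array's
proprietary compression, and the pattern set is hypothetical.

```python
import zlib

BLOCK_SIZE = 8192
ZERO_BLOCK = bytes(BLOCK_SIZE)

def reduce_block(block, known_patterns):
    """Illustrative model only: pattern/zero detection first, then compression."""
    if block == ZERO_BLOCK or block in known_patterns:
        # Private space metadata records the pattern; nothing is written to the Pool.
        return ("pattern", None)
    compressed = zlib.compress(block)
    if len(compressed) < len(block):
        # Space is allocated for the smaller footprint and the compressed data is written.
        return ("compressed", compressed)
    # No savings: the block is written as-is.
    return ("unreduced", block)

print(reduce_block(bytes(BLOCK_SIZE), set())[0])   # 'pattern'
print(reduce_block(b"A" * BLOCK_SIZE, set())[0])   # 'compressed'
```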
The example displays the behavior of the Data Reduction algorithm when
Advanced Deduplication is disabled.
Data Reduction flow with Advanced Deduplication disabled: the 8 KB block enters the
deduplication algorithm, which looks for zeros and common patterns; when a pattern is
detected, the private space metadata is updated with a pattern reference and processing
ends
Each 8 KB block receives a fingerprint, which is compared to the fingerprints for the
storage resource. If a matching fingerprint is found, deduplication occurs. The
private space within the resource is updated to include a reference to the block of
data residing on disk. No data is written to disk at this time.
Advanced Deduplication flow: a fingerprint is calculated for each 8 KB block and
compared against the fingerprint cache; on a match, the private space of the resource
is updated to reference the existing data on disk; on a miss, the block continues to
the compression algorithm and the fingerprint cache is updated
Through machine learning and statistics, the fingerprint cache determines which
fingerprints to keep, and which ones to replace with new fingerprints. The
fingerprint cache algorithm learns which resources have high deduplication rates
and allows those resources to consume more fingerprint locations.
• If no fingerprint match is detected, the blocks enter the compression algorithm.
• If savings can be achieved, space is allocated within the Pool which matches
the compressed size of the data, the data is compressed, and the data is written
to the Pool. When Advanced Deduplication is enabled, the fingerprint for the
block of data is also stored with the compressed data on disk.
• The fingerprint cache is then updated to include the fingerprint for the new data.
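A simplified model of the fingerprint comparison is sketched below; SHA-256 is used
only as a stand-in for the system's internal fingerprint, and the cache replacement
logic described above is omitted.

```python
import hashlib

fingerprint_cache = {}   # fingerprint -> reference to the block already on disk

def advanced_dedupe(block, new_block_ref):
    """Return an existing block reference on a fingerprint match (dedupe hit),
    or None on a miss, in which case the block continues to compression."""
    fp = hashlib.sha256(block).digest()
    if fp in fingerprint_cache:
        return fingerprint_cache[fp]       # reference the data already on disk
    fingerprint_cache[fp] = new_block_ref  # remember the fingerprint of the new write
    return None
```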
Read Operation
Data Reduction can be enabled on resources that are built from hybrid flash pools
within the Dell Unity XT 380, 480, 680 and 880 systems.
The properties page of a multi-tiered pool that includes SAS Flash 3 and SAS drives
Flash Capacity
To support Data Reduction, the proportion of flash capacity on the pool must be
equal to or exceed 10% of the total capacity of the pool.
This Flash Percent (%) value allows enabling data reduction for resources that are built from the
pool
Storage Resource
Enabling Data Reduction is only possible if the pool flash percent value
requirements are met.
• For pools with a lower flash capacity, the feature is unavailable and a message
is displayed.
• Advanced Deduplication is also supported for the data reduction enabled
resources.
Both Data Reduction and Advanced Deduplication can be enabled for a LUN built from Pool 0
The proportion of flash capacity utilization can also be verified from the Details
pane of a selected pool.
Pools page with selected pool showing the flash capacity utilization on the details pane
In the example, Pool 1 is selected and the details pane shows that the pool has
57% of flash capacity utilization.
Add the Data Reduction and Advanced Deduplication columns to the resources
page, to verify which resources have the features enabled.
LUNs page showing the Data Reduction and Advanced Deduplication columns
In the example, all the LUNs created on the dynamic pool Pool 0 are configured
with data reduction and advanced deduplication.
To monitor the Data Reduction Savings, select the storage resource and open the
properties page. The savings are reported in GBs, percent savings, and as a ratio
on the General tab.
LUN properties showing the Data Reduction Savings on the General tab
In the example, the properties of LUN-1 show a data reduction savings of 38.5 GB.
The percentage that is saved and ratio reflect the savings on the storage resource.
To remove Data Reduction savings for block resources, use the Move operation.
For file resources, since there is no Move operation, users can use host-based
migration or replication. For example, you can asynchronously replicate a file
system to another file system within the same pool using UEMCLI commands.
Data Reduction stops for new writes when sufficient resources are not available
and resumes automatically after enough resources are available.
LUN properties showing that enabling Data Reduction by selecting its checkbox makes the
Advanced Deduplication checkbox available for selection; with Data Reduction cleared,
the LUN has neither feature enabled
To review which Consistency Groups contain Data Reduction enabled LUNs, select
the Consistency Group tab, which is found on the Block page.
On this page, columns that are named Data Reduction and Advanced
Deduplication can be added to the current view.
• Click the Gear Icon and select Data Reduction or Advanced Deduplication
under Column.
• The Data Reduction and Advanced Deduplication columns have three potential
entries, No, Yes, and Mixed.
− No is displayed if none of the LUNs within the Consistency Group has the
option enabled.
− Yes is displayed if all LUNs within the Consistency Group have the option
enabled.
− Mixed is displayed when the Consistency Group has some LUNs with Data
Reduction enabled and other LUNs with Data Reduction disabled.
The Local LUN Move feature, also known as Move, provides native support for
moving LUNs and VMFS Datastores online between pools or within the same pool.
This ability allows for manual control over load balancing and rebalancing of data
between pools.
Local LUN Move leverages Transparent Data Transfer (TDX) technology, a multi-
threaded, data copy engine. Local LUN Move can also be leveraged to migrate a
Block resource's data to or from a resource with Data Reduction or Advanced
Deduplication enabled.
If Advanced Deduplication is supported and enabled, the data also passes through
the Advanced Deduplication algorithm. This allows space savings to be achieved
during the migration.
When migrating to a resource with Data Reduction disabled, all space savings that
are achieved on the source are removed during the migration.
Launch Wizard
To add drives from different tiers to an All-Flash pool (with Data Reduction enabled
resources), select the pool and Expand Pool.
Details pane of a dynamic pool showing the Flash Percent, and number of storage resources built
from it
In the example, the All-Flash pool Pool 2 has four LUNs with Data Reduction and
Advanced Deduplication enabled.
Select Tiers
Select the storage tiers with available drives to expand the pool, and optionally
change the hot spare capacity for new added tiers.
The Performance and Capacity tiers are selected. The Extreme Performance
tier cannot be selected since there are no unused drives.
Select Drives
Select the amount of storage from each tier to add to the All-Flash pool. The
number of drives must comply with the RAID width plus the hot spare capacity that
is defined for the tier.
In the example, six SAS drives comply with the RAID 5 (4+1) plus one hot spare
requirement. Seven NL-SAS drives comply with the RAID 6 (4+2) plus one hot
spare requirement.
If the expansion does not cause an increase in the spare space the pool requires,
the new free space is made available for use. When extra drives increase the spare
space requirement, a portion of the space being added is reserved. The reserved
space is equal to the size of one or two drives per 32.
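The sizing rule can be approximated as one spare drive's worth of capacity per 32
drives of a given type, as in this rough sketch (the exact reservation is determined by
the system):

```python
import math

def reserved_spare_drives(drive_count, drives_per_spare=32):
    # Roughly one drive's worth of spare space per 32 drives of a drive type.
    return math.ceil(drive_count / drives_per_spare)

print(reserved_spare_drives(13))   # 1 drive of spare space for the 13 added drives
print(reserved_spare_drives(40))   # 2 drives of spare space
```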
Summary
Verify the proposed configuration with the expanded drives. Select Finish to start
the expansion process.
In the example, the expansion process adds 13 drives to the All-Flash pool and
converts it to a multi-tiered pool.
Pool Expanded
Verify the pool conversion on the details pane. The number of tiers and drives
increased. Observe that the flash percent supports the Data Reduction.
Details pane of a selected pool showing the Flash Percent, number of tiers and drives
In the example, the expansion included two new tiers and added 13 new drives to
the pool. The flash percent is over 10% which ensures that Data Reduction is
supported.
If the additional capacity drops the Flash Percent value below 10%, the expansion
of a dynamic pool with Data Reduction enabled resources is blocked.
• To support Data Reduction, the proportion of flash capacity must be equal to or
exceed 10% of the total capacity of the pool.
• If the requirement is not met, the wizard displays a warning message when
trying to advance to the next step.
In the example, the Flash Percent capacity utilization of Pool 0 is 19%, and the
addition of 14 NL-SAS drives reduces the value below the 10% requirement.
To conclude the expansion, select a number of drives that keep the flash percent
capacity utilization within the Data Reduction requirements.
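The effect of an expansion on the flash percentage can be estimated ahead of time. The
drive capacities below are hypothetical and only chosen to be consistent with the 19%
starting point in the example:

```python
def flash_percent(flash_tb, total_tb):
    return 100.0 * flash_tb / total_tb

flash, other = 19.0, 81.0                      # TB, giving 19% flash before expansion
print(flash_percent(flash, flash + other))     # 19.0 -> Data Reduction supported

added = 14 * 8.0                               # 14 NL-SAS drives at a hypothetical 8 TB each
print(flash_percent(flash, flash + other + added))   # ~9.0 -> below 10%, expansion blocked
```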
In Hybrid pools, the metadata has priority over user data for placement on the
fastest drives.
• Algorithms ensure that metadata goes to the tier that provides the fastest access.
Usually this is the Extreme Performance (SAS Flash) tier.
• If necessary, user data is moved to the next available tier (Performance or
Capacity), to give space to metadata created on the pool.
The system monitors how much metadata is created as the resources grow, and
the pool space is consumed.
• The system automatically estimates how much metadata can be created based
on the free capacity.
− The estimate considers the amount of metadata that is generated, the pool
usable capacity, and the free space within each tier.
FAST VP tries to relocate user data out of the Flash tier to free space for metadata.
In this situation, the pool status changes to OK, Needs Attention (seen from the
Pools page, and the pool properties General tab). A warning informs the
administrator to increase the amount of flash capacity within the pool to address
the issue.
In the example, the system identifies metadata blocks to move from the
Performance to the Extreme Performance tier.
To review the status of Data Reduction and Advanced Deduplication on each of the
LUNs created on the system, go to the Block page in Unisphere. The page can be
accessed by selecting Block under Storage in the left pane.
To add these and other columns to the view, click the Gear Icon in the upper right
portion of the LUNs tab and select the columns to add under the Columns option.
Data Reduction provides savings information at many different levels within the
system, and in many different formats.
• Savings information is provided at the individual storage resource, pool, and
system levels.
• Savings information is reported in GBs, percent savings, and as a ratio.
• Total GBs saved includes the savings due to Data Reduction on the storage
resource, Advanced Deduplication savings, and savings which are realized on
any Snapshots and Thin Clones taken of the resource.
• The percentage that is saved and the ratio reflect the savings within the storage
resource itself. All savings information is aggregated and then displayed at the
Pool level and System level.
Space savings information in the three formats is available within the Properties
window of the storage resource.
For LUNs, you must either access the Properties page from the Block page, or on
the LUN tab from within the Consistency Group Properties window.
Shown is the total GBs saved, which includes savings within data used by
Snapshots and Thin Clones of the storage resource. Also shown is the % saved
and the Data Reduction ratio, which both reflect the savings within the storage
resource. File System and VMware VMFS Datastores display the same
parameters.
Data Reduction savings are shown on the General tab within the LUN Properties
Window.
Data Reduction information is also aggregated at the Pool level on the Usage tab.
Savings are reported in the three formats, including the GBs saved, % savings, and
ratio.
• The GBs savings reflect the total amount of space saved due to Data Reduction
on storage resources and their Snapshots and Thin Clones.
• The % saved and the Ratio reflect the average space that is saved across all
Data Reduction enabled storage resources.
System level Data Reduction Savings information is displayed within the System
Efficiency view block that is found on the system Dashboard page. If the view
block is not shown on your system, you can add it by selecting the Main tab,
clicking Customize, and adding the view block.
The system level aggregates all savings across the entire system and displays
them in the three formats available, GBs saved, % saved, and ratio.
• For the GBs saved, this value is the total amount of space saved due to Data
Reduction, along with savings achieved by Snapshots and Thin Clones of Data
Reduction enabled storage resources.
• The % savings and ratio are the average savings that are achieved across all
data reduction enabled storage resources.
Overview
The space reporting updates affect the System, Pool, and Storage Resource values.
Users can use the formulas displayed here to calculate and verify the Data Reduction
Savings percentage and ratio for the System, Pools, and Storage Resources.
Example
The example shows the calculation for Data Reduction savings ratio on a LUN.
Data Reduction ratio = (44.7 GB + 45.7 GB) / 44.7 GB = 90.4 GB / 44.7 GB = 2.02:1
Example
The example displays the formula for calculating the Data Reduction percentage
savings.
Data Reduction % savings = 45.7 GB / (45.7 GB + 44.7 GB) * 100 = 45.7 GB / 90.4 GB * 100 = 51%
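The same two calculations can be scripted, for example when post-processing uemcli or
REST API output; the 44.7 GB and 45.7 GB figures are taken from the example above.

```python
used_gb = 44.7      # capacity the LUN consumes after savings
saved_gb = 45.7     # capacity saved by Data Reduction

ratio = (used_gb + saved_gb) / used_gb
percent = saved_gb / (used_gb + saved_gb) * 100
print(f"Data Reduction ratio:   {ratio:.2f}:1")   # 2.02:1
print(f"Data Reduction savings: {percent:.0f}%")  # 51%
```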
Overview
Storage resources using Data Reduction can be replicated using any supported
replication software. Native Synchronous Block Replication or Native
Asynchronous Replication to any supported destination is supported.
• Replication can occur to or from a Dell Unity XT that does not support Data
Reduction.
• Data Reduction can be enabled or disabled on source or destination
independently.
As data is migrated from the source VNX system to the Dell Unity XT system, it
passes through the Data Reduction algorithm as it is written to the Pool.
File Import
Data Reduction and Advanced Deduplication with Native File and Block Import
FAST VP
FAST VP Overview
When reviewing the access patterns for data within a system, most access patterns
show a basic trend. Typically, the data is most heavily accessed near the time it
was created, and the activity level decreases as the data ages. This trending is
also seen as the lifecycle of the data. Dell EMC Unity Fully Automated Storage
Tiering for Virtual Pools - FAST VP monitors the data access patterns within pools
on the system.
FAST VP classifies drives into three categories, called tiers. These tiers are:
• Extreme Performance Tier – Comprised of Flash drives
• Performance Tier – Comprised of SAS drives
• Capacity Tier – Comprised of NL-SAS drives
Dell EMC Unity has a unified approach to create storage resources on the system.
Block LUNs, file systems, and the VMware datastores can all exist within a single
pool, and can all benefit from using FAST VP. In system configurations with
minimal amounts of Flash, FAST VP uses the Flash drives for active data,
regardless of the resource type. For efficiency, FAST VP uses low cost spinning
drives for less active data. Access patterns for all data within a pool are compared
against each other. The most active data is placed on the highest performing drives
according to the storage resource’s tiering policy. Tiering policies are explained
later in this document.
Tiering Policies
FAST VP Tiering policies determine how the data relocation takes place within the
storage pool. The available FAST VP policies are displayed here.
The Tier label is used to describe the various categories of media used within a
pool. In a physical system, the tier directly relates to the drive types used within the
pool. The available tiers are Extreme Performance Tier using Flash drives, the
Performance Tier using SAS drives, and the Capacity Tier using NL-SAS drives.
On a Dell EMC UnityVSA system, a storage tier of the virtual drives must be
created manually. The drives should match the underlying characteristics of the
virtual disk.
The table shows the available tiering policies with its description, and the initial tier
placement which corresponds to a selected policy.
Users can select the RAID protection for each of the tiers being configured when
creating a pool. A single RAID protection is selected for each tier; after the RAID
configuration is selected and the pool is created, it cannot be changed. A RAID
protection can be selected again only when you expand the pool with a new drive type.
This table shows the supported RAID types and drive configurations.
When considering a RAID configuration that includes many drives (12+1, 12+2, 14+2),
consider the tradeoffs that the larger drive counts carry. Larger drive counts and
drive sizes can lead to longer rebuild times and larger fault domains.
FAST VP Management
The user can change the system-level data relocation configuration using the
Global settings window.
Select the Settings option on the top of the Unisphere page to open the Settings
window.
Storage Pool
FAST VP relocation at the pool level can be also verified from the pool properties
window.
In Unisphere, select a pool and click the edit icon to open its properties window.
Then select the FAST VP tab.
You also have the option to manually start a data relocation by clicking the Start
Relocation button. To modify the FAST VP settings, click the Manage FAST VP
system settings link in the upper right side of the window.
At the storage resource level, the user can change the tiering policy for the data
relocation.
In Unisphere, select the block or file resource and click the pencil icon to open its
properties window. Then select the FAST VP tab.
From this page, it is possible to edit the tiering policy for the data relocation.
The example shows the properties for LUN_2. The FAST VP page displays the
information of the tiers that are used for data distribution.
Thin Clones
A Thin Clone is a read/write copy of a thin block storage resource that shares
blocks with the parent resource. Thin Clones created from a thin LUN, Consistency
Group, or the VMware VMFS datastore form a hierarchy.
A Base LUN family is the combination of the Base LUN, and all its derivative Thin
Clones and snapshots. The original or production LUN for a set of derivative
snapshots, and Thin Clones is called a Base LUN. The Base LUN family includes
snapshots and Thin Clones based on child snapshots of the storage resource or its
Thin Clones.
Data available on the source snapshot is immediately available to the Thin Clone.
The Thin Clone references the source snapshot for this data. Data resulting from
changes to the Thin Clone after its creation is stored on the Thin Clone.
A snapshot of the LUN, Consistency Group, or VMFS datastore that is used for the
Thin Clone create and refresh operations is called a source snapshot. The original
parent resource is the original parent datastore or Thin Clone for the snapshot on
which the Thin Clone is based.
Thin Clones are created from attached read-only or unattached snapshots with no
auto-deletion policy and no expiration policy set. Thin Clones are supported on all
Dell Unity models including Dell UnityVSA.
In the example, the Base LUN family for LUN1 includes all the snapshots and Thin
Clones that are displayed in the diagram.
Diagram of the Base LUN family for LUN 1: snapshots Snap 1 through Snap 5, Thin Clone 1
and Thin Clone 2 derived from them, and the applications (Application 1 and
Application 2) that access the Thin Clones
A thin clone is displayed in the LUNs page. The page shows the details and
properties for the clone.
You can expand a thin clone by selecting the clone, then selecting the View/Edit
option. If a thin clone is created from a 100 GB Base LUN, the size of the thin clone
can be later expanded.
All data services remain available on the parent resource after the creation of the
thin clone. Changes to the thin clone do not affect the source snapshot, because
the source snapshot is read-only.
With thin clones, users can make space-efficient copies of the production
environment. Thin clones use pointer-based technology, which means a thin clone
does not consume much space from the storage pool. The thin clones share the
space with the base resource, rather than allocating a copy of the source data,
which provides benefits to the user.
Users can also apply data services to thin clones. Data services include host I/O
limits, host access configuration, manual or scheduled snapshots, and replication.
With thin clone replication, a full clone is created on the target side, which is an
independent copy of the source LUN.
A maximum of 16 thin clones per Base LUN can be created. The combination of
snapshots and thin clones cannot exceed 256.
• Thin Clone operations: Users can create, refresh, view, modify, expand, and
delete a thin clone.
• Data Services: Most LUN data services can be applied to thin clones: host I/O
limits, host access configuration, manual/scheduled snapshots, and replication.
The use of Thin Clones is beneficial for the types of activities that are explained
here.
Thin Clones allow development and test personnel to work with real workloads and
use all data services that are associated with production storage resources without
interfering with production.
For parallel processing applications which span multiple servers the user can use
multiple Thin Clones of a single production dataset to achieve results more quickly.
An administrator can meet defined SLAs by using Thin Clones to maintain hot
backup copies of production systems. If there is corruption of the production
dataset, the user can immediately resume the read/write workload by using the
Thin Clones.
Thin Clones can also be used to build and deploy templates for identical or near-
identical environments.
• Development and test environments: Work with real workloads and all data
services that are associated with production storage with no effect on production.
• Any-Any Refresh: Not limited to refreshing from the base LUN only; any Thin
Clone can be refreshed from any eligible snapshot in the Base LUN family.
The Create operation uses a Base LUN to build the set of derivative snapshots,
and Thin Clones.
Refreshing a Thin Clone updates the Thin Clone’s data with data from a different
source snapshot. The new source snapshot must be related to the base LUN for
the existing Thin Clone. In addition, the snapshot must be read-only, and it must
have expiration policy or automatic deletion disabled.
This example shows that the user is refreshing Thin Clone3 with the contents of
source Snap1.
After the Thin Clone is refreshed, the existing data is removed and replaced with
the Snap1 data. There are no changes to the data services configured in the Thin
Clone, and if the Thin Clone has derivative snapshots they remain unchanged.
In this example, the source snapshot of the Thin Clone changes. So instead of
being Snap3, the source snapshot is now Snap1.
Observe that the original parent resource does not change when a Thin Clone is
refreshed to a different source snapshot. The new source snapshot comes from the
same base LUN.
Refreshing a Base LUN updates the LUNs data with data from any eligible
snapshot in the Base LUN family including a snapshot of a Thin Clone. The new
source snapshot must be related to the Base LUN family for the existing Thin
Clone. In addition, the snapshot must be read-only, and the retention policy must
be set to no automatic deletion.
This example shows the user refreshing LUN1 with the data from Snap3. When the
LUN is refreshed, the existing data is removed from LUN1 and replaced with the
data from Snap3.
There are no changes to the data services configured on the Thin Clone. If the Thin
Clone has derivative snapshots, the snapshots remain unchanged.
This page shows the Unisphere LUNs page with the example of a Base LUN and
its respective Thin Clones.
In the example, Base_LUN1 has two Clones that are taken at different times:
• The Base_LUN1 has an allocated percentage of 63.1.
• TC1OriginalData was a clone of the original Base_LUN1 and has an allocation
of 2.1 percent.
• TC2 AddedFiles was taken after adding files to the Base_LUN1.
For the bottom window, the Base_LUN1 has been selected and the Refresh option
is used to populate the base LUN.
In the top window, the SnapOriginalData resource has been selected. Note the
Attached and Auto-Delete options must display a No status state.
The bottom window shows that the results after the Base_LUN1 has been updated
with the SnapOriginalData snapshot. The properties of Base_LUN1 show that the
Allocated space is only 2.1% after the refresh.
Thin Clones
Dell Unity Snapshots and Thin Clones are fully supported with data reduction and
Advanced Deduplication. Snapshots and Thin Clones also benefit from the space
savings that are achieved on the source storage resource.
When writing to a Snapshot or Thin Clone, the I/O is subject to the same data
efficiency mechanism as the storage resource. Which efficiency algorithms are
applied depends on the Data Reduction and Advanced Deduplication settings of
the parent resource.
− Hard and soft limits set on the amount of disk space allowed for
consumption.
Dell EMC recommends that quotas are configured before the storage system
becomes active in a production environment. Quotas can be configured after a file
system is created.
Default quota settings can be configured for an environment where the same set of
limits are applied to many users.
These parameters can be configured from the Manage Quota Settings window:
• Quota policy: File size [default] or Blocks
• Soft limit
• Hard limit
• Grace period
The soft limit is a capacity threshold. When file usage exceeds the threshold, a
countdown timer begins. The timer, or grace period, continues to count down as
long as the soft limit is exceeded. However, data can still be written to the file
system. If the soft limit remains exceeded and the grace period expires, no new
data may be added to the particular directory. Users associated with the quota are
also prohibited from writing new data. When the capacity is reduced beneath the
soft limit before the grace period expires, access to the file system is allowed again.
The grace period can be limited by days, hours and minutes, or unlimited. When
the grace period is unlimited, data can be written to file system until the quota hard
limit is reached.
A hard limit is also set for each quota configured. When the hard limit is reached,
no new data can be added to the file system or directory. The quota must be
increased, or data must be removed from the file system before more data can be
added.
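The enforcement logic can be modeled as follows; this is an illustrative sketch of the
rules described above, not the array's implementation.

```python
from datetime import datetime, timedelta

def write_allowed(usage_gb, soft_gb, hard_gb, grace, soft_exceeded_since, now):
    """Writes are denied at the hard limit, allowed below the soft limit, and
    allowed above the soft limit only while the grace period has not expired."""
    if usage_gb >= hard_gb:
        return False
    if usage_gb > soft_gb and soft_exceeded_since is not None:
        return now - soft_exceeded_since <= grace
    return True

now = datetime(2024, 5, 8, 12, 0)
grace = timedelta(days=1)
print(write_allowed(21, 20, 25, grace, now - timedelta(hours=6), now))  # True
print(write_allowed(21, 20, 25, grace, now - timedelta(days=2), now))   # False
print(write_allowed(25, 20, 25, grace, now - timedelta(hours=1), now))  # False
```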
Quota Usage
File system quotas can track and report usage of a file system.
Quota usage diagram: a file system with a soft limit, a hard limit, and a one-day
grace period
In this example, the user crosses the 20 GB soft limit. The storage administrator
receives an alert in Unisphere stating that the soft quota for this user has been
crossed.
The Grace Period of one day begins to count down. Users are still able to add data
to the file system. Before the expiration of the Grace Period, file system usage
must be less than the soft limit.
When the Grace Period is reached and the usage is still over the soft limit, the
system issues a warning. The storage administrator receives a notification of the
event.
The transfer of more data to the file system is interrupted until file system usage is
less than the allowed soft limit.
Quota usage diagram: the block hard quota is reached when usage exceeds the 25 GB hard
limit (20 GB soft limit, one-day grace period)
If the Grace Period has not expired and data remains being written to the file
system, eventually the Hard Limit is reached.
When the hard limit is reached, users can no longer add data to the file system and
the storage administrator receives a notification.
tt. Dell Unity XT systems support file system quotas which enable storage
administrators to track and limit usage of a file system.
uu. Quota limits can be designated for users, a directory tree, or users within a
quota tree.
vv. Quota policy can be configured to determine usage per File Size (the default),
or Blocks.
ww. The policies use hard and soft limits set on the amount of disk space
allowed for consumption.
For more information, see the Dell EMC Unity: Data Reduction,
Dell EMC Unity: FAST Technology Overview, Dell EMC Unity:
Snapshots and Thin Clones A Detailed Review, and Dell EMC
Unity: NAS Capabilities on the Dell Technologies Info Hub.
The local LUN Move is a native feature of Unity XT to move LUNs within a single
physical or virtual Unity XT system. It moves LUNs between different pools within
the system. Or it can be used to move LUNs within the same pool of a system. The
move operation is transparent to the host and has minimal performance impact on
data access.
There are several use cases for the feature. It provides load balancing between
pools. For example, if one pool is reaching capacity, the feature can be used to
move LUNs to a pool that has more capacity. It can also be used to change the
storage characteristics for a LUN. For example, a LUN could be moved between
pools composed of different disk types and RAID schemes. The feature can also
be used to convert a thin LUN to a thick LUN, or a thick LUN to a thin LUN.
Another use of the feature is for data reduction of an existing thin LUN. For
example, an existing thin LUN without Data Reduction enabled can be moved to an
All-Flash pool where data reduction can be enabled. The data reduction process is
invoked during the move operation resulting in data reduction savings on the
existing LUN data.
When a LUN is moved, the moved LUN retains its LUN attributes and some extra
LUN feature configurations. For example, if a LUN is moved that is configured with
snapshots, its existing Snapshot schedule is moved. But any existing snapshots
are not moved. The system deletes any existing snapshots of the LUN after the
move completes.
Also, if Replication is configured on the LUN, the system prevents the LUN move
operation. The LUN replication must be deleted before the LUN move operation is
permitted. After the LUN move operation completes, a reconfiguration of the LUN
replication is permitted. The graphic details the LUN attributes that are and are not
retained after the move.
Attributes such as the LUN's SP ownership and Unique ID are retained after the move;
the Consistency Group container, snapshots, and the replication configuration are not.
Before using the local LUN Move feature, a host has access to the LUN created from
a specific pool in a normal fashion. The following series of slides illustrates the
process of the local LUN Move operation.
The local LUN Move feature uses Transparent Data Transfer (TDX) technology. It
is a transparent data copy engine that is multithreaded and supports online data
transfers. The data transfer is designed so its impact to host access performance is
minimal. TDX makes the LUN move operation transparent to a host.
When a move operation is initiated on a LUN, the move operation uses TDX and
the move begins.
As TDX transfers the data to move the LUN, the move operation is transparent to
the host. Even though TDX is transferring data to move the LUN, the host still has
access to the whole LUN as a single entity.
Move Completes
Eventually TDX transfers all of the data, and the LUN move completes.
The original LUN no longer exists, and the host has access to the moved LUN in its
normal fashion.
1. Host access before move
2. Move uses Transparent Data Transfer (TDX) technology
3. Start LUN Move operation
4. Move begins, transparent to host
5. Move completes
6. Host access after move
• Storage resources
− Standalone LUNs
− LUNs within a Consistency Group
− VMware VMFS datastore LUNs
− Not thin clones or LUNs that have derived thin clones
• To successfully move a LUN, it cannot be:
− In a replication session
− Expanding/shrinking
− Restoring from snapshot
− Being imported from VNX
− Offline/requiring recovery
• System cannot be upgraded during a LUN move session
The local LUN Move feature capabilities are the same for all physical Dell Unity XT
models and the Dell UnityVSA systems.
The local LUN Move feature has a “Set and Forget” configuration. There is nothing
to preconfigure to perform a LUN Move operation. To initiate the move, select
LUNs (1), select the storage resource to move (2), and then select More Actions >
Move (3). Next, select a Session Priority from the drop-down (4). The priority
defines how the move session is treated in priority compared to production data
access, thus affecting the time to complete the move session. With an Idle priority
selection, the move runs during production I/O idle time. A High selection runs the
move session as fast as possible. The next session configuration selection is the
Pool. It defines where the storage resource is moved to. Its drop-down list is
populated with pools available on the system. Another configuration for the move
session is the Thin check box option. It is checked by default and can be cleared to
make the moved resource thick provisioned. The Data Reduction option is exposed
if the selected pool is an All-Flash pool. The data moved is processed through the
Data Reduction algorithms.
After the move is started, the operation runs automatically. The operation then
continues to completion and cannot be paused or resumed. When a session is in
progress it can be canceled.
The move is transparent to the host. There are no actions or tasks needed on the
host for the move. After the move is completed, the session is automatically
cutover and the host data access to the LUN continues normally.
When a move session is started, its progress can be monitored from a few
locations.
From the LUNs page, with the LUN selected that is being moved, the right side
pane displays move session information. The move Status and Progress are
displayed. The Move Session State, its Transfer Rate, and Priority are also shown.
From the General tab of the LUN Properties page, the same information is
displayed. The page does provide the added ability to edit the session Priority
setting.
A LUN Cancel Move operation cancels a move session; it is only available while a
move session is ongoing. Select More Actions > Cancel Move (1). The operation
then returns any moved data to the original location (2), where you can access it
normally from its pool.
The local NAS Server mobility feature moves a NAS Server between the Dell Unity
XT Storage Processors. The move effectively changes the ownership of the NAS
Server to the peer Storage Processor. The entire configuration, file systems,
services, and features of the NAS Server remain the same, it is only the Storage
Processor ownership that is changed.
The move is transparent to NFS clients and the SMB3 clients configured with
Continuous Availability. Clients running either SMB2, or SMB3 without CA are
disrupted due to their protocols’ stateful nature. However, most current client
operating systems will automatically retry the connection and reconnect to the NAS
Server after the move is complete.
The NAS Server mobility feature can be used for balancing the load across the Dell
Unity XT system Storage Processors. It can also be used to provide data access
during maintenance events. For example, during network connectivity maintenance
for a Storage Processor, the NAS Server could be moved to the peer SP allowing
continued client access to data.
The local NAS Server mobility feature supports moving a single NAS Server at a
time. Multiple simultaneous moves of NAS Servers are not supported.
Only move a NAS Server that is in a healthy OK state. The system prevents
moving any NAS Server when its state would cause a problem being moved, such
as faulted or not accessible states.
A NAS Server that is a destination of a File Import session cannot be moved. The
NAS Server can only be moved after the File Import session completes.
If a NAS Server is moved that is actively running an NDMP job, the move stops the
job. After the NAS Server move completes, the NDMP job must be manually
restarted.
Then, from the properties page, select the peer SP for ownership. A confirmation
window is displayed stating the move disrupts running NDMP jobs. The message
also states the operation disrupts data access to clients other than NFS and SMB3
CA configured clients. After the confirmation is accepted and the NAS Server
configuration change is applied, the move operation runs in a “set and forget”
fashion. It has no pause, resume or cancel functions.
The status of the NAS Server move operation is displayed in several locations
within Unisphere.
This demo covers the local NAS Server mobility feature. A NAS Server is moved to
the peer Storage Processor.
mmm. The local NAS Server mobility feature moves a NAS Server between
the Dell Unity Storage Processors.
– The operation effectively changes the ownership of the NAS Server to
the peer Storage Processor.
– The move is transparent to NFS clients and the SMB3 clients configured
with Continuous Availability.
nnn. The NAS Server mobility feature can be used for balancing the load
across the Dell Unity system Storage Processors.
ooo. The local NAS Server mobility feature supports moving only a single
NAS Server at a time. The NAS server must be in a healthy OK state.
For more information, see the Dell EMC Unity Family Configuring
and managing LUNs, and Dell EMC Unity: NAS Capabilities on
the Dell EMC Unity Family Technical Documentation portal at Dell
Technologies site.
Snapshots Overview
The Snapshots feature is enabled with the Local Copies license which enables
space efficient point-in-time snapshots of storage resources for block, file, and
VMware "data" vVols. The snap images can be read-only or read/write and used in
various ways. They provide an effective form of local data protection. If data is
mistakenly deleted or corrupted, the production data can be restored from a
snapshot to a known point-in-time data state. Hosts access snapshot images for
data backup operations, data mining operations, application testing, or decision
analysis tasks. The upcoming slides detail the feature architecture, capabilities,
benefits, and specifics of its operations and uses.
Caution: Snapshots are not full copies of the original data. Dell
Technologies recommends that you do not rely on snapshots for
mirrors, disaster recovery, or high-availability tools. Snapshots of
storage resources are partially derived from the real-time data in the
relevant storage resource. If the primary storage becomes
inaccessible, snapshots can also become inaccessible (not readable).
Diagram: production data access and snapshot image access to a storage resource in its
parent pool
With Redirect on Write technology, when a snapshot is taken, the existing data on
the storage resource remains in place. The snapshot provides a point-in-time view
of the data. Production data access also uses this view to read existing data.
With Redirect on Write technology, when writes are made to the storage resource,
those writes are redirected. A new location is allocated as needed from the parent
pool in 256 MB slices. New writes are stored in 8 KB chunks on the newly allocated
slice. Reads of the new writes are serviced from this new location as well.
If the snapshot is writable, any writes are handled in a similar manner. Slice space
is allocated from the parent pool, and the writes are redirected in 8 KB chunks to
the new space. Reads of newly written data are also serviced from the new space.
Storage space is needed in the pool to support snapshots as slices are allocated
for redirected writes.
Because of the on-demand slice allocation from the pool, snapped thick file
systems transition to thin file system performance characteristics.
The table defines various combined snapshot capabilities for each of the Dell Unity
XT models. These combined limits have an interaction between each other. For
example, if a model 380 system had 20 LUNs and 20 file systems, each LUN and
file system could not have 256 user snapshots. The number of user snapshots
would exceed the maximum of 8000 for the system.
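The arithmetic behind the example is simple to check; the 8,000 figure is the combined
user snapshot maximum quoted for the model in the example.

```python
luns, file_systems, snaps_each = 20, 20, 256
system_max = 8000

requested = (luns + file_systems) * snaps_each
print(requested)                 # 10240 snapshots if every resource used 256
print(requested <= system_max)   # False: the combined system maximum is exceeded
```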
Snapshots consume space from the parent storage pool that the storage resource
uses. To prevent the pool from running out of space due to snapshots, there are
two options for automatically deleting the oldest snapshots. These options can be
set from the Properties page of the pool and selecting the Snapshot Settings tab.
One option triggers the automatic deletion that is based on the total pool space
consumed. Another option triggers the automatic deletion that is based on the total
snapshot space consumed. Either option can be used singularly, or they can both
be used in combination. Both options allow the configuration of a space threshold
value to start the deletion and a space threshold value for stopping the automatic
deletion. When a pool is created, the Total pool consumption option is set by
default. The option cannot be changed during pool creation but can be modified
after the pool is created. If both options are cleared, this setting disables the
automatic deletion of the oldest snapshots based on space that is used. Automatic
snapshot deletion is still configurable based on snapshot retention values.
Creating Snapshots
Snapshots are created on storage resources for block, file, and VMware. All are
created in a similar manner. For block, the snapshot is created on a LUN or a
group of LUNs within a Consistency Group. For file, the snapshot is configured on
a file system. For VMware, the storage resource is either going to be a LUN for a
VMFS datastore or a file system for an NFS datastore. When each of these storage
resources are created, the system provides a wizard for their creation. Each wizard
provides an option to automatically create snapshots on the storage resource.
Each resource snapshot creation is nearly identical to the other resources. For
storage resources already created, snapshots can be manually created for them
from their Properties page. As with the wizard, the snapshot creation from the
storage resource Properties page is nearly identical to the other resources.
More details on the snapshot creation within the block storage LUN creation wizard
and the file storage file system creation wizard are shown in separate topics. Each
topic also details the creation of manual snapshots from the LUN and file system
properties pages.
Snapshot Operations
The operations that can be performed on a snapshot differ based on the type of
storage resource the snapshot is on. Operations on LUN-based snapshots are
Restore, Attach to host, Detach from host, Copy, Refresh, and Replicate.
Operations on file system-based snapshots are Restore, Refresh, and Copy.
Snapshot Schedules
• Internally, Unity systems use the UTC time zone for time and scheduling.
− The operating system, logs, schedules, and other features all use UTC.
• When you connect to Unisphere, the times that are displayed are adjusted by
Unisphere to the time zone of the browser.
• When changing time on the system, or the timing on a feature, it is stored
internally in UTC format.
• By default, Unity systems do not consider Daylight Savings Time (DST).
When you create a schedule, the time to take the snapshot is stored
internally in UTC time.
Because snapshot schedules are stored internally in UTC time, the time that a
snapshot is taken is not adjusted when Daylight Savings Time begins or ends.
To continue having snapshots taken at 1:00 AM local time, the operator must edit
the schedule.
Beginning with Dell EMC Unity OE 5.1, users can enable Time Zone support.
• Newly created schedules adjust automatically if the local time zone implements
Daylight Saving Time.
• NOTE: Schedules that were created before upgrading to OE 5.1 must be edited
one time to update the snapshot time.
Before enabling Schedule Time Zone support, the schedule appears as shown.
Beginning with Dell EMC Unity XT OE 5.1, the system supports a time zone option
to correct timing issues for snapshot schedules and asynchronous replication
throttling.
The setting applies to system defined and user created snapshot schedules. The
setting is not a system setting, and does not apply to other features such as logs or
FAST VP.
After upgrading to version 5.1, the schedule time zone is set to UTC Legacy. No
changes are made to schedules when upgrading.
If the user enables the time zone settings, the internal snapshot schedule is NOT
updated to the same absolute time. The user must check to see whether the
snapshot schedule must be updated after enabling time zone support.
To enable the feature, choose System Settings > Management > Schedule Time
Zone.
When Schedule Time Zone is enabled, the system updates the schedule with the
value that had been stored internally, in UTC time. The user must check the
schedule, and edit it if necessary.
In the example shown, the schedule must be edited to return the snapshot time to
its previous value, 1:00 AM.
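The underlying time zone behavior can be reproduced with standard library calls; the
example assumes a browser in the US Eastern time zone and a schedule originally created
for 1:00 AM local time.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

eastern, utc = ZoneInfo("America/New_York"), ZoneInfo("UTC")

# A 1:00 AM local schedule created in winter is stored internally as 06:00 UTC.
winter_local = datetime(2024, 1, 15, 1, 0, tzinfo=eastern)
print(winter_local.astimezone(utc).time())          # 06:00:00

# Replaying that same 06:00 UTC after DST begins fires at 2:00 AM local time.
summer_utc = datetime(2024, 7, 15, 6, 0, tzinfo=utc)
print(summer_utc.astimezone(eastern).time())        # 02:00:00
```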
Multiple snapshots of a LUN capture different point-in-time data states; multiple hosts
can attach to the snapshots for RO or RW access.
It is also possible to copy a snapshot. In this example, the four o’clock snapshot is
copied. Other than having a unique name, the copy is indistinguishable from the
source snapshot and both capture identical data states.
Multiple hosts can be attached to any specific LUN snapshot or multiple snapshots
within the tree. When a host is attached to a snapshot for access to its data, the
attach can be defined for read-only access or read/write access. In the example, a
host attaches to the three o’clock snapshot for read-only access and the snapshot
remains unmodified from its original snapped data state. A different host is
attached to the four o’clock snapshot copy for read/write access. By default, the
system creates a copy of the snapshot to preserve its original data state. The user
can optionally not create the snapshot copy. When the snap is read/write attached,
its data state is marked as modified from its source.
LUN snapshots can easily be created in several ways. Within the wizard to Create
LUNs, there is an option to automatically create snapshots for the LUN based on a
schedule. The wizard contains a drop-down list selection that has three different
system defined schedules to select from to create the LUN snapshots. There is
also a snapshot retention value that is associated with each of the three schedules.
A customized schedule can also be created for use. The scheduler has the
granularity to configure a snapshot frequency by the hour, day, or week. A
snapshot retention policy can also be defined.
Note: Configuration fields that are annotated with a red asterisk are required.
For existing LUNs, snapshots are easily created from the LUN Properties page by
selecting the Snapshots tab. To create a snapshot of the LUN, select the + icon.
The snapshot must be configured with a name; by default the system provides a
name having a year, month, day, hour, minute, second format. Customized names
can also be configured. A Description field for the snapshot can be annotated as
an option. One of three Retention Policies must be configured. The default
retention configuration is the Pool Automatic Deletion Policy. It automatically
deletes the snapshot if pool space reaches a specified capacity threshold that is
defined on the pool. A customized retention time can alternately be selected and
configured for snapshot deletion on a specified calendar day and time. The other
alternative is to select the No Automatic Deletion option if the snapshot must be
kept for an undetermined amount of time.
Initial State
The Snapshot Restore operation rolls back the storage resource to the point-in-
time data state that the snapshot captures. In this restore example, a LUN is at a
five o’clock data state. It is restored from a snapshot with a four o’clock data state.
Before performing a restore operation, detach any host attached to the LUN
snapshot being restored. Also ensure that all hosts have completed all read and
write operations to the LUN you want to restore. Finally, disconnect any host
accessing the LUN. This action may require disabling the host connection on the
host side.
Start Restore
Now the restore operation can be performed. From the four o’clock snapshot,
select the Restore operation. The system automatically creates a snapshot of the
current five o’clock data state of the LUN. This snapshot captures the current data
state of the LUN before the restoration operation begins.
Reconnect Hosts
The LUN is restored to the four o’clock data state of the snapshot. The hosts can
now be reconnected to the resources they were connected to before the restore
operation and resume normal operations.
Before the restore:
1. Detach hosts from LUN snapshots
2. Quiesce host I/O to LUN
Perform Restore:
1. Select snapshot Restore
To restore a LUN from a snapshot, access the Properties page for the LUN.
The first step is to select a snapshot to attach to. The next step is to select an
Access Type, either read-only or read/write. Then the host or hosts are selected to
be attached.
Next, the system optionally creates a copy of the snapshot if a read/write Access
Type was selected. The snapshot copy preserves the data state of the snapshot
before the attach. Finally, the selected host is attached to the snapshot with the
Access Type selected.
Perform Attach:
1. Select snapshot
2. Select Access Type Read-only or Read/write
3. Select hosts
4. System optionally creates copy of snapshot
5. Snapshot attached
To attach a host to a snapshot of a LUN, access the Properties page for the LUN.
The Snapshot Detach operation detaches a connected host from a LUN snapshot.
In this detach example, a secondary host is going to detach from the three o’clock
snapshot of the LUN.
Now the detach operation can be performed. From the three o’clock snapshot,
select the Detach from host operation.
The secondary host is detached from the three o’clock snapshot of the LUN.
To detach a host from a snapshot, first access the Properties page of the storage
resource.
The Snapshot Copy operation makes a copy of an existing snapshot that is either
attached or detached from a host. In this example, a copy of an existing four
o’clock snapshot is being made.
Snapshot Copied
A copy of the selected snapshot is made. The copy inherits the parent snapshot
data state of four o’clock and its retention policy.
To copy a snapshot of a LUN, first access the Properties page of the LUN.
Next, from the Snapshots tab, a snapshot is selected and the snapshot operation
Attach to host is performed.
Now tasks from the host must be completed. The host must discover the disk
device that the snapshot presents to it. After the discovery, the host can access the
snapshot as a disk device.
File system
File system snapshots can easily be created in several ways. Within the wizard to
Create a File System, there is an option to automatically create snapshots for the
file system based on a schedule. File system snapshots that are created with a
schedule are read-only. The wizard contains a drop-down list selection that has
three different system defined schedules to select from to create the file system
snapshots. Each schedule includes a snapshot retention value. A customized
schedule can also be created for use. The scheduler includes the granularity to
configure a snapshot frequency by the hour, day, or week. A snapshot retention
policy can also be defined. Configuration fields that are annotated with a red
asterisk are required.
Snapshots of existing file systems are easily created from the file system
Properties page by selecting the Snapshots tab. A manually created file system
snapshot can be read-only or read/write. To create a snapshot of the file system,
select the + icon. The snapshot must be configured with a Name. By default the
system provides a name that is based on the creation time in a year, month, day,
hour, minute, second format. Customized names can also be configured. A
Description field for the snapshot can optionally be configured. One of three
Retention Policies must be configured. The default retention configuration is the
Pool Automatic Deletion Policy. That policy automatically deletes the snapshot if
pool space reaches a specified capacity threshold defined on the pool. A
customized Retention Time can alternately be selected and configured for
snapshot deletion on a specified calendar day and time within a year of creation.
The other alternative is to select the No Automatic Deletion option if the snapshot
must be kept for an undetermined amount of time. The Access Type section
requires configuration by selecting one of the two options for the snapshot; read-
only or read/write.
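The same manual snapshot can also be created outside of Unisphere. The following Python sketch creates a file system snapshot through the Unisphere Management REST API; the endpoint and attribute names follow the documented conventions for the snap resource but should be confirmed in the REST API reference, and the management address, credentials, and storage resource ID are placeholder values.

# Sketch: create a snapshot of a file system's storage resource over REST.
import requests

UNITY = "https://192.168.1.230"
AUTH = ("admin", "Password123!")
HEADERS = {"X-EMC-REST-CLIENT": "true", "Content-Type": "application/json"}

session = requests.Session()
session.auth = AUTH
session.headers.update(HEADERS)
session.verify = False  # lab only; use a trusted certificate in production

# Any GET returns the EMC-CSRF-TOKEN header that POST requests require.
resp = session.get(f"{UNITY}/api/types/loginSessionInfo/instances")
session.headers["EMC-CSRF-TOKEN"] = resp.headers["EMC-CSRF-TOKEN"]

# Create the snapshot against the file system's storage resource.
# Retention settings can also be supplied; see the snap resource reference
# for the exact attribute names.
body = {
    "storageResource": {"id": "res_5"},   # placeholder resource ID
    "name": "fs01_manual_snap",
    "description": "Manual snapshot before maintenance",
}
r = session.post(f"{UNITY}/api/types/snap/instances", json=body)
r.raise_for_status()
print("Created snapshot:", r.json()["content"]["id"])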
Initial State
The Snapshot Restore operation for a file system is similar to the restore operation
of a LUN. It rolls back the file system to a point-in-time data state that a read-only
or read/write snapshot captures. This example restores a file system from a
snapshot. The file system is at a five o’clock data state and is restored from a read-
only snapshot with a four o’clock data state.
Disconnect Clients
Quiesce I/O
Before performing a restore operation, disconnect any clients connected to the file system snapshot being restored. Also quiesce I/O to the file system being restored. Clients can remain connected to the file system but should close any opened files.
Now the Restore operation can be performed. From the four o’clock snapshot,
select the Restore operation.
Current Data State Snapped
The system automatically creates a snapshot of the current five o’clock data state
of the file system. It captures the current data state of the file system before the
restoration operation begins.
Connections and I/O Resumed
The file system is restored to the four o’clock data state of the snapshot. The
connections and I/O to the resources can now be resumed for normal operations.
To restore a file system from a snapshot, access the Properties page for the
snapshot.
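Equivalently, the restore can be started programmatically. The sketch below posts the restore action for a snapshot through the REST API; the action path follows the documented /api/instances/<type>/<id>/action/<operation> pattern, and the address, credentials, and snapshot ID are placeholders to replace with real values.

# Sketch: roll a file system back to a snapshot with the REST restore action.
import requests

UNITY = "https://192.168.1.230"
AUTH = ("admin", "Password123!")

s = requests.Session()
s.auth = AUTH
s.headers.update({"X-EMC-REST-CLIENT": "true", "Content-Type": "application/json"})
s.verify = False

# Obtain the CSRF token required for POST requests.
token = s.get(f"{UNITY}/api/types/loginSessionInfo/instances").headers["EMC-CSRF-TOKEN"]
s.headers["EMC-CSRF-TOKEN"] = token

# Restore from the selected snapshot. As described above, the system first
# captures the current data state before the rollback begins.
snap_id = "38654705676"   # placeholder snapshot ID
resp = s.post(f"{UNITY}/api/instances/snap/{snap_id}/action/restore", json={})
resp.raise_for_status()
print("Restore accepted, HTTP status:", resp.status_code)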
Initial State
The Snapshot Copy operation makes a copy of an existing file system snapshot that is either read-only or read/write, shared or unshared. In this example, a copy of an existing four o’clock read-only snapshot is being made.
Select Snapshot to Copy
Copy Created
The snapshot copy is created and is read/write. It also inherits the parent snapshot
data state of four o’clock and its retention policy.
To copy a snapshot of a file system, first access the Properties page of that file
system.
Initial State
On the storage system, an NFS and/or SMB share must be configured on the
read/write snapshot of the file system. This task is completed from their respective
pages.
Now tasks from the client must be completed. The client must be connected to the
NFS/SMB share of the snapshot. After connection to the share, the client can
access the snapshot resource.
Initial State
The first task for an NFS client is to connect to an NFS share on the file system.
Access Snapshot
Similarly, the first task for an SMB client is to connect to an SMB share on the file
system.
Access to the read-only snapshot is established by the SMB client accessing the
Previous Versions tab of the SMB share. It redirects the client to the point-in-time
view that the read-only snapshot captures.
Through CVFS
The read-only snapshot is exposed to the clients through the CVFS mechanism.
Therefore the clients can directly recover data from the snapshot without any
administrator intervention. For example, if a user either corrupted or deleted a file
by mistake, that user could directly access the read-only snapshot. Then from the
snapshot the user can get an earlier version of the file and copy it to the file system
for recovery.
NFS client tasks:
1. Connect to the file system NFS share
2. Access the snapshot hidden .ckpt data path
SMB client tasks:
1. Connect to the file system SMB share
2. Access the snapshot Previous Versions tab
Clients can recover data directly from the snapshot via CVFS.
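From the client side, recovering a file from a read-only snapshot is an ordinary copy out of the hidden .ckpt path. The Python sketch below assumes an NFS client that already has the file system mounted; the mount point, snapshot directory name, and file names are illustrative only.

# Sketch: recover one file from a read-only snapshot exposed through the
# hidden .ckpt path of a mounted NFS file system. All paths are illustrative.
import shutil
from pathlib import Path

mount_point = Path("/mnt/fs01")          # assumed NFS mount of the file system
snap_dir = mount_point / ".ckpt"         # hidden checkpoint (CVFS) path

# List the point-in-time views that are currently exposed.
for view in sorted(snap_dir.iterdir()):
    print("Available snapshot view:", view.name)

# Copy an earlier version of a file back into the production file system.
recovered = snap_dir / "2024-05-01_16.00.00" / "reports" / "budget.xlsx"
shutil.copy2(recovered, mount_point / "reports" / "budget.xlsx")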
Dell Unity XT storage systems support creating snapshots of “data” (.vmdk) vVols
in Unisphere, Unisphere CLI, or REST API.
vVol snapshot restoration is also supported using Unisphere, Unisphere CLI, and
REST API.
• Snapshots that are created in vSphere using the VASA Provider can be
restored using vSphere, Unisphere, Unisphere CLI, or REST API.
• Snapshots that are created using Unisphere, Unisphere CLI, or REST API
cannot be restored using vSphere.
• Virtual Machines should be powered off before issuing a restore operation.
From Unisphere, you can create snapshots only for "data" vVols. Check the box to
select a data vVol. Then click the pencil icon to edit the selected virtual volume.
Select Snapshots
Create Snapshot
Initially, no snapshots exist for the vVol. Click the plus sign + to create a snapshot.
The system automatically generates a name for the snapshot. You can accept the
name, or enter a different name for the snapshot. Click OK to create the snapshot.
Select Snapshots
1. Select the Snapshots tab to view available snapshots for the data vVol.
Restore Snapshot
2. Check the box to select the wanted snapshot. From the More Actions drop-down menu, select Restore.
Confirm Operation
3. Verify that the virtual machine is powered off, and click Yes to restore the vVol from the snapshot.
- The Restore operation rolls back the LUN or Consistency Group LUN
members to the point-in-time data state captured in the snapshot.
- The Snapshot Attach to host operation grants a host a defined access
level to a LUN snapshot.
- The Snapshot Detach from host operation revokes the host access to the LUN snapshot.
- The Copy operation makes a replica of an existing LUN snapshot which
inherits the parent snapshot data state and retention policy.
• File System Snapshots
− Dell Unity XT Snapshots provide a mechanism for capturing a snapshot of file systems.
- Multiple snapshots of a file system capture different point-in-time data states.
- Snapshots of file systems can be scheduled or manually created.
− Snapshots of a file system can be created either as read-only or read/write, and are accessed in different manners.
- Read/write snapshots of a file system can be shared by creating an NFS or SMB share.
- The read-only file system snapshot is exposed to the client through a checkpoint virtual file system (CVFS) mechanism that Snapshots provides.
− Dell Unity XT system operations on file system-based snapshots are Restore and Copy.
- The Restore operation rolls back the file system to the point-in-time data state captured in the read-only or read/write snapshot.
- The Copy operation makes a replica of an existing file system snapshot that is either read-only or read/write, shared or unshared.
• Native vVol Snapshots
− Dell EMC Unity XT vVol snapshot restore operations are supported using Unisphere, Unisphere CLI, REST API, or the vCenter Server.
- vVol snapshots created in the vSphere environment can be restored using either vCenter Server or the Dell Unity XT management interfaces.
- vVol snapshots created using the Dell EMC Unity XT management interfaces cannot be restored using vCenter Server.
For more information, see the Dell EMC Unity Family Configuring
and managing LUNs, Dell EMC Unity: NAS Capabilities and Dell
EMC Unity Family Configuring vVols on the Dell EMC Unity
Family Technical Documentation portal at Dell Technologies site.
Replication Overview
This topic provides an overview of the Dell Unity XT Replication feature. The
architectures of Asynchronous and Synchronous Replications are discussed and
the benefits and capabilities are listed.
− Long-distance replication
o Does not impact latency
− DR solution using Recovery Point Objective
o Data amount measured in units of time
− Available on physical and virtual systems
Dell Unity XT Replication is a data protection feature that replicates storage
resources to create synchronized redundant data. With replication, it is possible to
replicate storage resources within the same system or to a remote system. The
replication feature is included in the Unity XT licensing at no additional cost.
Asynchronous replication (Pool A to Pool B):
− LUNs
− Consistency Groups
− Thin Clones
− VMware vStorage VMFS datastores
− VMware NFS datastores
− File systems
− NAS servers
Asynchronous and Synchronous replication (Site A to Site B):
− LUNs
− Consistency Groups
− Thin Clones
Because file system access depends on a NAS server, to remotely replicate a file
system, the associated NAS server must be replicated first.
Site A Site B
When a NAS server is replicated, any file systems that are associated with the NAS
server are also replicated.
The system creates separate replication sessions; a session for the NAS server
and a session for each associated file system.
Synchronous replication architecture uses Write Intent Logs (WIL) on each of the
systems that are involved in the replication. These logs are internal structures and
are created automatically on each system. There is a WIL for SPA and one for SPB
on each system. During normal operations, these logs are used to maintain
synchronization of the source and the replica. The Write Intent Logs hold fracture
logs, designed to track changes to the source storage resource should the
destination storage resource become unreachable. When the destination becomes reachable again, the fracture log is used to synchronize the accumulated changes to the destination.
Replication between the two systems can be configured as One-Directional or Bi-Directional.
Initial State
The synchronous replication of a storage resource has an initial process, and then
an ongoing synchronization process. The starting point is a data populated storage
resource on the source system that is available to production and has a constantly
changing data state.
The first step of the initial process for synchronous replication is to create a storage
resource of the exact same capacity on the destination system. The system creates
the destination storage resource automatically. The new destination resource
contains no data.
2-Create WILs
In the next step, SPA, and SPB Write Intent Logs are automatically created on the
source and destination systems.
3-Initial Copy
Initial synchronization of the source data is then performed. It copies all the existing
data from the source to the destination. The source resource is available to
production during the initial synchronization, but the destination is unusable until
the synchronization completes.
4-Host Writes to Source
During normal operations, host writes continue to arrive at the source resource. Each write is applied to the source and sent to the destination system over the synchronous replication connection.
6-Dest. Ack.
After the destination system has verified the integrity of the data write, it sends an
acknowledgement back to the source system.
At that point, the source system sends the acknowledgement of the write operation
back to the host. The data state is synchronized between the source and
destination. Should recovery be needed from the destination, its RPO is zero.
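The ordering above is what gives synchronous replication its zero RPO, and it can be summarized with a few lines of illustrative code. The sketch below is not Dell Unity code; it only models the rule that a host write is not acknowledged until the destination has confirmed its copy.

# Illustration only: the acknowledgment order of a synchronous write.
class Resource:
    """Toy stand-in for a replicated storage resource."""
    def __init__(self):
        self.blocks = []

    def apply(self, data):
        self.blocks.append(data)


def synchronous_write(data, source, destination):
    # The write lands on the source resource and is immediately mirrored
    # to the destination. Only after the destination acknowledges is the
    # write acknowledged back to the host, so the two copies never diverge.
    source.apply(data)
    destination.apply(data)
    destination_acknowledged = True
    if destination_acknowledged:
        return "write acknowledged to host"


src, dst = Resource(), Resource()
print(synchronous_write(b"block-42", src, dst))
print("Source and destination in sync:", src.blocks == dst.blocks)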
Movie:
Synchronous replication session and sync states include: Active, In Sync, Syncing, Paused, Out of Sync, Failed Over, and Consistent.
Synchronous replications have states for describing the session and its associated
synchronization.
An Active session state indicates normal operations and the source and
destination are In Sync.
A Paused session state indicates that the replication has been stopped and has
the sync state of Consistent. This state indicates that the WIL is used to perform
synchronization of the destination.
A Failed Over session has one of two sync states. It can show an Inconsistent
state meaning the sync state was not In Sync or Consistent before the Failover. If
the sync state was In Sync before the Failover, it will be Out of Sync after session
Failover.
A Syncing state indicates that the system is synchronizing the destination from one of its other states. For example, if the system has been recovered from the Lost Sync Communications state.
The table details the various maximum capabilities for synchronous replication that
are based on specific Dell Unity XT models. The maximum replication sessions
include all replication sessions on the system, which include both synchronous and
asynchronous replication sessions, local or remote. The replication destination
storage resources count towards the system maximums, even though they are not
host accessible as a destination image. In Dell Unity XT, only one replication connection that is used for synchronous replication (or for both synchronous and asynchronous replication) can be created. Only one pair of systems can replicate synchronously with each other.
The steps for creating remote replication sessions are different depending upon the
replication mode; either asynchronous or synchronous. Synchronous remote
replication steps are covered here. Before a synchronous replication session is
created, communications must be established between the replicating systems.
The first step is to identify the Synchronous FC Ports on the source and destination
systems for use to establish FC connectivity. This connectivity forms the
connections that carry the data between the two replicating systems.
Next, create Replication Interfaces on both the source and destination systems.
The interfaces must be created on the Sync Replication Management Ports and
form a portion of the management channel for replication.
Then create a Replication Connection between the two systems; it establishes the management channel and defines the replication mode. After the connection is created, verify it can be seen on the peer system.
With communications in place, a replication session can be defined for a storage resource. The session can be selected during the resource creation wizard. Or if the storage resource already exists, it can be selected from the storage resource Properties page.
Next, configure the replication settings which define the replication mode and
destination system. The system automatically creates a destination resource and
the Write Intent Logs on both systems.
Before you create a synchronous remote replication session, you must configure active communications channels between the two systems. The active communications channels are different for the two replication modes; the synchronous channels are shown here.
1. Synchronous FC connections: FC-based connectivity between the source and destination SPs; carries the replicated data.
2. Replication Interfaces [sync]: created on the Sync Replication Management Ports.
3. Replication Connection: defines the mode of replication and the channel for management.
One of several Fibre Channel ports on each SP of the Dell Unity XT system is configured and used for synchronous replication. If available, the system uses Fibre Channel Port 4 of SPA and SPB. If that port is not available, the system uses Fibre Channel Port 0 of I/O module 0. If that is not available, Port 0 of I/O module 1 is used.
After the Synchronous FC Ports on the replicating systems are verified, the Fibre
Channel connectivity is established between the corresponding SP ports on each
system. Direct connect or zoned fabric connectivity is supported.
Although the Synchronous FC ports can also support host connectivity, Dell
recommends that they be dedicated to synchronous replication.
Unisphere
In Unisphere, navigate to the System View page for the rear view of the DPE to
identify the Synchronous FC Ports. In the example, the SPA FC Port 4 is selected
and its Replication Capability lists Sync replication.
UEMCLI
Ordering of SPA/SPB
Synchronous FC Ports:
1. FC Port 4
2. Module 0 FC Port 0
3. Module 1 FC Port 0
After the Replication Connection between systems has been created, the
connection is verified from the peer system using the Verify and Update option.
This option is also used to update Replication Connections if anything has been
modified with the connection or the interfaces. The updated connection status is
displayed.
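The same verification can be scripted. The sketch below reads the configured replication connections (remote systems) through the REST API; the remoteSystem type is part of the documented API, but the specific fields requested are assumptions to confirm against the REST API reference, and the address and credentials are placeholders.

# Sketch: list the configured replication connections (remote systems).
import requests

UNITY = "https://192.168.1.230"
AUTH = ("admin", "Password123!")

s = requests.Session()
s.auth = AUTH
s.headers["X-EMC-REST-CLIENT"] = "true"
s.verify = False

resp = s.get(f"{UNITY}/api/types/remoteSystem/instances",
             params={"fields": "name,managementAddress,connectionType"})
resp.raise_for_status()

for entry in resp.json()["entries"]:
    print(entry["content"])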
A synchronous replication session is created as part of the wizard that creates the
storage resource.
From the LUN creation wizard example, the Replication step within the wizard is
shown.
1. Checking the Enable Replication option exposes the Replication Mode and Replicate To fields required to configure the session.
2. The mode must be set to Synchronous to create a synchronous replication session.
3. A Destination Configuration link is also exposed to provide information concerning the destination resources used for the session.
The next step defines what resources on the destination system the replicated item
will use. The Name and Pool settings are required. More options are available
based on the destination system. In this example, the destination is a Hybrid model
that supports Data Reduction.
The wizard presents a Summary screen for the configured replication. In the
example, the session settings for the replication and destination are displayed.
The creation Results page displays the progress of the destination resource
creation and the session creation. When it is complete, the created sessions can
be viewed from the Replications page by selecting the Sessions tab.
The architecture for asynchronous local replication is shown here. The difference
between the local and remote architecture that was seen previously is that the local
architecture does not require the communications to a remote peer. The
management and data replication paths are all internal within the single system.
Otherwise, local replication uses Snapshots in the same manner. Local replication
uses source and destination objects on the two different pools similar to how
remote replication uses source and destination on two different systems.
The architecture for Dell Unity XT asynchronous remote replication is shown here.
Fundamental to remote replication is connectivity and communication between the
source and destination systems. A data connection is needed for carrying the
replicated data, and it is formed from Replication Interfaces. They are IP-based
connections that are established on each system. A communication channel is also
needed for management of the replication session. The management channel is
established on Replication Connections. It defines the management interfaces and
credentials for the source and destination systems.
Replication topologies: One-Directional, Bi-Directional, One-to-Many, and Many-to-One.
For the One-to-Many and Many-to-One replication topology examples, the One-
Directional replication is depicted. One-Directional replication is not a requirement
when configuring the One-to-Many and Many-to-One replication topologies. Each
individual Replication Connection can be used for the Bi-Directional replication
between systems, which enables more replication options than depicted here.
Again, a single storage resource can only be replicated to a single destination
storage resource.
Initial State
The asynchronous replication process is the same for local and remote replication.
Shown here is remote replication. The asynchronous replication of a storage
resource has an initial process and an ongoing synchronization process. The
starting point is a data populated storage resource on the source system that is
available to production and has a constantly changing data state.
The first step of the initial process for asynchronous replication is to create a
storage resource of the exact same capacity on the destination system. The
system automatically creates the destination storage resource. The destination
storage resource contains no data.
In the next step, corresponding snapshot pairs are created automatically on the
source and destination systems. They capture point-in-time data states of their
storage resource.
The first snapshot on the source system is used to perform an initial copy of its
point-in-time data state to the destination storage resource. This initial copy can
take a significant amount of time if the source storage resource contains a large
amount of existing data.
After the initial copy is complete, the first snapshot on the destination system is
updated. The data states that are captured on the first snapshots are now identical
and create a common base.
Because the source storage resource is constantly changing, its data state is no
longer consistent with the first snapshot point-in-time. In the synchronization
process, the second snapshot on the source system is updated, capturing the
current data state of the source.
Initial process: 1. Create destination resource; 2. Create rep snapshot pairs; 3. Source Rep Snap 1 copied to destination; 4. Destination Rep Snap 1 updated.
Synchronization process: 5. Source Rep Snap 2 updated.
A data difference or delta is calculated from the two source system snapshots. A
delta copy is made from the second snapshot to the destination storage resource.
After the copy is complete, the second snapshot on the destination system is
updated to form a common base with its corresponding source system snapshot.
The cycles of delta copies continue for the session by alternating between the first
and second snapshot pairs that are based on the RPO value. The first source
snapshot is updated, the data delta is calculated and copied to the destination. The
first destination snapshot is then updated forming a new common base. The cycle
repeats using the second snapshot pair upon the next RPO synchronization time.
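The alternating snapshot-pair cycle can be pictured with a small, purely illustrative simulation. The code below is not Dell Unity logic; it only models how the two replication snapshots take turns capturing the source data state so that just the delta is shipped at each RPO interval.

# Illustration only: the alternating snapshot-pair delta-copy cycle.
import itertools

source_data = {"blk1": "A", "blk2": "B"}   # production resource, keeps changing
destination_data = {}                      # destination resource

# Two snapshot pairs alternate between "common base" and "working" snapshot.
source_snaps = [dict(source_data), {}]
destination_snaps = [{}, {}]

# Initial copy: Rep Snap 1 data state goes to the destination, then the
# destination Rep Snap 1 is updated to form the common base.
destination_data.update(source_snaps[0])
destination_snaps[0] = dict(destination_data)

for cycle, pair in zip(range(1, 4), itertools.cycle([1, 0])):
    # Production writes keep arriving between RPO intervals.
    source_data[f"new{cycle}"] = f"write-{cycle}"

    # Refresh the working snapshot with the current source data state.
    source_snaps[pair] = dict(source_data)

    # The delta is whatever differs from the other (common base) snapshot.
    base = source_snaps[1 - pair]
    delta = {k: v for k, v in source_snaps[pair].items() if base.get(k) != v}

    # Ship only the delta, then refresh the matching destination snapshot
    # so this pair becomes the new common base for the next cycle.
    destination_data.update(delta)
    destination_snaps[pair] = dict(destination_data)
    print(f"cycle {cycle}: shipped {sorted(delta)}; in sync:",
          destination_data == source_data)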
Movie:
• Asynchronous replication of snapshots
− Primary storage resource must also be replicated
− Unattached block and read-only file snapshots
− Scheduled or user created snapshots
• Primary storage resource snapshots can be:
− LUNs
− Consistency Groups
− Thin Clones
− VMware VMFS datastores
− VMware NFS datastores
− File systems
• Replica retention policies can be customized
− Cost savings
− Compliance
With the Dell Unity XT replication feature, it is also possible to asynchronously
replicate a snapshot of a primary storage resource. They can be replicated either
locally or remotely. Also referred to as “Snapshot Shipping,” snapshot replication
requires that the primary storage resource is replicated. Block-based unattached
snapshots and file-based read-only snapshots are supported. The snapshots can
either be user created or created by a schedule.
The snapshot replicas can have retention policies that are applied to them that are
different from the source. The feature has multiple use case scenarios, one is cost
savings. With snapshots that are replicated to a lower-end system on the
destination site, the source snapshots can be deleted off the higher-end production
source site system. Therefore, saving capacity and its associated costs on a
production system. Another use case is for compliance needs. The retention policy
on the snapshot replicas can be tailored to any compliance needs, such as for
medical or governmental storage requirements.
Because the snapshot data does not change, it is only replicated a single time. After
that, it is not part of any RPO-based synchronization cycle that is needed for the
replicated primary storage resource.
The table details the various maximum capabilities for asynchronous replication
that are based on specific Dell Unity XT models. The maximum replication sessions
include all replication sessions on the system, which include both synchronous and
asynchronous replication sessions, local or remote. The replication destination
storage resources count towards the system maximums, even though they are not
host accessible as a destination image.
The steps for creating remote replication sessions are different depending upon the
replication mode; either asynchronous or synchronous. Asynchronous remote
replication steps are covered here. Before an asynchronous replication session can
be created, communications must be established between the replicating systems.
1. Create Replication Interfaces on both the source and destination systems. The interfaces form the connection for replicating the data between the systems.
2. Create a Replication Connection between the systems. This step is performed on either the source or the destination. It establishes the management channel for replication.
3. Verify the connection from the peer system. Communications are now in place for the creation of a replication session for a storage resource.
4. Define a session for a storage resource during the resource creation, or select a source for replication if the storage resource already exists.
5. Define the replication settings which include the replication mode, RPO, and the destination. The system automatically creates the destination resource and the Snapshot pairs on both systems.
6. Create the replication session.
Before you create an asynchronous remote replication session, you must configure active communications channels between the two systems. The active communication channels are different for the two replication modes; the asynchronous channels are shown here.
1. Replication Interfaces: IP-based connectivity between the source and destination SPs; carries the replicated data.
2. Replication Connection: defines the mode of replication and the channel for management.
− The connection pairs together the Replication Interfaces between the source
and destination systems.
− An IP address and subnet mask must be provided for both SPs. Gateway
addressing is optional, and a VLAN ID configuration is also provided if
needed.
Replication Interfaces must be created on both of the replicating systems. The
creation of Replication Interfaces must be repeated on the peer system.
After the Replication Connection between systems has been created, the
connection is verified from the peer system using the Verify and Update option.
This option is also used to update Replication Connections if anything has been
modified with the connection or the interfaces. The updated connection status is
displayed.
From the NAS Server creation wizard example, the Replication step within the
wizard is shown.
1. Checking the Enable Asynchronous Replication option exposes the Replication Mode, RPO, and Replicate To fields required to configure the session.
− The mode must be set to Asynchronous to create an asynchronous replication session.
− To create a remote replication session, select the remote system from the Replicate To drop down. Select Local if configuring a local replication session.
2. A Destination Configuration link is also exposed to provide information concerning the destination resources used for the session.
− When selected, the Replicate all existing snapshots and Replicate scheduled snapshots options become available.
3. When the Reuse destination resource option is selected, the system automatically searches for a resource with the same name on the destination and replicates to it if found. If one does not exist, it creates a new destination resource.
The next step defines what resources on the destination system the replicated item
will use and how the replica is configured. For the NAS server example, the Name,
the Pool, and Storage Processor settings are required. By default, the system
configures the replica as close as possible to the source. The user can choose to
customize the replica configuration as needed.
In the NAS server example shown, the NAS server has an associated file system
and a separate replication session is created for it. The table details the destination
resources that are used for the file system. The user can select the file system and
edit its destination configuration to customize the resources that the replica uses.
The wizard presents a Summary screen for the configured replication session. In
the example, sessions for the NAS server and its associated file system are
configured for creation.
The creation Results page displays the progress of the destination resource
creation and the session creation. When it is complete, the created sessions can
be viewed from the Replications page by selecting the Sessions tab.
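Once sessions exist, they can also be listed programmatically. The sketch below queries the replicationSession collection through the REST API; the type name is documented, but the requested fields are assumptions to confirm in the REST API reference, and the address and credentials are placeholders.

# Sketch: list replication sessions after creation.
import requests

UNITY = "https://192.168.1.230"
AUTH = ("admin", "Password123!")

s = requests.Session()
s.auth = AUTH
s.headers["X-EMC-REST-CLIENT"] = "true"
s.verify = False

resp = s.get(f"{UNITY}/api/types/replicationSession/instances",
             params={"fields": "name,maxTimeOutOfSync,syncState"})
resp.raise_for_status()

for entry in resp.json()["entries"]:
    content = entry["content"]
    print(content.get("name"), "-", content.get("syncState"),
          "- RPO (min):", content.get("maxTimeOutOfSync"))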
Replication Operations
When replication is in place on systems, resources that are being replicated are
displayed on various Unisphere pages. Unisphere resource filtering provides a
method for administrators to identify system resources as replication source or
destination resources. The Source, Destination, and All resource filter buttons are
on Unisphere pages for administrators to filter the displayed resources. Block
storage LUNs and Consistency Groups pages include resource filter buttons. File
storage File Systems, NAS Servers, NFS Shares, and SMB Shares pages
include resource filter buttons. VMware storage Datastores and Datastore Shares
pages include resource filter buttons. The Replication Sessions page includes
resource filter buttons.
For resources being replicated locally, the Sessions page displays those sessions
in all three views. Click through each tab to see the filtered views. The
FASTVP_CG resource is being locally replicated and is seen in all views.
Source
With the Source button selected, the page displays only source resources.
Destination
With the Destination button selected, the page displays only destination
resources.
All
With the default All button selected, source, destination and any resources not
replicated are displayed.
System level replication operations are available for Pause, Resume, and Failover
of replicated resources. The system level operations are run on replication
sessions for specified replication connections to remote systems. If multiple
replication sessions exist, the system level operation impacts all sessions for the
specified remote systems to reduce discrete administrative tasks.
In Unisphere, the system level operations are available from the Replication
Connections page when a replication connection is selected.
System level Pause and Resume operations are performed from the Replication
Connections page.
• Select the remote system with replication sessions that you want to perform the
operation on.
Pause Operation
Pause Results
• Source and destination sessions for selected remote systems are paused.
Resume Operation
The Resume operation is performed in the same manner as the Pause operation. Only paused sessions support the Resume operation.
1. From More Actions, select the Resume operation.
2. If a connection supports asynchronous and synchronous replication, select the session type to perform the operation on.
Resume Results
• Source and destination sessions for selected remote systems are resumed.
The system level Failover operation is designed for unplanned failover. It is run
from the destination system. The single operation fails over all file replication
sessions to the destination system.
Failover Operation
Failover Results
• Replication sessions for the selected remote system are failed over.
Session Operations
The example compares operations from healthy asynchronous and synchronous sessions on their source and destination systems.
Planned failover operations are performed from the source system when both
systems are available.
• Planned failover would be run to test the DR solution or if the source site was
scheduled to be down for maintenance.
Unplanned failover operations are performed from the destination system when
the source system is not available.
• For example, if the source site is affected by a power outage or natural disaster.
The Failover operation would make data available from the destination site.
From the source, it is also possible to perform session Pause or Sync operations.
Because a NAS server has networking associated with it, when the server is
replicated its network configuration is also replicated.
During replication, the source NAS server interface is active and the destination
NAS server interface is not active. Having the source and destination NAS server
interfaces the same is fine for sites that share common networking.
For sites where the source and destination have different networking, it is important
to modify the network configuration of the destination NAS server. The modification
is needed to ensure correct NAS server operation in a failover event.
The modification is performed from the NAS server Properties page on the
destination system.
• Select the Override option and configure the destination NAS server for the
networking needs of the destination site.
Because the NAS server effectively changes its IP address when failed over,
clients may need to flush their DNS client cache. The client is then able to connect
to the NAS server when failed over.
Because of the dependence between a NAS server and its associated file systems,
certain NAS server replication session operations are also performed on its file
systems.
Operations grouped to NAS server level: Failover, Failover with sync, Failback, Pause, Resume.
Operations not grouped to NAS server level: Create, Sync, Delete, Modify.
In the example shown, if a failover operation is performed on the NAS server, its two associated file
systems also failover.
Grouped operations are only available to sessions that are in a healthy state.
• Grouped operations are prohibited if a NAS server is in the paused or error
state.
• Grouped operations skip any file system sessions that are in paused, error, or
non-replicated states.
The operations capable of being grouped are: Failover, Failover with sync,
Failback, Pause, and Resume.
The Create, Sync, Delete, and Modify replication operations are not grouped and
are performed discretely per session. Discrete operations are also permitted on file
system replication sessions.
Select Action
The process starts with issuing the Failover with sync operation from site A which
is the primary production site.
A synchronization from the site A object to the site B object happens next.
Pause Replication
After the sync process completes, the replication session is then paused.
The site B object is then made available for access to complete the operation.
Select Action
The process starts with issuing the Failover operation from site A which is the
primary production site.
The ongoing synchronous replication session synchronizes the data state of the
site B object to the site A object.
Reverse/Restart Session
The site B object is then made available for access to complete the operation.
Unplanned event: Primary production site unavailable
The primary production site becomes unavailable and all its operations cease. Data
is not available, and replication between the sites can no longer proceed.
Select Action
A Failover operation is issued from site B which is the secondary production site.
Pause Replication
The operation pauses the existing replication session so that the session does not
start again should site A become available.
The site B object is made available for production access to complete the
operation.
Replication resumes in reversed direction
The Site A replicated object must be available before the replication session can be
resumed.
Resume process:
1. Site A becomes available
2. Resume issued from site B
3. Paused session is reversed and restarted
The operation updates the site A object with any changes that may have been
made to the site B object during the failover. The replication session then resumes
in the reverse direction and returns to a normal state. For asynchronous file
replication sessions, there is an option available to perform a synchronization of the
site A data to site B. The option overwrites any changes that are made to site B
during the failover. After the overwrite synchronization, replication is then restarted
in the reverse direction; from site B to site A in this example.
The Resume operation is preferred over Failback in situations where large amounts
of production change have accumulated due to long session pauses.
Replication returns to the state before Failover
The site A replicated object must be available before the Failback operation can be
initiated on a session.
The operation removes access to the site B object and synchronizes the site A
object to the data state of the site B object.
The operation then enables access to the site A object for production.
Replication Restarts
Replication is restarted using the site A object as a source and the site B object as
a destination. This single operation returns the object’s replication state as it was
before the failover.
Failback process:
1. Site A becomes available
2. Failback issued from site B
3. Sync from site B to site A
4. Access to site A object allowed
5. Replication restarts from site A to site B
This demo covers the synchronous remote replication of a LUN, and the
synchronous remote replication of a NAS Server and file system.
Movie:
Replica Access
When block resources are replicated remotely, there are many benefits of being
able to access the replica data. One benefit of replica data access is that there is
no impact on the source system resources that are used for production. Another
positive aspect of accessing replica data is to have no impact on the existing
remote replication session. With replica access to data, backup and recovery of
data are possible. Testing and data mining are also possible with replica data
access. Having access to the replica data is also a valid way to test and validate
the remote replication DR solution.
Access to the replica block data on the destination system is not performed directly
to the replicated block resource. The remote replica is marked as a destination
image by the system and blocks read/write access to it. Access to the data is
accomplished through a user snapshot of the resource and attaching a host to the
snapshot. The snapshot can be a replicated user snapshot from the source. Or it
can be a user created snapshot of the replica resource on the destination system.
When file resources are replicated remotely, there are many benefits of being able
to access the replica data. One benefit of replica data access is that there is no
impact on the source system resources that are used for production. Another
positive aspect of accessing replica data is to have no impact on the existing
remote replication session. With replica data access, backup and recovery of data
are possible. Testing and data mining are also possible with replica data access.
Having access to the replica data is also a valid way to test and validate the remote
replication DR solution.
Access to the replica file data is not achieved as directly as access to the source
file resource. File data is accessed through the NAS server that is associated with
the file system data. But a replicated NAS server has its production interfaces
inactive. The recommended method of accessing replica file data is to have a user
snapshot of the file resource. Access of replica file data can be done several ways.
A user snapshot of the resource could be made on the source system and
replicated to the destination system for access. A Read-only or Read/Write user
snapshot can also be made from the replica resource on the destination.
Once the user snapshot is on the destination system, there are several methods for
gaining access to its data. One way is to create a Backup and Test IP interface on
the replica NAS server. If a Read/Write snap is made, an NFS client can access a
share that is configured on the snapshot through the replica NAS server Backup
and Test IP interface. Another method for replica file data access is to create a
Proxy NAS server on the destination system. The Proxy NAS server is created in
the same manner as any NAS server. It becomes a Proxy NAS server by running a
CLI command to associate it to the replica NAS server. The Proxy NAS server must
be created on the same SP as the replica NAS server. If the replica NAS server is
configured for SMB, the Proxy NAS server must also have the same SMB
configuration. Read-only administrative access to the replica data is provided
through the Proxy NAS server. SMB client Read/Write access to the replica data is
also supported with the Proxy NAS server. For Read/Write access, the user
snapshot must be Read/Write and have a share that is configured on the Proxy
NAS server with the share path that is configured to the Read/Write user snapshot.
Create a NAS Server on the destination system, then configure the proxy settings to the replicated NAS Server. The example configures a proxy NAS Server named "nas02_Proxy" for a replicated NAS Server named "nas02" and gives root access to an NFS client.
To create a proxy NAS server for accessing replica file data, a new NAS server is
created on the destination system. The new NAS server that is the proxy NAS
server must be created on the same storage processor as the replicated NAS
server. It must be configured with the same access protocols that are used on the
replicated NAS server. A similar multitenancy configuration is needed for the proxy
NAS server as for the replicated NAS server. A proxy NAS server can support file data access for multiple NAS servers.
The new NAS server is then configured with the proxy settings for the replicated
NAS server. This configuration is performed with the service command svc_nas
over a secure shell connection to the destination system. Use the svc_nas -proxy -help command to view the command syntax to configure a proxy NAS server. Use the svc_nas -proxy_share -help command to view the command syntax to configure an SMB share for proxy access to a file system on another NAS server. The example
configures a proxy NAS server that is named nas02_Proxy for a NAS server that is
named nas02 and gives a specific NFS client root access.
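Because the proxy association is made with the svc_nas service command over SSH, the step lends itself to light automation. The Python sketch below only opens the SSH session and prints the built-in help referenced above; the destination address, service account, and password are placeholders, and the exact proxy syntax should be taken from the -help output on the system itself.

# Sketch: connect over SSH and display the svc_nas proxy help text.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())   # lab convenience only
client.connect("192.168.1.231", username="service", password="service_password")

for command in ("svc_nas -proxy -help", "svc_nas -proxy_share -help"):
    stdin, stdout, stderr = client.exec_command(command)
    print(f"--- {command} ---")
    print(stdout.read().decode())

client.close()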
Appendix
The score can help a storage administrator spot where the most severe health
issues are, based on the five core factors (health categories). The area with the
highest risk to the system's health hurts its score until actions are taken towards
remediation.
These categories are not customer configurable but built into the CloudIQ software.
Data Protection: RPOs not being met, last snap not taken.
Inefficient provisioning
• Storage is managed in RAID Group units.
− If a storage administrator wants to add capacity, the administrator must add
a RAID Group.
− This process usually provides more capacity than needed at a higher cost
since disk drives cost money.
• A RAID Group is limited to a maximum of 16 drives and is composed of drives
of the same type.
• There is no mixing of drive types in a RAID Group.
− The hot spare that was swapped in at the time of failure remains in the
storage pool even after the failed drive is replaced.
• Spare drives cannot be used to mitigate flash drive wear through
overprovisioning or improve pool performance.
Example single-tier pool with two drive partnership groups: DPG 1 uses RAID 5 (4+1) with 64 total drives; DPG 2 uses RAID 5 (4+1) with 6 total drives.
There may be one or more drive partnership groups per dynamic pool.
• Every dynamic pool contains at least one drive partnership group.
• Each drive is a member of only one drive partnership group.
• Drive partnership groups are built when a dynamic pool is created or expanded.
RAID Type   Drive Count   RAID Width
RAID 5      6 to 9        4+1
RAID 5      10 to 13      8+1
RAID 5      14 or more    12+1
RAID 6      7 or 8        4+2
RAID 6      9 or 10       6+2
RAID 6      11 or 12      8+2
RAID 6      13 to 14      10+2
RAID 6      15 or 16      12+2
RAID 6      17 or more    14+2
RAID 1/0    3 or 4        1+1
RAID 1/0    5 or 6        2+2
RAID 1/0    7 or 8        3+3
RAID 1/0    9 or more     4+4
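As a worked example of the table, the small function below returns the RAID width a dynamic pool would select for a given drive count. It is an illustration of the table only, not system code.

# Illustration: pick the RAID width for a given RAID type and drive count,
# following the table above.
def raid_width(raid_type: str, drives: int) -> str:
    table = {
        "RAID 5":   [(6, "4+1"), (10, "8+1"), (14, "12+1")],
        "RAID 6":   [(7, "4+2"), (9, "6+2"), (11, "8+2"),
                     (13, "10+2"), (15, "12+2"), (17, "14+2")],
        "RAID 1/0": [(3, "1+1"), (5, "2+2"), (7, "3+3"), (9, "4+4")],
    }
    width = None
    for minimum, value in table[raid_type]:
        if drives >= minimum:
            width = value
    if width is None:
        raise ValueError("not enough drives for this RAID type")
    return width

print(raid_width("RAID 5", 12))    # prints 8+1
print(raid_width("RAID 6", 16))    # prints 12+2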
The storage system automatically generates IQN (iSCSI Qualified Name) and the
IQN alias. The IQN alias is the alias name that is associated with IQN. Both the
IQN and the IQN alias are associated with the port, and not the iSCSI interface.
OE Version   Feature          Pool Type           Supported Models
4.3 / 4.4    Data Reduction   All Flash Pool1     300 | 400 | 500 | 600
                                                  300F | 400F | 500F | 600F
                                                  350F | 450F | 550F | 650F
5.0 / 5.1    Data Reduction   All Flash Pool1     300 | 400 | 500 | 600
                                                  300F | 400F | 500F | 600F
                                                  350F | 450F | 550F | 650F
                                                  380 | 480 | 680 | 880
                                                  380F | 480F | 680F | 880F
3The pool must contain a flash tier, and the total usable capacity of the flash tier
must meet or exceed 10% of the total pool capacity.
In the example, the Flash Percent (%) of Pool 1 does not comply with the data
reduction requirements.
Data Reduction is disabled and unavailable for any storage resource that is created
from the pool. The feature is grayed out, and a message explains the situation.
The example shows that data reduction is not available when creating a LUN on
Pool 1.
Create LUN wizard with the Data Reduction option unavailable for configuration
Glossary
Drive Extent
A drive extent is a portion of a drive in a dynamic pool. Drive extents are either
used as a single position of a RAID extent or can be used as spare space. The size
of a drive extent is consistent across drive technologies – drive types.
PACO
The Proactive Copy feature or PACO enables disks to proactively copy their data to the hot spare. The operation is triggered by the number of existing media errors on the disk. PACO reduces the possibility of two bad disks by identifying whether a disk is about to go bad and proactively copying the disk.
RAID Extent
A collection of drive extents. The selected RAID type and the set RAID width
determine the number of drive extents within a RAID extent.
Each RAID extent contains a single drive extent from each of a number of drives equal to the RAID width.
RAID extents can only be part of a single RAID Group and can never span across
drive partnership groups.
Spare Space
Spare space refers to drive extents in a drive extent pool not associated with a
RAID Group. Spare space is used to rebuild a failed drive in the drive extent pool.
vVol
Virtual volumes (vVols) are VMware objects that correspond to a Virtual Machine (VM) disk and its snapshots and clones. Virtual machine configuration, swap files, and memory are also stored in virtual volumes.