Unity Deep Dive - Lecture

The document outlines a lab-intensive course designed for storage professionals to gain practical experience with the Unity storage system, covering installation, management, and support in diverse environments. It includes information on the Unity platform, its features, and the UnityVSA software-defined storage solution, along with details about the Unity Proven Professional exam. The course emphasizes hands-on lab exercises and provides resources for studying and preparation for the exam.

Welcome to Unity Deep Dive.

Copyright © 2017 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC, and other trademarks are trademarks
of Dell Inc. or its subsidiaries. Other trademarks may be the property of their respective owners. Published in the
USA.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” DELL EMC MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO
THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR
PURPOSE.

Use, copying, and distribution of any DELL EMC software described in this publication requires an applicable software license. The trademarks, logos, and service marks
(collectively "Trademarks") appearing in this publication are the property of DELL EMC Corporation and other parties. Nothing contained in this publication should be construed
as granting any license or right to use any Trademark without the prior written permission of the party that owns the Trademark.

AccessAnywhere, Access Logix, AdvantEdge, AlphaStor, AppSync, ApplicationXtender, ArchiveXtender, Atmos, Authentica, Authentic Problems, Automated Resource Manager, AutoStart, AutoSwap, AVALONidm, Avamar, Aveksa, Bus-Tech, Captiva, Catalog Solution, C-Clip, Celerra, Celerra Replicator, Centera, CenterStage, CentraStar, EMC CertTracker, CIO Connect, ClaimPack, ClaimsEditor, Claralert, CLARiiON, ClientPak, CloudArray, Codebook Correlation Technology, Common Information Model, Compuset, Compute Anywhere, Configuration Intelligence, Configuresoft, Connectrix, Constellation Computing, CoprHD, EMC ControlCenter, CopyCross, CopyPoint, CX, DataBridge, Data Protection Suite, Data Protection Advisor, DBClassify, DD Boost, Dantz, DatabaseXtender, Data Domain, Direct Matrix Architecture, DiskXtender, DiskXtender 2000, DLS ECO, Document Sciences, Documentum, DR Anywhere, DSSD, ECS, eInput, E-Lab, Elastic Cloud Storage, EmailXaminer, EmailXtender, EMC Centera, EMC ControlCenter, EMC LifeLine, EMCTV, Enginuity, EPFM, eRoom, Event Explorer, FAST, FarPoint, FirstPass, FLARE, FormWare, Geosynchrony, Global File Virtualization, Graphic Visualization, Greenplum, HighRoad, HomeBase, Illuminator, InfoArchive, InfoMover, Infoscape, Infra, InputAccel, InputAccel Express, Invista, Ionix, Isilon, ISIS, Kazeon, EMC LifeLine, Mainframe Appliance for Storage, Mainframe Data Library, Max Retriever, MCx, MediaStor, Metro, MetroPoint, MirrorView, Mozy, Multi-Band Deduplication, Navisphere, Netstorage, NetWitness, NetWorker, EMC OnCourse, OnRack, OpenScale, Petrocloud, PixTools, Powerlink, PowerPath, PowerSnap, ProSphere, ProtectEverywhere, ProtectPoint, EMC Proven, EMC Proven Professional, QuickScan, RAPIDPath, EMC RecoverPoint, Rainfinity, RepliCare, RepliStor, ResourcePak, Retrospect, RSA, the RSA logo, SafeLine, SAN Advisor, SAN Copy, SAN Manager, ScaleIO, Smarts, Silver Trail, EMC Snap, SnapImage, SnapSure, SnapView, SourceOne, SRDF, EMC Storage Administrator, StorageScope, SupportMate, SymmAPI, SymmEnabler, Symmetrix, Symmetrix DMX, Symmetrix VMAX, TimeFinder, TwinStrata, UltraFlex, UltraPoint, UltraScale, Unisphere, Universal Data Consistency, Vblock, VCE, Velocity, Viewlets, ViPR, Virtual Matrix, Virtual Matrix Architecture, Virtual Provisioning, Virtualize Everything, Compromise Nothing, Virtuent, VMAX, VMAXe, VNX, VNXe, Voyence, VPLEX, VSAM-Assist, VSAM I/O PLUS, VSET, VSPEX, Watch4net, WebXtender, xPression, xPresso, Xtrem, XtremCache, XtremSF, XtremSW, XtremIO, YottaYotta, Zero-Friction Enterprise Storage.

Revision Date: <Published DATE /MM/YYYY>

Revision Number: MR-1LP-UNIDD. 4111.9138882.1

This lab intensive course provides storage professionals with the practical experience needed to install,
deploy, manage and support a Unity storage system in heterogeneous host environments. The course
covers the knowledge necessary to understand the features, functionality and key use cases of a Unity
storage system. The topics include an overview of the Unity platform, hardware components, UnityVSA,
installation and configuration, storage provisioning, data protection and mobility, and maintenance
activities.

This course provides a “hands-on lab” focus and supports a Unity Proven Professional exam. The Unity Solutions Specialist E20-393 exam for Implementation Engineers is a 60-question exam with a 90-minute time limit. A practice test is available to check your readiness.

To preserve the “hands-on” lab exercise focus, the course lecture sections have been shortened. The downloadable Student Resource Guide PDF file provides a complete set of information to support this exam and is designed to be your study guide for it. The Lecture Slides PDF file contains only the lecture slides and is not intended to fully support the exam. Please be sure to reference the Student Resource Guide when studying for the exam.

This module focuses on the EMC Unity platform. It includes the EMC Unity models, Unity storage resources, features and functions, and management options and capabilities, and provides an overview of Unity hardware components. The module also introduces the UnityVSA software-defined storage solution and discusses Unity integration with virtualized environments.

This lesson covers the Unity solutions offered by EMC, discusses their unified architecture and software packaging, and identifies the key use cases supported by the Unity platform. The lesson also introduces the UnityVSA software-defined storage solution and its product offerings.

The Unity platform is a unified Block and File offering in a single product that can be managed with one
easy to use GUI. Unity is a storage product designed for a mid-range environment that includes small to
mid-range customers. The Unity storage platforms support the NAS protocols (SMB/CIFS for Windows
and NFS for UNIX/Linux), as well as native block protocols (iSCSI and Fibre Channel).

Unity is optimized for core IT applications. These include transactional workloads such as Oracle, SAP,
SQL, Exchange or SharePoint, server virtualization and end user computing such as VDI, and all other
applications that need traditional file, block or unified storage. All models are available as an AF, All Flash
option.

Unity is also a good fit for partner-led configurations optimized for virtual applications with VMware and Hyper-V integration. The Unity platform's multi-core optimized architecture unleashes the power of Flash, taking full advantage of the latest Intel multi-core technology.

The Unity family of storage arrays comes in two types, an All Flash version as shown on the left and a Hybrid version on the right. To identify the different models, the front bezels have a slightly different appearance, with the All Flash bezels being white and the Hybrid bezels blue and white.

To differentiate the two types, the naming conventions are slightly different as well.
• An All Flash model is designated by the addition of an “F” after the model number, for example, Unity 400F.
• Hybrid models simply use the model number, such as Unity 600.

All Unity models directly integrate with the current VNX portfolio of products.

Unity is also available as a virtualized product UnityVSA.

Unity provides storage resources suited to the needs of specific applications, host operating systems, and user requirements. These storage resources are categorized as Block storage (LUNs or Consistency Groups), File storage (NAS servers, file systems, and shares), and VMware datastores (VMFS, NFS, or VVol datastores).

LUNs and Consistency Groups provide generic block-level storage to hosts and applications that use the Fibre Channel (FC) or iSCSI protocol to access storage in the form of virtual disks. A LUN (Logical Unit) is a single element of storage, while a Consistency Group is a container with one or more LUNs.

File Systems and shares provide network access to NAS clients in Windows and Linux/UNIX
environments. Windows environments use the SMB/CIFS protocol for file sharing, Microsoft Active
Directory for authentication, and Windows directory access for folder permissions. Linux/UNIX
environments use the NFS protocol for file sharing and POSIX access control lists for folder permissions.

VMware datastores provide storage for VMware virtual machines through datastores that are accessible through the FC or iSCSI protocols (VMFS) or through the NFS protocol.

VMware datastores are also supported as VVol (Block) and VVol (File) datastores. These storage containers store virtual volumes (VVols), which are VMware objects that correspond to a Virtual Machine (VM) disk and its snapshots and clones. VVol (File) datastores use NAS protocol endpoints and VVol (Block) datastores use SCSI protocol endpoints for I/O communication from the host to the storage system. The protocol endpoints provide access points for ESXi host communication with the storage system.

Unity software is made up of the Unity Essentials Software Package, which includes different licenses for Hybrid and All Flash arrays.

The Software Package provides management software including Unisphere Element Manager; Unisphere
Central, a consolidated dashboard and alerting software; thin provisioning; Proactive Assist to configure
remote support, online chat and open a service request; Quality of Service for block storage; and EMC
Storage Analytics Adapter for VMware vRealize. Unified protocols include file, block and VVols. Local
data protection is provided by local point-in-time copies, anti-virus software, and an optional controller-
based encryption which is provided by a separate license.

It also provides remote data protection with Native Asynchronous Block and File Replication, and Native
Synchronous Block Replication, and performance optimization with FAST Cache and FAST VP.

Unity Virtual Storage Appliance (UnityVSA) is a software-defined storage platform that allows easy deployment of the advanced unified storage and data management features of the Unity family in VMware environments. UnityVSA runs on general-purpose hardware on a VMware ESXi hypervisor, as a single-SP configuration only.

UnityVSA is a unified array, providing Block (iSCSI), File (NFS and SMB/CIFS), and VVol storage in one integrated platform. Easy configuration and management of the storage array is possible using the same HTML5 Unisphere interface as Unity purpose-built storage arrays. A consistent feature set and data services such as Unified Snapshots and Replication are available with UnityVSA.

Benefits of this approach include a low-acquisition-cost option for hardware consolidation, multi-tenant storage instances, remote/branch office storage environments, and environments that are easier to build, maintain, and destroy for staging and testing. UnityVSA can coexist with and provide storage to applications running on the same server hardware, enabling customers to implement an affordable software-defined solution. Multiple VSA instances can be deployed on a single server.

It is available as a free 4 TB capacity Community Edition and as a Professional Edition (10 TB, 25 TB, and 50 TB) subscription product offering with EMC support. The Community Edition is for test/dev environments and the Professional Edition is for production environments.

For users who initially purchase a 10TB or 25TB subscription and require additional capacity, the following
capacity upgrades are supported:

• 10TB to 25TB upgrade

• 10TB to 50TB upgrade

• 25TB to 50TB upgrade

Capacity upgrades and license renewals can be installed non-disruptively. When a capacity upgrade is
installed, the limits on the system also scale accordingly.

The pressure in Exchange environments to eliminate backup windows and reduce recovery times is stronger than ever. With dynamic, policy-based protection management, these environments can easily meet demanding recovery objectives using EMC Unity advanced technologies such as snapshots and continuous protection. This also helps use storage resources efficiently and optimize protection for Microsoft Exchange. Unity storage array performance remains optimal under heavy Exchange workloads.

Traditionally, best practices for optimizing storage performance involved manual, resource-intensive processes. Unity allows SQL administrators to leverage an easy-to-use and potentially hands-off mechanism for optimizing the performance of the most demanding applications. Automating the
movement of data between storage tiers saves both time and resources. Unity eliminates the need to
spend hours manually monitoring and analyzing data to determine a storage strategy, then maintaining,
relocating and migrating LUNs (Unity logical volumes) to the appropriate storage tiers.

Online Transaction Processing (OLTP) database applications tend to be mission-critical and usually have
stringent I/O latency requirements. Traditionally, these OLTP databases are deployed on a huge number
of rotating Fibre Channel (FC) spindles to meet the low I/O latency requirement. Consequently, the
effective capacity utilization of these spindles is very low. Unity reduces the need to buy more drives to
keep up with database growth. Also, Unity automatically and non-disruptively migrates hot and cold data
between the available storage tiers, thereby improving the effective storage utilization.

The common business requirement in SAP environments is reducing TCO while improving performance and service-level delivery. Frequently, the responsiveness of sensitive SAP applications deteriorates over time due to increased data volumes, unbalanced data stores, and changing business requirements.
By using Unity with block data, SAP deployments can gain a significant performance boost without the
need to redesign the applications, adjust the data layouts, or reload significant amounts of data. With
automated sub-LUN level tiering and extended cache, Administrators can properly balance data
distribution across the tiers that allow capacity and performance optimization.

The Unity platform is optimized for virtualization, and thus supports all leading Hypervisors, simplifying
desktop creation and storage configuration. Unity leverages advanced technologies to optimize
performance for the virtual desktop environment, helping support service level agreements.

Virtualization management integration allows the VMware administrator or the Microsoft Hyper-V
administrator to extend their familiar management console for Unity related activities.

VMware vStorage APIs for Array Integration (VAAI) support for both SAN and NAS connections allows Unity to be fully optimized for virtualized environments. EMC Virtual Storage Integrator (VSI) is targeted at the VMware administrator. VSI supports Unity provisioning within vCenter, provides full visibility into physical storage, and increases management efficiency.

In the Microsoft Windows Server 2012 and Hyper-V 3.0 space, Offloaded Data Transfer (ODX) allows Unity to be fully optimized for Windows virtual environments. This technology offloads storage-related functions from the server to the storage system.

EMC Storage Integrator (ESI) for Windows provides the ability to provision block and file storage for
Microsoft Windows or Microsoft SharePoint sites.

This lesson covers the scalability, performance and efficiency features available in Unity. The lesson also
describes the Unity data security, protection and mobility features.

Topics include NAS servers, the UFS64 file system, FAST Cache, Data at Rest encryption, antivirus
protection, 3-way NDMP backup, the native local and remote protection solutions, the native File Import
and SANCopy features, and Unity support for file archiving to the cloud with the integration of CTA and
Virtustream.

Scalability and performance features available with Unity include the virtual file servers known as NAS servers, the use of the UFS64 enterprise file system architecture for File data, and FAST Cache.

Unity has a single-enclosure, two storage processor architecture with no concept of designated hardware
for File services. File data is served through the use of virtual file servers known as NAS servers.

A NAS server is required prior to creating file systems. NAS servers are used for NAS protocols only. With Unity, storage pools are shared by all resource types, meaning that file systems, LUNs, and VMware Virtual Volumes (VVols) can be provisioned out of the same unified pools without the need for a second-level “file pool.”

Each NAS server is a separate file server. Users on one NAS server cannot access data on another NAS
server. Each NAS server has a separate configuration with independent network interfaces, sharing
protocols, directory services, NDMP backup and Security.

UFS64 is a 64-bit file system architecture used in Unity systems, with a 64 TB maximum file system size as of Unity OE v4.1. All NAS servers and file systems in Unity use only UFS64. It also provides better performance through faster failovers, file system shrink and expand, space-efficient snapshots, and simpler quotas.

FAST Cache is a large-capacity secondary cache that uses SAS Flash 2 drives to improve system performance by extending the storage system's existing caching capacity. FAST Cache can scale to a larger capacity than the maximum DRAM cache.

FAST Cache reduces the load on back-end hard drives by identifying when a chunk of data on a LUN is accessed frequently and copying it temporarily to FAST Cache. The storage system then services any subsequent requests for this data faster from the Flash disks that make up the FAST Cache.

Data is copied into the FAST Cache at a granularity of 64 KB chunks.

Unity offers compression functionality for storage space efficiency, using inline compression (ILC) technology to reduce the amount of physical storage required to store datasets. The technology supports compression on thin LUNs and VMFS datastores only. Compression-enabled LUNs must be created from All Flash pools (AFP).

Compression is supported on physical hardware only. For Hybrid arrays, the functionality is supported only on All Flash pools, with no additional licenses required.

Unity software also includes Fully Automated Storage Tiering for Virtual Pools (FAST VP) as a storage efficiency feature.

FAST VP enables the system to retain the most frequently accessed or important data on fast, high-
performance disks and move the less frequently accessed and less important data to lower-performance,
cost-effective disks.

FAST VP tracks data in a Pool at a granularity of 256 MB – a slice – and ranks slices according to their
level of activity and how recently that activity took place.
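The actual FAST VP relocation algorithm is proprietary; as a rough conceptual sketch of "rank 256 MB slices by activity and recency," one might score each slice with a decayed activity counter. All names, the half-life, and the scoring formula below are illustrative assumptions, not Dell EMC's implementation:

```python
from dataclasses import dataclass

SLICE_SIZE_MB = 256  # FAST VP tracks pool data in 256 MB slices

@dataclass
class Slice:
    slice_id: int
    io_count: int = 0         # accesses observed since the last ranking pass
    last_access: float = 0.0  # timestamp (seconds) of the most recent access
    temperature: float = 0.0  # decayed activity score used for ranking

def update_temperature(s: Slice, now: float, half_life_s: float = 3600.0) -> None:
    """Fold recent activity into the score while decaying older activity,
    so the ranking reflects both I/O volume and recency."""
    age = max(0.0, now - s.last_access)
    decay = 0.5 ** (age / half_life_s)
    s.temperature = s.temperature * decay + s.io_count
    s.io_count = 0

def rank_slices(slices, now):
    """Hottest slices first: these are the relocation candidates
    for the highest-performing tier."""
    for s in slices:
        update_temperature(s, now)
    return sorted(slices, key=lambda s: s.temperature, reverse=True)
```

In this toy model a busy slice outranks an idle one, and a slice that was hot an hour ago but has gone quiet gradually cools until it becomes a candidate for a lower tier.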

Unity supports file system quotas, which enable a storage administrator to track and/or limit usage of a file system.

Quota limits can be designated for users or for a directory tree. Limits are stored in quota records for each user and quota tree. Limits are also stored for users within a quota tree.

Unity Quality of Service (QoS) is a feature that limits I/O to Block storage resources: LUNs, snapshots, and VMFS datastores. QoS (Host I/O) limits can be set on physical or virtual deployments of Unity systems.

Limits can be set by throughput in I/Os per second, by bandwidth in kilobytes or megabytes per second, or by a combination of both. If both thresholds are set, the system limits traffic according to the threshold that is reached first.
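The "threshold reached first" rule can be sketched as a simple check. This is a conceptual illustration only (the array enforces its Host I/O limits internally); the function name and signature are invented for the example:

```python
from typing import Optional

def qos_throttled(iops: float, kbps: float,
                  max_iops: Optional[float] = None,
                  max_kbps: Optional[float] = None) -> bool:
    """Return True if current traffic exceeds a configured Host I/O limit.

    Mirrors the rule above: if both an IOPS limit and a bandwidth limit
    are set, the limit reached first wins -- traffic is throttled as soon
    as EITHER threshold is crossed. An unset limit is never exceeded.
    """
    over_iops = max_iops is not None and iops > max_iops
    over_bw = max_kbps is not None and kbps > max_kbps
    return over_iops or over_bw
```

For example, with limits of 4,000 IOPS and 8,000 KB/s, a workload running at 5,000 IOPS is throttled even though its bandwidth is well under its cap.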

Unity offers controller-based Data At Rest Encryption (D@RE) as a security feature. D@RE primarily protects information stored on disks in the event of theft, ensuring that data is encrypted with strong encryption algorithms.

D@RE uses hardware embedded in the SAS IO controller chip in all SAS IO modules and embedded in
the Storage Processor instead of hardware embedded in the disk drive as with self-encrypting drives
(SEDs). D@RE supports all drive types. The IO modules encrypt data as it’s sent to the disks.

Since the encryption/decryption functions occur in the SAS controller, it has minimal impact on data
services such as replication, snapshots, etc.

The EMC Common AntiVirus Agent provides an antivirus solution to clients using a Unity system. It uses the industry-standard CIFS protocol in a Microsoft Windows Server domain and also supports Windows clients. The antivirus agent uses third-party antivirus software to identify and eliminate known viruses before they infect files on the storage system.

The Network Data Management Protocol or NDMP support, enables backup of Unity file data to a tape
library or virtual tape appliance such as Data Domain or Avamar.

Unity supports remote NDMP, also known as 3-way NDMP. Direct attach (2-way) is not supported.
Maximum concurrent streams are: two for UnityVSA, eight for Unity 300-500, and twenty for Unity 600.

The introduction of UFS64 will require a new tape format. The format is named Format N. The previous
generation format for UFS32 is named Format N-1.

The backup module will format the data on tape in different ways based on the type of file system on
which the backup is performed. When backing up data on a UFS64 file system, the data will be written to
the tape in Format N. When backing up data on a UFS32 file system, the data will be written to the tape in
Format N-1.

Unity offers native data protection features, such as Unified Snapshots and Replication.

Unified Snapshots provide point-in-time copies of data. Snapshots are provided for both Block and File
resources. The snap images can be read-only or read/write and can be used for local data protection to
restore the production data to a known point-in-time.

Auto-delete and expiration can be configured so that snapshots are automatically deleted at a specified
time or based on user defined storage consumption thresholds.
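The auto-delete policy described above (an absolute expiration time plus a pool-consumption threshold) can be sketched conceptually as follows. The class, function, and default threshold are hypothetical names for illustration, not the system's internal logic:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Snapshot:
    name: str
    created_at: float                   # creation time, epoch seconds
    expires_at: Optional[float] = None  # absolute expiration time, if set

def snapshots_to_delete(snaps, now, pool_used_pct, high_water_pct=85.0):
    """Pick snapshots eligible for auto-delete: any snapshot past its
    expiration time, plus -- when pool consumption crosses the
    user-defined threshold -- the oldest remaining snapshot, so that
    space is reclaimed starting with the oldest data."""
    doomed = [s for s in snaps if s.expires_at is not None and now >= s.expires_at]
    if pool_used_pct >= high_water_pct:
        remaining = sorted((s for s in snaps if s not in doomed),
                           key=lambda s: s.created_at)
        if remaining:
            doomed.append(remaining[0])
    return doomed
```

The sketch deletes a single oldest snapshot per pass when the pool is over threshold; a real policy would iterate until consumption drops back below a low-water mark.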

Replication is a licensed feature for Unity that enables replication between Unity systems for storage
resources. Replication connections can be asynchronous, synchronous, or both.

Replication is a method that enables data centers to avoid disruptions in operations. In a disaster recovery
scenario, if the source site becomes unavailable, the replicated data will still be available for access from
the remote site.

Replication uses a Recovery Point Objective (RPO), an amount of data measured in units of time, to schedule automatic data synchronization between the source and remote systems. The remote data will be consistent up to the configured RPO value.

Replication is also beneficial for keeping data available during planned downtime scenarios. If a
production site has to be brought down for maintenance or testing, the replica data can be made available
for access from the remote site.

Other Data Protection solutions supported by Unity are RecoverPoint, RecoverPoint for VMs, and
AppSync.

RecoverPoint provides Block replication functionality across all RecoverPoint supported platforms and can
be used for VNX1/VNX2 migration or replication to Unity. RecoverPoint for Virtual Machines provides VM-
granular protection of Virtual Machines and associated data.

AppSync is policy-driven, self-service software for managing copies of various applications and databases running on various Dell EMC arrays. Unity uses AppSync to create consistent snapshots.

Unity also includes some data mobility features to enable migration to or from a Unity array.

File Import is a Unity native file migration feature that supports the migration of NFS configured VDMs and
their File Systems from legacy VNX systems. The use case of the feature is for replacing VNX1 and VNX2
systems with Unity. The feature migrates a VNX NFSv3 VDM to a Unity NAS Server.

File Import is supported on all physical Unity models and UnityVSA. The operation is transparent to host
I/O with little or no disruption to client access of data.

SANCopy import is a Unity native block migration feature. It migrates block storage resources (LUNs and
Consistency Groups) from a VNX1/VNX2 system to a Unity system. The use case of the feature is also for
replacing the VNX legacy systems with Unity.

SANCopy must be enabled on the system. The SANCopy enabler is contained in the VNX Installation
Toolbox on the EMC Support site if needed.

The import of block data is configured and controlled from the Unity system using the SANCopy engine
running on the VNX. Then the data is migrated to Unity using a SANCopy push from the VNX.

The Native SANCopy Import feature is managed from Unisphere, UEMCLI commands and REST API
calls.

Unity OE supports the Distributed Hierarchical Storage Management (DHSM) functionality using the
Cloud Tiering Appliance (CTA) release 11 as a policy manager and Virtustream cloud storage as a
destination.

Unity storage arrays are used as source file servers, allowing the tiering of file data to cloud storage based on policies. The only supported tiering destinations are Virtustream, Microsoft Azure, and Amazon S3.

This lesson covers the different user interfaces used for administration and management of Unity storage
systems, and the Unity support for VMware and Microsoft virtualized environments.

Topics include the administrative and centralized management interfaces, the integration with VMware
tools, VASA/VVols support, and integration with Windows environments with ESI and SCVMM.

Administration and management of Unity systems is performed with the Unisphere graphical user interface (GUI) and the command line interface (CLI). Unity systems can also be managed via the Unisphere Management REST API.

EMC Unisphere for Unity provides a flexible, integrated experience for managing Unity storage systems.
Unisphere for Unity supports a wide range of browsers including Google Chrome, Internet Explorer,
Mozilla Firefox, and Apple Safari. It is HTML5 based so it does not require browser plugins.

The Unisphere wizards help the user to provision and manage the storage while automatically
implementing best practices for the configuration.

Unisphere contains a complete support ecosystem, the highlight of which is Proactive Assist with call home and cloud-based management. Proactive Assist is a self-service portal with a robust online set of community activities (live chat, videos, documentation, and more), direct parts ordering, system views, and dial-home assistance.

In Unisphere, the user can also analyze system performance by viewing and interacting with line charts
that display historical and real-time performance metrics.

UEMCLI is a management tool that provides a way for users to manage a system from a command prompt on a Microsoft Windows or UNIX/Linux platform. UEMCLI is intended for advanced users who want to use commands in scripts for automating routine tasks, such as provisioning storage or scheduling snapshots to protect stored data.
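Such a script might drive UEMCLI through Python's subprocess module. The general invocation shape is `uemcli -d <array> -u <user> -p <password> <object path> <action>`, but the object path shown below is an assumption for illustration; consult the Unisphere CLI User Guide for the exact paths your OE release supports:

```python
import subprocess
from typing import List

def build_uemcli_cmd(array_ip: str, user: str, password: str,
                     object_path: str, action: str = "show") -> List[str]:
    """Assemble a UEMCLI invocation as an argument list.
    The object path (e.g. a LUN listing) is illustrative only."""
    return ["uemcli", "-d", array_ip, "-u", user, "-p", password,
            object_path, action]

def run_uemcli(cmd: List[str]) -> str:
    """Run the command and return its stdout. A real script would also
    check the return code and parse the tabular output."""
    return subprocess.run(cmd, capture_output=True, text=True).stdout

# Example (hypothetical path and credentials): list provisioned LUNs
example_cmd = build_uemcli_cmd("192.0.2.50", "admin", "Password123!",
                               "/stor/prov/luns/lun", "show")
```

Separating command construction from execution keeps the builder easy to unit-test and lets the same script target multiple arrays.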

Unity also allows programs to easily integrate with the storage system using REST API. Representational
State Transfer (REST) is a popular web Application Programming Interface (API) design model that uses
simple HTTP calls to create, read, update, and delete information on a server.

The REST API allows interaction with Unisphere management functionality, including system settings,
host and remote system connections, network settings, storage management, and data protection.
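A minimal collection query can be sketched with the standard library. The `/api/types/<resource>/instances` pattern and the `X-EMC-REST-CLIENT` header follow the documented convention, but endpoint paths, field names, the response shape, and certificate handling should all be verified against the Unity REST API reference for your release; the IP and credentials are placeholders:

```python
import base64
import json
import ssl
import urllib.request

def unity_get_request(mgmt_ip: str, resource: str, fields: str,
                      user: str, password: str) -> urllib.request.Request:
    """Build a GET for the Unity collection-query pattern:
    https://<mgmt_ip>/api/types/<resource>/instances?fields=...
    The X-EMC-REST-CLIENT header marks the session as an API client."""
    url = f"https://{mgmt_ip}/api/types/{resource}/instances?fields={fields}"
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(url, headers={
        "X-EMC-REST-CLIENT": "true",
        "Accept": "application/json",
        "Authorization": f"Basic {token}",
    })

def list_luns(mgmt_ip: str, user: str, password: str):
    """Fetch LUN id/name/size; 'entries' is the documented wrapper for
    collection responses (verify against your OE release)."""
    req = unity_get_request(mgmt_ip, "lun", "id,name,sizeTotal",
                            user, password)
    ctx = ssl._create_unverified_context()  # lab only: self-signed certs
    with urllib.request.urlopen(req, context=ctx) as resp:
        return json.load(resp)["entries"]
```

Write operations additionally require an EMC-CSRF-TOKEN header obtained from a prior GET response, so a real client would keep a session and capture that token.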

Unity systems can be monitored through the Unisphere Central and CloudIQ interfaces.

The Unisphere Central management tool can be used for federated management of multiple Unity
systems, purpose built or VSA.

Unisphere Central is a network application that remotely monitors the status, activity, and resources of
multiple supported EMC storage systems from a central location. The application allows administrators to
take a look at their storage environment from a single interface and rapidly access the systems that need
attention or maintenance.

CloudIQ is a software-as-a-service cloud management dashboard that provides intelligent performance, CPU, and configuration analytics for health-based reporting and resolution.

CloudIQ is a proactive management system that contains all of the capabilities needed to resolve
problems fast. Once a user is signed up to the CloudIQ ecosystem, they can:

• Let EMC support troubleshoot and resolve issues remotely.

• Setup email alerts when something looks wrong.

• Log into the self service portal to troubleshoot the system.

• Live chat with an EMC support rep and ask questions of other CloudIQ community members.

• Order replacement parts directly from the system.

CloudIQ requires that Unity be configured with ESRS and a valid support account in order to work.

Unity is optimized for virtualized environments, not only in its storage capabilities, but also in its close integration with VMware.

Unity supports the creation of storage containers for VMware Virtual Volumes (VVols). VVols are storage objects provisioned
automatically by the system to store virtual machine data. VVols are stored in storage containers, which are created by the storage
administrator. Storage containers have a 1:1 mapping with VVol datastores.

VVols are based on the vStorage API for Storage Awareness (VASA) 2.0 protocol and require vSphere 6.0 in order to run.

Unity has EMC tools to enhance its integration with VMware, plus it works closely with existing VMware features.

Storage Analytics for Unity provides automated operations management using patented analytics and an integrated approach to
performance, capacity and configuration management. SRA is a storage replication adapter that extends the disaster-restart
management functionality of VMware vCenter Site Recovery Manager (SRM) to the Unity storage environment.

Key VMware features with which Unity seamlessly integrates are VMware vSphere Storage APIs Array Integration for SAN, VMware
vSphere Storage APIs Array Integration for NAS, Virtual Storage Integrator, and VMware vCenter Site Recovery Manager.

The Unity array is designed to integrate with the Microsoft Windows Server and its System Center Virtual Machine Manager
(SCVMM), and it provides APIs to support the storage health monitoring feature. Microsoft Windows Server 2016 and SCVMM use
the SMI-S API to manage external storage.

Health monitoring requires storage vendors to deliver lifecycle indication of alerts on specific storage objects, including: Array,
LUN/Disk, LUN/Pool Capacity, File System, File Share, File System Capacity, Fan/Power supply, LUN/LUN group replication as
defined in the SMI-S* Indication and Health Profiles.

Our platforms also offer great free tools for Microsoft-centric management, such as the EMC Storage Integrator for Windows (ESI).
In a typical environment, a storage admin creates a LUN, a Database admin creates a database and then a SharePoint admin
creates the Web application on the database.

If all the admins execute their respective tasks in a sequential order it would take about 90 minutes. In the real world, this takes a
lot longer as you are waiting for one or more admins to get to this work order.

With ESI, however, all that is needed is a storage pool, and the same task can be accomplished in about 20 minutes. ESI also includes
System Center integrations such as System Center Operations Manager (SCOM) and SCVMM.

* SMI-S (Storage Management Initiative Specification) is a standard developed by the Storage Network Industry Association (SNIA)
that is intended to facilitate the management of storage devices from multiple vendors in storage area networks (SANs).

This lesson covers the Unity storage system hardware components. Topics include the hardware
components in a Unity Disk Processor Enclosure (DPE), and a Unity Storage Processor Enclosure (SPE),
the I/O expansion modules available for a Unity storage system, the types of Disk Array Enclosures
(DAEs) and supported disk drives available for a Unity storage system.

The hardware of a Unity system is shown here. The base unit of a Unity system is the Disk Processor
Enclosure [DPE] and its front and rear views are also shown. The DPE houses disk drives, Storage
Processors, and power supplies. The disk drives are accessed from the front and, as seen from the front
view, it comes in two different disk drive layout formats. One format holds up to twelve 3.5 inch disk drives.
The other format holds up to twenty five 2.5 inch disk drives. The rear view of the DPE is the same for
both DPE formats. System power and connectivity cabling are accessed from the rear.

For adding storage capacity, Unity uses another hardware unit called a Disk Array Enclosure [DAE].
Shown here are the two different DAEs. One DAE has a disk drive layout format that holds up to twenty
five 2.5 inch disk drives. The other DAE format holds up to fifteen 3.5 inch disk drives. The DAE disks are
accessed from the front and the power and connectivity cabling are accessed from the rear.

DPEs and DAEs have redundant power and cooling for operational resilience if a hardware component
fails. The DPEs and DAEs and the components they house have power and status LEDs to quickly
identify any hardware component failures.

The DPE houses two Storage Processor Enclosures [SPEs] designated as A and B. Each enclosure
contains a Storage Processor [SP], a power supply and two expansion slots. The SPE power cabling is
connected to each power supply. Each power supply connects to a single internal power zone, supplying it
with enough primary 12V power to run the entire DPE should the peer power supply fail.

Each Unity Storage Processor includes embedded or onboard devices and cabling ports that facilitate
connectivity to Unity. These embedded devices provide Ethernet connectivity for managing and servicing
the system. Additional Ethernet ports are available to provide data access to file clients and block hosts.
The SP also has embedded Converged Network Adaptors [CNA] that can be configured for either Fibre
Channel or Ethernet connectivity for data access. Embedded Serial Attached SCSI ports are also available
to provide connectivity to additional DAEs. Each SPE also contains two expansion slots for installing
expansion modules to provide additional connectivity options.

The components inside each SPE are viewable when the SPE cover is removed. The SP CPU, DIMMs,
and a non-volatile M.2 SSD are shown here. Redundant component cooling is provided by five dual fan
packs. A Battery Backup Unit is included to provide DPE components power if the main house power fails.
A single Battery Backup Unit is capable of powering both Storage Processors’ CPUs, DIMMs, M.2 SSD,
and the first four DPE disks. The system will protect any data in-flight during house power events by de-
staging the in-flight data to the non-volatile M.2 SSD and then perform an orderly shutdown of the system.

The Unity DPE has two expansion slots in each of its SPEs to support additional connectivity options.
There are several different types of I/O expansion modules that can be installed into the DPE expansion
slots. The I/O modules must be installed in pairs, symmetrically within each SPE in the DPE. For example,
if a specific I/O module type is installed in slot 0 of SPE A, the same specific I/O module type will need to
be installed into slot 0 of SPE B. Additionally, once I/O modules are installed, they cannot be removed or
changed.
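The pairing rule can be expressed as a simple symmetry check; the slot contents below are a hypothetical example for illustration, not a recommended configuration.

```python
# Expansion slots per SPE, indexed 0 and 1; None means the slot is empty.
# Hypothetical example layout for illustration only.
spe_a = {0: "4-port 16Gb FC", 1: None}
spe_b = {0: "4-port 16Gb FC", 1: None}

def symmetric(a, b):
    """True when both SPEs carry the same I/O module type in every slot."""
    return all(a.get(slot) == b.get(slot) for slot in set(a) | set(b))

print(symmetric(spe_a, spe_b))  # identical layouts satisfy the rule

spe_b[1] = "4-port 12Gb SAS"    # populating slot 1 on only one SPE breaks it
print(symmetric(spe_a, spe_b))
```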

The 4-port 12Gb SAS I/O module is available for installation in the Unity 500 and 600 hybrid and All Flash
models to add more SAS buses to the system.

The 4-port 16Gb/s Fibre Channel module is available to support additional Fibre Channel connected hosts
for block storage resources on Unity.

There are four different types of Ethernet I/O modules available to support gigabit and 10 gigabit Ethernet
speeds on copper or optical media. The Ethernet modules support simultaneous iSCSI block access to
data and NAS file access to data.

The components for the two types of DAEs are shown here. Like the DPE, the DAEs have an A-side and a
B-side. The DAE components are the A and B side Link Control Cards [LCC] and the power and cooling
for the A and B sides. The A and B side power supplies connect to a common power mid-plane of the
DAE. Should a DAE power supply fail, the peer side power supply can provide power for the entire DAE;
its disks, LCCs, and cooling fans.

The A and B side Link Control Cards provide SAS ports to connect the disks within the DAE to the A and
B side SAS buses. Should a SAS bus or LCC fail, the DAE disk can be accessed over its peer SAS bus.
The LCCs also have SAS ports to extend SAS connectivity to additional DAEs.

There are a number of disk drive types supported within the Unity DPEs and DAEs. There are also a
variety of rules on the type and number of drives to be used in certain configurations. For example, SAS
Flash 3 drives cannot be used in FAST Cache or with FAST VP.

There are three types of rotating drives available for hybrid Unity models; 10K SAS drives, a 15K SAS
drive, and Near Line SAS drives. These spinning drives are available in a number of different capacities.

There are three different types of solid state drives available for Unity hybrid and All Flash models; SAS
Flash 2, SAS Flash 3 and SAS Flash 4 drives. They are available in a number of different capacities as
shown.

The NL-SAS drives are available only in the 3.5 inch form factor. All other drives are available in the 2.5
inch form factor. A 3.5 inch carrier is available for fitting a 2.5 inch form factor drive into a DAE or DPE that
supports 3.5 inch drives.

This module covered the EMC Unity platform, Unity models, supported storage resources, features and
functions, and the management options and capabilities, and provided an overview of Unity hardware
components. The module also discussed Unity integration with virtualized environments, the UnityVSA
software-defined storage solution.

This module focuses on the procedure to install and initialize a Unity physical storage system for use in a
production environment.

The module will also discuss how to deploy and initialize a Unity Virtual Storage Appliance (UnityVSA).

This lesson covers the steps for racking a Unity Disk Processor Enclosure (DPE) and Disk Array
Enclosures, cabling the components, performing a power up of the components and verifying components
status.

Shown here are the basic steps required to get a Unity physical storage system powered up. Use the
Unity Hardware Installation Guide available on support.emc.com to install the disk processor enclosure
(DPE) into the cabinet using the provided rail kits. Make sure to obtain the help of a second person to
install the DPE into the rack or use a mechanical lift.

Follow the guide to install any optional disk array enclosures (DAEs) and cable them to the DPE. Next,
attach the Storage Processor (SP) management ports to the end user’s network and power the system up.
Let’s take a look at the cabling and power up sequence.

Now that the Unity system is ready to be configured, let’s take a look at the options for configuring the
system network information.

If the end user has an enabled DHCP environment on their network, the Unity storage system will
automatically acquire an IP address when you power it up and no further action is required. You can open
a web browser and point it to the DHCP acquired address to open Unisphere. If the end user requires a
static IP address for the Unity storage system, then you will need to download and install the Unity
Connection Utility or use the InitCLI tool. We will discuss the InitCLI tool a little later.

The Unity Connection Utility software is a program used to discover and configure Unity systems with IP
networking and hostname information. The Connection Utility is available for download at
support.emc.com or the Unity All-Flash & Hybrid Info Hub site. It must be installed on a Windows host.
The installer package includes the 32-bit Java Platform, Standard Edition 1.8 Update 60. Note that you
may need to stop host firewall services to be able to discover storage systems using the Connection
Utility.

Before running the Connection Utility, the Unity Family Configuration Worksheet should be downloaded
and completed. The Connection Utility requires the Product Serial Number Tag (PSNT), a user defined IP
address, subnet mask, and gateway. These can all be found on a properly filled out Configuration
Worksheet. The PSNT can be found in the packing materials that came with the system or on the tag on
the front of the DPE.

Once a management IP address has been assigned to the Unity storage system, the user can open an
Internet browser session and log into Unisphere with the default credentials (admin/Password123#).

Unisphere will launch the Initial Configuration wizard to walk the user through the initial setup of Unisphere
and prepare the system for use. After the initial setup is concluded, the wizard can be launched at any
time from the settings window.

The wizard sequence is displayed here. You will run through the wizard during the labs. As mentioned
earlier, the Unity Family Configuration Worksheet should be filled out prior to using the Initial Configuration
wizard. The user will also need to create an EMC Online Support account for support.emc.com if he or
she hasn’t already done so. This information should be recorded on the Configuration Worksheet.

Another way to assign an IP address to the Unity storage system is to use the InitCLI tool. It is installed on
a Windows host and run from the Windows command prompt. The advantage of the InitCLI tool is that it
does not require the host to run the Java platform. A disadvantage is that the tool does not support
configuring IPv6 or DHCP addresses.

The InitCLI tool is available on support.emc.com. The tool must be run from a Windows host on the same
subnet as the Unity storage system. There are two parameters, initcli discover and initcli configure.

The discover parameter searches the network for available systems to be configured and will list the
information about the system including its serial number (ID or PSNT) as shown in the first example with
the red box. The output qualifier specifies the output format as NVP (Name Value Pair) or CSV (Comma
Separated Value). In this case the CSV format was chosen.

The discovered Unity system can then be configured by using the configure parameter and specifying the
serial number of the system or PSNT, and the IP address, subnet mask, gateway, and a friendly name. In
this case, the PSNT is also used for the friendly name.
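A script consuming the CSV form of the discovery output might pull out the PSNT as shown below; the column names and sample values are assumptions for illustration, not the tool's documented format.

```python
import csv
import io

# Hypothetical sample of CSV-formatted discovery output; the real column
# layout emitted by the InitCLI tool may differ.
sample = """Name,SerialNumber,Address
unconfigured,FNM00151200489,0.0.0.0
"""

rows = list(csv.DictReader(io.StringIO(sample)))
psnt = rows[0]["SerialNumber"]  # serial number to pass to 'configure'
print(psnt)
```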

This lesson will cover the steps for the deployment of a Unity Virtual Storage Appliance (UnityVSA).
Discussions will cover the UnityVSA operations and compare how a UnityVSA system differs from a Unity
physical storage system.

The virtualized Unity models are the replacement for the VNXe1600 systems. The Unity storage software
stack, running on purpose built hardware, is virtualized and encapsulated into an OVA file which can be
deployed on an ESXi host.

Reliable block storage is provided to the UnityVSA via a RAID card associated with the hypervisor and
host hardware (direct attached storage) or via an external RAID protected storage array (EMC arrays or
third-party storage arrays).

RAID protection is provided at the physical level – UnityVSA adds no RAID protection on top of the virtual
disks. Storage is provisioned to the UnityVSA via Fibre Channel/iSCSI (block) or NFS (file).

VMware vSphere datastores are built from file systems (NFS) or LUNs (VMFS) provisioned by the
backend.

vDisks for the UnityVSA are created from the provisioned ESXi datastores.

UnityVSA storage pools can then be provisioned from the vDisks. Unity storage resources (block, file, and
VMware datastores) can be provisioned to hosts using the storage pools. UnityVSA provisions block
storage to hosts using only the iSCSI protocol.

The UnityVSA OVA deployment is straightforward. A VMware ESXi 5.5 or later environment is required.
Standard tools in the vSphere web client are used for the deployment. Right-click the server chosen for
the Unity deployment and select ‘Deploy OVF Template’ (an OVA file is synonymous with an OVF template).

It is a good idea to check that the ESXi server used for the OVA deployment has access to the client
server running Unisphere and to the client servers that will perform I/O operations against the UnityVSA.

From the vSphere Web Client Deploy OVF Template wizard, locate the OVA file on the local machine
and follow the next steps to deploy the virtual appliance.

When deploying the Virtual Unity system OVA file in vSphere Web Client, it is possible to configure the
IPv4 and/or IPv6 address for the management interface from the Customize template page of the
Deploy OVF Template Wizard. If no configuration is done on this page, the virtual system will try to get a
dynamic IP address from a DHCP server when it is powered on. If the network includes a DHCP server
and a DNS server, then an IP address can be assigned automatically to the management interface.

The deployment creates a UnityVSA that is ready to power on when the deployment finishes. The
UnityVSA self-initialization then continues for at least another 30 minutes.

Please note that the UnityVSA MUST be powered on prior to making any edits to the UnityVSA virtual
machine settings.

Changes to the physical configuration of a UnityVSA VM such as adding or removing network interfaces or
modifying the cache size are not supported. The only accepted modification of the physical configuration
of a UnityVSA VM is the addition of virtual disks to store user data.

In this video demonstration a UnityVSA .ova file is deployed in VMware and the UnityVSA is initialized.

To verify that the UnityVSA Initialization is complete, log in to the UnityVSA console as ‘service’ and run
the svc_diag command. The command should return a clean output with no failures as shown in the
example.

The initialization is now complete and you can move on to creating vDisks for the UnityVSA.

If a UnityVSA system is deployed on a network that includes a DHCP server and a DNS server, an IP
address can be assigned automatically to the management interface. However, if you would rather
manually assign a static IP address to the management port, there are four methods that follow.

A static IP address can be assigned to the management interface of the UnityVSA when the OVA package
is deployed using the OVF Template wizard in vSphere. Also, from vSphere client interface it is possible to
run the svc_initial_config command from the Console tab to initialize the management port.

Alternatively, the Unity Connection Utility or the InitCLI applications can be run from a Windows host to
discover and configure the network settings for the management interface.

When the UnityVSA is deployed, three virtual disks (vmdks) are automatically created for the VM’s system
data. These are the virtual disks identified as 1 to 3. These disks should not be modified or deleted.

After the initialization completes, at least one virtual disk must be added for user data prior to performing
the initial configuration in Unisphere. More vDisks can be added when additional storage for user data is
needed. Up to a maximum of 16 user vDisks are supported.

From Hosts and Clusters, expand the ESXi server where the UnityVSA was deployed, right click the
UnityVSA and select Edit Settings from the pull down menu.

From the bottom of the Virtual Hardware tab, use the New Device menu to select New Hard Disk, and
enter the appropriate settings on the screen.

If more than 12 user vDisks are added, another SCSI controller will be automatically added to
accommodate the additional disks.

Ensure the SCSI controller type is "paravirtual". Other types are not supported and can cause boot
problems and unrecognizable disks. vSphere can be used to view and change the controller type if
needed.

The maximum size supported for the new hard disk depends on the installed license and the free
capacity of the underlying provisioned storage (in the example, the maximum capacity of the datastore
used is 2.10 TB). Be sure to select the ‘Thick provision eager zeroed’ radio button under the ‘Disk
Provisioning’ parameters; this option is recommended for best performance.

To add more disks select New Hard Disk from the New Device drop-down list and click Add.

To finalize the Edit Settings and reconfigure the virtual machine, click the OK button.

The new disks will be added to the virtual appliance.

This demonstration shows how to create vDisks on a UnityVSA.

This lab covers the initialization of a UnityVSA system. You will install the Connection Utility, then use it to
discover an uninitialized UnityVSA system and assign it a management IP address. Then you will run its
Initial Configuration Wizard.

This module covered the procedures for installing and initializing a Unity physical storage system and
deploying a UnityVSA system. Students should be able to identify the steps and utilities needed for
successfully racking, cabling, powering up and initializing a Unity storage system.

This module focuses on the different interfaces used to monitor and configure a Unity storage system. The
module shows how default access accounts and roles can be attributed to a custom user to access the
system. It highlights basic settings that can be configured for management purposes and how to configure
Unisphere to access EMC Support. Finally the module shows how to configure event notifications and
alerts that can be used to identify system issues.

This lesson introduces the different user interfaces used for administration and management of Unity
storage systems. This lesson also shows how user authentication is used to protect the access to Unity
hardware, software, and specific product features.

Administration and management of Unity systems is performed with the Unisphere graphical user
interface (GUI), and the command line interface (CLI). Unity Systems can also be managed via the
Unisphere Management REST API.

Unisphere GUI is a web-based user interface that is hosted on the Unity system, and enables the user to
securely manage Unity storage systems locally on the same LAN or remotely over the Internet, using a
common browser.

Unisphere allows the configuration, administration, and monitoring of a single Unity storage system from
one interface. Unity systems cannot be configured to be part of a Storage Domain; Unisphere provides an
overall view of what is happening in the environment plus an intuitive and easy way to manage the Unity
storage array.

Unisphere can be launched by simply entering the IP address of the Unity management port in the URL
address field of a supported web browser.

Access to Unity systems is controlled by the successful authentication of a valid user account in
Unisphere. Management accounts are used to perform most of the Unity administrative tasks.

Unity provides two administrative user authentication scopes. Users can log in with credentials maintained
through either local user accounts or domain-mapped user accounts. There is no Global authentication
scope as the concept of storage domain does not exist for the Unity systems.

Local user accounts can be created and managed through the User Management Settings page in the
Unisphere GUI. These user accounts are associated with distinct roles and provide username and
password authentication only for the system on which they were created. They do not allow management
of multiple systems unless identical credentials are created on each system.

The Lightweight Directory Access Protocol (LDAP) is an application protocol for querying directory
services running on TCP/IP networks. LDAP provides central management for network authorization
operations by helping to centralize user and group management across the network. With this method, the
accounts used to access the system are domain-mapped user accounts (members of an LDAP domain
group). It uses the username and password specified on an LDAP domain server. Integrating the system
into an existing LDAP environment provides a way to control user and user group access to the system
through Unisphere CLI or Unisphere.

The user authentication and system management operations are performed over the network using
industry standard protocols such as Secure Socket Layer (SSL) and Secure Shell (SSH). GUI
administration can be performed using a supported network browser or CLI administration is done with the
use of the Unisphere CLI client software.

For deployments where the Unity will be managed by more than one administrator, multiple unique
administrative accounts are allowed. Different administrative roles can be defined for those accounts to
distribute administrative tasks between users.

Unity storage systems come with factory default management and service user accounts to be used when
initially accessing and configuring the storage system. These accounts have access to both Unisphere
GUI and Unisphere CLI interfaces but have distinct privileges of operations they can perform.

• Management accounts: Administrator privileges for resetting default passwords, configuring system
settings, creating user accounts, and allocating storage.

• Service account: Performs specialized service functions such as collecting system service information,
restarting the management software, and resetting the system to factory defaults.

During the initial configuration process, you are required to change the passwords for the default admin
and service accounts.

The Unisphere graphical user interface is built on HTML5 technology with support on a wide range of
browsers, such as Google Chrome v33 or later, Internet Explorer v10 or later, Mozilla Firefox v28 or later
and Apple Safari v6 or later.

The Unisphere interface has three main areas used for navigation and visualization of content:
the navigation pane, the main page, and the sub-menus.

The navigation pane on the left holds the Unisphere options for provisioning storage, providing host
access, protecting data, and monitoring system operation.

The main page is where the pertinent information about options from the navigation pane and a particular
submenu is displayed. The page also shows the available actions that can be performed for the selected
object. The selectable items will vary depending on the selection. It could be information retrieved from the
system, or configuration options for storage provisioning, host access, and data protection. In this example
the page shows the System View content.

A sub-menu with different tabs (links) on the top of the main page provides additional options for the
selected item from the navigation pane.

There is also a top menu in the right corner with links for system alarms, job notifications, the help menu,
and the configuration of Unisphere preferences and global settings.

The Unisphere main dashboard provides a quick view of the storage system status, including health
information. A user can create their own customized dashboards and save them. The customized
dashboards can be modified and deleted.

View blocks can be added to a dashboard. These view blocks can be used to view a summary of system
storage usage, monitor system alerts, view health of storage and system resources, and provide graphs of
system performance at a high-level.

To add view blocks to the selected dashboard, the user must open the sub-menu at the top, choose
Customize, then select the desired block and click the Add View Block button.

This lab provides a tour of the Unisphere GUI interface to help you become more familiar with the storage
system and the management interface.

The Unisphere CLI (uemcli) enables the user to script some of the most commonly performed tasks
on a Unity system.

Unisphere CLI enables you to run commands on a Unity storage system from a host with the Unisphere
CLI client installed.

The Unisphere CLI client can be downloaded from the support web site and installed on a Microsoft
Windows or UNIX/Linux computer.

Unisphere CLI supports provisioning and management of network block and file-based storage. The
application is intended for advanced users who want to use commands in scripts for automating routine
tasks. The routine tasks include:

• Configuring and monitoring the system.

• Managing users.

• Provisioning storage.

• Protecting data.

• Controlling host access to storage.

• Storage types

The example command on this slide displays the general settings for a physical system.

In the example, the uemcli command accesses the Unity system through the management port at IP
address 10.126.91.11 and logs into the system as the local user admin. The system certificate is displayed
and the user has the choice to accept or reject it. If the certificate is accepted, the command retrieves the
array's general settings and outputs the details on the screen.
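The general shape of that call is sketched below; the address and credentials are the placeholders from the slide, and a real script would avoid embedding the password on the command line.

```python
# Assemble the uemcli invocation described on the slide; the address and
# credentials are placeholders to be substituted per deployment.
argv = [
    "uemcli",
    "-d", "10.126.91.11",    # destination: the system's management IP
    "-u", "admin",           # local user account
    "-p", "Password123#",    # password (placeholder)
    "/sys/general", "show",  # object path and action
]
print(" ".join(argv))
```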

REST API is a set of resources, operations, and attributes that interact with Unisphere management
functionality, allowing the administrator to perform automated routines on the array using web browsers
and programming and scripting languages.

For more information, refer to the latest Unisphere Management REST API Programmer’s Guide,
available on the EMC Online Support web site (http://support.emc.com).

Unity systems can be monitored through the Unisphere Central and CloudIQ interfaces.

Unisphere Central, formerly known as Unisphere Remote, consists of a Unisphere Central server running
on an ESX/ESXi server (standalone or through vCenter) that monitors supported storage arrays (Unity,
VNX, VNXe, or CX4 systems) and can be accessed remotely by a client host.

The Unisphere Central server is deployed as a virtual machine (VM) built from an OVF template in a
VMware environment. The Unisphere Central OVF template can be downloaded from EMC Online
Support. When deploying the OVF template, an IP address for Unisphere Central can be assigned
within vCenter or in the console of the VM on an ESX/ESXi host.

The Unisphere Central server obtains aggregated status, alerts, host details, performance and capacity
metrics, and storage usage information from the Unity, VNX, VNXe and CX4 systems in the environment.

Unisphere Central is a network application that remotely monitors the status, activity, and resources of
multiple supported EMC storage systems from a central location. The application allows administrators to
take a look at their storage environment from a single interface and rapidly access the systems that need
attention or maintenance.

CloudIQ is an EMC-hosted service that uses data collected by ESRS to let users monitor storage
systems and perform basic service actions. The CloudIQ interface is accessible via web browser
at any time and from any location. CloudIQ provides dashboard views of all connected systems, displaying
key information such as performance and capacity trending and predictions.

CloudIQ is a cloud-based Software as a Service (SaaS) offering for monitoring and servicing Unity
systems, allowing customers to access near real-time analytics from anywhere at any time. CloudIQ
functionality is embedded into the Unity OE code and is free of charge, requiring no license.

Unity collects several metrics at predetermined time intervals and sends the information to ESRS
(ESRS must be configured and functional for CloudIQ to work). ESRS then sends the data to the
cloud, where administrators can access it through any supported browser.

System metrics include: Alerts, Performance, Capacity, Configuration, and Data Collects.

ESRS, or EMC Secure Remote Services, is covered in more detail in the Support configuration lesson.

This table describes the available dashboard widgets for Unity OE v4.1 and their functions.

The Dashboard Health Score widget provides a view of the overall health score of a single system or all
monitored systems.

The Alerts trending widget aggregates alerts across all CloudIQ monitored systems. Users can drill down
and select items in a specific time frame or select a particular severity level. Details can be found on
performance, capacity and configuration, and include alerts and pool aggregate views.

The Pools widget (Running out of space) shows a summary of how many pools will reach capacity in a
given period of time (week, month, or quarter). This option, along with the Pools (Most available space)
option, allows storage administrators to proactively move files to a pool with more space, preventing a
pool from filling to capacity.

The slide displays the landing page when first logging on to CloudIQ. The four dashboard widgets are
visible and provide storage administrators with a quick check of the overall system health. For example,
the Health Score indicates there are two systems, one of which has issues. A quick check of the score
however reveals the issues are not severe since the score is displayed with a green circle. Administrators
can drill down further as needed by clicking the text or number for additional information.

The Alerts Trend window allows administrators to view Alerts in a given period of time by selecting the
time frame in which the display shows. In the screen, the trend is displayed for 1 day; select 1 month to
display the trends over that period of time.

The two Pools windows will show the capacity of each pool and if a particular pool is running out of space.
We can clearly see in the example that pool2 has the most capacity of all monitored pools.

CloudIQ has the capability to aggregate several Unity arrays and display the status in a single window.
Each monitored system is given a health score and can be identified by location and model.

Your score can help you spot where your most severe health issues are based on five core factors:
• Configuration
• System Health
• Performance
• Capacity
• Data Protection

The area with the highest risk to your system's health will hurt your score until actions are taken towards
remediation.

Storage administrators can also view an aggregated list of pools, LUNs, file systems, and hosts for all
monitored systems.

The CloudIQ Health Score engine breaks down into five categories each of which is monitored and
contributes to the overall health of the system. These categories are not customer configurable but built
into the CloudIQ software. As the product matures additional categories will be made available.

For each category, CloudIQ runs a check against a known set of rules and makes a determination if a
particular resource has issues. The health score engine creates a number of impact points and those
impact points contribute to the overall health score of the system.

Categories:

• Configuration: Hosts - non-HA, Drive Issues: Faults, weighted by use (Hot Spare, RAID 6, RAID 5)

• System Health: Components with issues, OE/Firmware Compliance issues

• Performance: CPU Utilization, SP Balance

• Capacity: Pools reaching full capacity

• Data Protection: RPOs not being met, last snap not taken

The proactive health score feature displays the health of single or aggregated systems by providing both
a numeric score and a color associated with that score.

Systems (not categories) are given a score with 100 being the top score and 0 being the lowest. CloudIQ
services running in the background collect data on each of the five categories and determine its impact
point on the system based on a set of rules.

As shown in the slide, system VV-D9094-spa has a score of 60 (occupying 60% of the circle). Its color is
red, indicating the system needs attention. The affected resource, the Performance icon, also shows a
score of -40 and is likewise red. Total scores are based on the number of impact points across the five
categories; the calculation here is 100 - 40 = 60.

For system BC-D1165-spa, the health score is 70, shown in yellow. Note that two categories have
impact points: the Configuration icon at -20 and the Performance icon at -30. The health score
calculation only takes the -30 (Performance) into account, since it has the highest number of impact
points (meaning the customer should address it first). The total is not the cumulative sum of points
(i.e., -50).

All other icons show a status of green, which indicates no issues.
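The scoring rule described above (take the single worst category impact rather than the sum) can be sketched as follows; the function is an illustration of the arithmetic, not CloudIQ's actual implementation.

```python
def health_score(impact_points):
    """Overall score = 100 plus the single worst (most negative) category
    impact; impacts are not summed. An empty mapping means a perfect 100."""
    worst = min(impact_points.values(), default=0)
    return 100 + worst

# System VV-D9094-spa: only Performance has impact points.
print(health_score({"Performance": -40}))                        # 60
# System BC-D1165-spa: Performance (-30) outweighs Configuration (-20).
print(health_score({"Configuration": -20, "Performance": -30}))  # 70
```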

The System summary provides an overview of key information about the selected system. The information
is divided into tabs. In the example, system OB-H1210 is selected. Administrators are given a quick view
of the health of the system and can review the properties as well. Selecting an object will drill down to
display additional information. For example, clicking the health score shown here as a green 99, will
display health issue details.

Users can also select one of the sub menus, such as Pools, Storage, Drives, or Hosts.

Selecting the object displays additional details. In the example, Storage has been selected and details
about the types of storage (file system, LUNs) are visible.

System summary information is divided into the following tabs:


• Properties: Displays configuration data, health information, as well as tabs to view additional
information about pools, storage objects, drives, and hosts on the system.
• Capacity: Displays the amount of capacity used and free on the system along with a list of pools on
the system and the utilization percentage and time to full for each pool, the amount of capacity used
by each drive type installed on the system, and the amount of capacity used by the storage objects
on the system.
• Performance: Displays graphs for utilization, IOPS, bandwidth, and latency for Storage Processor
A (SP A) and Storage Processor B (SP B) on the system.

Another option available from the Systems view is Capacity. Selecting Capacity displays the amount of
capacity used and free on the system along with a list of pools on the system and the utilization
percentage and time to full for each pool, the amount of capacity used by each drive type installed on the
system, and the amount of capacity used by the storage objects on the system.

In the example there are three pools shown in blue text; selecting one of the pools will allow users to view
the specifics for the pools.
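CloudIQ computes its time-to-full predictions internally. Purely as an illustration of the idea, a naive linear projection over evenly spaced capacity samples might look like this; the pool sizes and growth rate below are made-up example values.

```python
def days_to_full(total_gb, used_samples_gb, interval_days=1.0):
    """Naively project when a pool fills, from evenly spaced 'used capacity'
    samples. Illustration only; CloudIQ's actual model is internal."""
    if len(used_samples_gb) < 2:
        return None                                  # not enough history
    growth = (used_samples_gb[-1] - used_samples_gb[0]) / (
        (len(used_samples_gb) - 1) * interval_days)  # GB per day
    if growth <= 0:
        return None                                  # flat or shrinking
    return (total_gb - used_samples_gb[-1]) / growth

# 500 GB pool growing 10 GB/day, currently at 440 GB used
print(days_to_full(500, [400, 410, 420, 430, 440]))  # 6.0
```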

To view a list of available configured pools on the system, select the Pools option under the Storage
menu. From the list select the pool you want to view. In the example, pool2 has been selected and by
default, the Properties of that pool are displayed. Users also have the choice of selecting Capacity as well.

The Properties window displays detailed information about an individual pool. Users can determine the
status of the pool and the system the pool resides on, as well as the state of FAST Cache or FAST VP.

To view additional details on pool objects, select either the Storage or Drives option. The Storage option is
selected in the example. We can see the name, type of object and additional details on the total size,
amount of space used and allocated to the object and in this case, since a file system is shown, the NAS
server on which the file resides.

Capacity and Performance metrics can also be selected for the pool.

Capacity displays the amount of capacity used and free in the pool, as well as the time to full, the amount
of capacity used by the storage objects on the system, and historical capacity trends. The section below
provides additional details.

Performance will allow users to view the top performance storage objects and displays graphs on IOPS,
bandwidth, and backend IOPS.

This demo covers a quick tour of the CloudIQ interface and the Unity system information it provides.

Click the Launch button to view the video.

This lesson covers the configuration of some basic system settings in Unisphere for the administration
and management of a Unity storage system.

In Unisphere, system global settings can be configured and managed from the Settings configuration
window which is invoked by clicking on its icon from the top menu.

In the Settings configuration window it is possible to monitor installed licenses, manage users that can
access the system, configure the network environment, allow monitoring of the system by Unisphere
Central and the logging of system events to a remote log server, start and pause FAST suite feature
operations, register support credentials and enable ESRS, create IP routes, enable CHAP authentication
for iSCSI operations, and configure email and SNMP alerts.

The first screen of the Settings window allows the management of licenses for the available features. A
product license can be selected from the list to have its description displayed. To unlock features, license
keys must be installed.

User accounts can be created and managed through the Unisphere User Management Settings page.
Multiple unique administrative accounts are allowed and different administrative roles can be defined for
those accounts to distribute administrative tasks between users.

To create a user account, open the Settings window, and then select Users and
Groups > User Management.

Select the Add icon.

Then select the type of user or group to add: a local user or an LDAP user or group. When users log in to
Unisphere with an LDAP account, they must specify their username in the following format:
domain/username.

Specify the account information.

Define the role for the new user (explained on the next slide).

Then verify the information on the summary page and click Finish to commit the changes.

Unisphere accounts combine a unique user name and password with a specific role for each identity. The
specified role determines the types of actions that the user can perform after logging in.
The user roles are: Operator, Storage Administrator, Administrator, and VM Administrator. Shown here is
a table providing a description of the user roles.

In Unisphere, LDAP settings can be configured via the Directory Services link under the Users and
Groups section on the Settings page.

Specify the LDAP server credentials:

Enter the domain name of the LDAP authentication server.

Type the base distinguished name (DN) of the root of the LDAP directory tree. A DN is a sequence of
relative distinguished names connected by commas, which describes a fully qualified path to an entry.

Enter the network name or address of the LDAP server.

Type the password the system will use for authentication with the LDAP server.

If LDAP over SSL (LDAPS) protocol is used, then the default communication port is set to 636 and the
system will request that an LDAP trust certificate is uploaded to the storage system.

For secure communication between the two entities (storage system and LDAP server), one entity must
accept the certificate from the other. When the certificate of the LDAP server is accepted or validated for
the storage system, the certificate is placed in the storage system’s certificate store and is marked as
trusted. So the subsequent connections to the LDAP server are trusted by the storage system.

The user must follow the LDAP Server operating environment instructions to learn how to retrieve the
LDAP SSL certificate that will be imported.

Click the Advanced link to configure user and group settings, then click OK to save the changes. Then
click Apply to commit them. Login operations are then authenticated by the LDAP domain server.

Other network settings such as System Time Synchronization, DNS Servers, Unisphere Central, SP
management ports, and Remote Logging, can be configured from the Management Section.

Unity supports two methods for configuring system time: Manual settings or NTP synchronization.

The NTP synchronization method allows the Unity system to connect to a Network Time Protocol (NTP)
server in order to synchronize the system clock with other applications and services on the network. Some
applications will not operate correctly if the Unity system clock is not synchronized to the same source as
the host system clock.

Time synchronization is key for Microsoft Windows environments, when the Unity system must be time
synchronized to the same NTP servers as the systems accessing the Unity storage. In Microsoft Windows
environments, this is typically one or more of the Domain Controllers. This configuration is necessary to
connect a NAS server to the Active Directory, to allow SMB and multi-protocol access.

If the network does not have an NTP server, the administrator can set the Unity system time in the
following ways:

• Manually set system time: Enter a specific time for the Unity system in UTC format. Coordinated
Universal Time (UTC) is the primary time standard by which the world regulates clocks and time.

• Set system time to client time: Synchronize the Unity system time with the host where you are running
Unisphere (in UTC format).

In the System Time and NTP option under the Management section, select Enable NTP
synchronization. Then click Add to enter the IP address of an NTP server. Click the Add button in the
dialog box to close the window and add the NTP server.

There are Unity features that may require domain name resolution (For example: Unisphere alert
settings). The DNS Servers list can include multiple entries.

Select Obtain DNS server address automatically to have Unity automatically retrieve one or more IP
addresses for the DNS server if running Unity on a dynamic network that includes DHCP and DNS
servers.

If not running Unity on a dynamic network, or choosing to provide a static IP address for the DNS server
manually, select Configure DNS server address manually, then click on the Add button and enter the IP
address of a DNS server, and click Add.

CloudIQ provides a monitoring and administration interface with intelligent analytics about the Unity
system performance, capacity, and configuration.

CloudIQ also provides proactive serviceability that informs the user about issues before they occur and
provides the user with simple, guided remediation.

EMC Secure Remote Services (ESRS) must be enabled on the storage system to send data to CloudIQ.

After enabling ESRS on a system, users can enable CloudIQ.

• Open the Unisphere Settings window and then navigate to the Management section and select
Centralized Management.

• On the CloudIQ tab, Check the Send data to CloudIQ checkbox to enable the system to provide
status and alert information to CloudIQ.

Once CloudIQ is enabled, it is possible to disable ESRS without changing the CloudIQ setting.

Without ESRS, data is not collected and sent to CloudIQ, but if ESRS is re-enabled, the system
remembers the CloudIQ setting and immediately resumes sending data to CloudIQ.

Unisphere Central is a vApp that enables administrators to remotely monitor the status, activity, and
resources of multiple Unity systems that reside on a common network. The administrator must configure
Unity systems to communicate with Unisphere Central.
Open the Unisphere Settings window and then navigate to the Management section and select
Centralized Management.
In the Unisphere Central Configuration page, select the check box Configure this storage system
for Unisphere Central to enable it.
If the security policy on the Unisphere Central server was set to manual you only need to enter the IP
address of the Unisphere Central server in the Unisphere Central IP field.
However, if the security policy on the Unisphere Central server is set to Automatic, then you must retrieve
from the server the security information: the Certificate Hash that was automatically generated (8 to 256
characters) and the Challenge Phrase that was manually configured.
Select the Use additional security information from Unisphere Central checkbox.
Type the Unisphere Central Hash - the Certificate Hash that was configured in the Unisphere Central
server.
Type and confirm the Challenge Phrase - the 8-character passphrase configured on the Unisphere
Central server.
Click Apply to commit the changes to the configuration.
Note that Unisphere Central does not provide active management capability or real-time performance
views of the Unity systems in the environment. However, it does provide a means to "link-and-launch" to
the management interfaces of individual Unity systems in order to perform these tasks.

The Specify Network Configuration page of the Unisphere IPs section of the Settings window allows the
administrator to modify the host name and network addresses assigned to the storage system. The Unity
storage system supports both IPv4 and IPv6.

Expand the Management section of the Settings window, then choose the Unisphere IPs option.

In the Specify Network Configuration page enter or modify the information for the Unity network
configuration. Each IP version has radio buttons to disable the configuration, and select the dynamic or
static configuration.

If you are running the Unity system on a dynamic network, the management IP address can be assigned
automatically, by selecting the proper radio button.

For a static IP address select the radio button to enable it and enter the IP address, the subnet mask and
gateway in the proper fields.

Click Apply to commit the changes.

Note: If ESRS support is enabled for the Unity system, EMC recommends assigning a static IP
address to Unity so it can be managed by ESRS.

The Remote Logging setting enables the Unity system to log user/audit messages to a remote host.

Before configuring remote logging, a remote host running syslog must be configured to receive logging
messages from the storage system. In many scenarios, a root or administrator account on the receiving
computer can configure the remote syslog server to receive log information from the Unity system by
editing the syslog-ng.conf file on the remote computer. For more information on setting up and running a
remote syslog server, refer to the documentation for the operating system running on the remote
computer.

To configure Remote Logging you must check the Enable logging to a remote host checkbox. Then you
must specify the network address of the host that will receive the log data. The remote host must be
accessible from the Unity system, and security for the log information must be provided through the
network access controls or the system security at the remote host.

Select the component that generates the log messages you want to record.

• Kernel Messages - Messages generated by the operating system kernel. These messages are
specified with the facility code 0 (keyword kern)

• User-Level Messages - This is the default option. Messages generated by random user processes.
These messages are specified with the facility code 1 (keyword user)

• Messages Generated Internally by syslogd - Messages generated internally by the system logging
utility (syslogd). These messages are specified with the facility code 5 (keyword syslog)

Then select the protocol used to transfer log information: UDP or TCP. By default, the system transfers log
information using the UDP protocol on port 514. Click Apply to commit the changes.
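As an illustration only, a minimal syslog-ng configuration for receiving user-level messages from the Unity system over UDP port 514 might resemble the following. The statement names and file path are examples, and the exact driver syntax depends on your syslog-ng version; consult its documentation for the authoritative form.

```
# Example syslog-ng snippet for a remote log host receiving Unity messages.
source s_unity      { udp(ip(0.0.0.0) port(514)); };   # listen on UDP 514
filter f_user       { facility(user); };               # facility code 1 (user-level)
destination d_unity { file("/var/log/unity.log"); };   # example log path
log { source(s_unity); filter(f_user); destination(d_unity); };
```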

This lesson covers the Support configuration for the Unity system in Unisphere, including Proxy server
settings, registering the EMC Support credentials, and enabling and configuring EMC Secure Remote
Services.

The Support Configuration section of the Settings window allows you to configure a Proxy Server for
connections to EMC Support servers, enter the EMC Support credentials and contact information, and
configure EMC Secure Remote Services (ESRS).

Proxy server configuration allows the exchange of service information for Unity systems that cannot
connect to the internet directly. Once configured, the storage administrator will perform the following
service tasks using the proxy server connection:

- Configure and save support credentials

- Configure ESRS

- Display the support contract status for the storage system

- Receive notifications about support contract expiration, technical advisories for known issues, software
and firmware upgrade availability, and Language pack update availability.

To configure the Proxy Server, open the Settings page and expand the Support Configuration section.
Then select the Proxy Server option and enter the Proxy Server information in the appropriate fields.

The secure (SOCKS) protocol should be selected for IT environments where HTTP is not allowed. This
option uses port 1080 by default and does not support the delivery of notifications for technical advisories,
software and firmware upgrades.

The non-secure (HTTP) protocol supports all service tasks including upgrade notifications. This option
uses port 3128 by default.

The user must enter the IP address of the Proxy Server and the credentials (user name and password) if
required. The SOCKS protocol requires user authentication.

Support credentials are used to retrieve the customer current support contract information and keep the
information up-to-date automatically, providing access to all the options to which the client is entitled on
the Unisphere Support page.

In addition, support credentials are required to configure EMC Secure Remote Services (ESRS), which
provides EMC Support direct access to the storage system (via HTTPS or SSH) to perform
troubleshooting on the Unity system and resolve issues more quickly.

Enter the username and password on the proper fields and click Apply to commit the changes. The
credentials entered here must be associated with a support account.

EMC Secure Remote Services (ESRS) is a secure, bi-directional connection between EMC products in
end user environments and the EMC Support infrastructure. It enables EMC to remotely monitor
configured systems by receiving system-generated alerts and to connect into the customer environment
for remote diagnosis and repair activities. It also provides a high-bandwidth connection for large file
transfers, enables proactive Service Request (SR) generation and usage license reporting, and operates
on a 24x7 basis.

ESRS options available with the Unity release include an embedded version that runs on the Unity
storage system, and the ESRS Virtual Edition (VE), a Gateway version installed as an off-array
Virtual Machine (VM). ESRS can be managed with Unisphere, UEMCLI, and REST API.

Software Licensing Central is an EMC-wide service that allows EMC products to send electronic licensing
and usage information to EMC via ESRS VE. This information is visible to both EMC and end users. Unity
systems automatically send the information on licensed features once per week. This feature is
enabled automatically when remote support is enabled.

The two ESRS options are compared here. The Embedded version which leverages an on-array (Unity
storage system) Docker container is also referred to as Integrated ESRS. The Gateway version is referred
to as Centralized ESRS. Please consult the EMC Unity Family Secure Remote Services Requirements
and Configuration document for the complete details.

EMC Secure Remote Services (ESRS) is a remote monitoring feature that gives authorized EMC
personnel remote access to supported EMC storage systems via a secure and encrypted tunnel. The
secure tunnel that ESRS establishes between the systems and the EMC network can be used to transfer
files out to the Unity system or back to EMC.

The Configure button on the EMC Secure Remote Services page is only enabled, and will launch the
Configure ESRS wizard, if valid EMC Support credentials and contact information were entered on the
respective pages of the Support Configuration section.

ESRS connection can be either Integrated or Centralized.

Integrated ESRS runs on the Unity storage system and allows only this system to communicate with EMC.
The ESRS software is embedded into the Unity operating environment (OE) of the Unity physical system
as a managed service. The Unity OE is responsible for persisting the configuration and the certificates
needed for ESRS to work.

Centralized ESRS runs on a gateway Virtual Machine (VM). The ESRS Virtual Edition (VE) software is
installed as an off-array VM, which allows multiple storage systems to communicate with EMC. UnityVSA
systems can only be managed by a Centralized ESRS.

Unisphere Command Line Interface, or UEMCLI, can also be used to manage ESRS configurations.

Here is the syntax of the uemcli command for changing the integrated ESRS configuration on a physical
system. The command in the example enables ESRS on the Unity system with site ID 234.

Here is the syntax of the uemcli command for enabling centralized ESRS and connecting the Unity system
to the Centralized ESRS server.

Please refer to the EMC Unity Family Unisphere Command Line Interface User Guide for the complete
details on the available ESRS commands.

This lesson covers the Unisphere interfaces used for monitoring the system events and how to set and
manage different alert notification methods.

Alerts are usually events that require attention from the system administrator. Some alerts indicate that
there is a problem with the Unity system. For example, you might receive an alert telling you that a disk
has faulted, or that the Unity system is running out of space.

Alerts are registered to the System Alerts page in Unisphere. The system alerts page can be accessed
through the link on the top menu bar, from the option under the Events section of the navigation pane
and from the notification icons of the view block added to the dashboard.

The view block on the dashboard shows these alerts categorized by critical, error and warning which will
be better explained on the next slide. Clicking on one of the icons will open the Alerts page showing the
records filtered by the chosen severity level.

System alerts with their severity levels are recorded on the System Alerts page. The Dashboard of the
Unisphere interface shows an icon with the number of alerts for each recorded severity category. The link
on these icons will open the Alerts page filtered by the selected severity level.

Shown here is a table providing an explanation about the alert severity levels from least to most severe.
Logging levels are not configurable.

To view detailed information on a system alert:

- Select the alert from the list of records of the Alerts page.

Details about the selected alert record will be displayed in the right pane. The information will include time
the event was logged, severity level, alert message, description of the event, Acknowledgement flag, the
component that was affected by the event, and the current status of the component.

Unisphere can also be configured to send the system administrator alert notifications by email. Email
alerts are used only for internal communication; no Service Requests are created based on email alerts.
Only the configuration of ESRS provides interaction with EMC Support.

In Unisphere, open the Settings configuration window and select the Alerts section. On the Specify Email
Alerts and SMTP Configuration page, click the Add button. The Add a New Email window opens -
enter the email address that will receive the notification messages and click OK to save it.

Select from the drop-down list the severity level of the alert notifications to be sent.

Then type the IP address of the Simple Mail Transport Protocol (SMTP) server required to send emails.

Click the Send Test Email to verify that the SMTP server and destination email addresses are valid.

Then click Apply to commit the changes.
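The wizard fields above (destination addresses, severity level, SMTP server) can be gathered into one configuration object and sanity-checked before applying. A hedged sketch — the dictionary keys and the severity set are illustrative assumptions, not Unisphere's actual attribute names:

```python
# Assemble and sanity-check the email-alert settings described above.
# Field names and the severity set are placeholders, not Unity's attributes.
VALID_SEVERITIES = {"critical", "error", "warning", "notice", "info"}

def build_email_alert_config(recipients, severity, smtp_server):
    """Collect the Specify Email Alerts and SMTP Configuration inputs."""
    if not recipients:
        raise ValueError("at least one destination email is required")
    for addr in recipients:
        if "@" not in addr:
            raise ValueError(f"invalid email address: {addr}")
    if severity not in VALID_SEVERITIES:
        raise ValueError(f"unknown severity level: {severity}")
    return {
        "emailRecipients": list(recipients),
        "alertSeverity": severity,
        "smtpServer": smtp_server,
    }

cfg = build_email_alert_config(["admin@example.com"], "error", "10.0.0.25")
print(cfg["smtpServer"])  # 10.0.0.25
```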

Alerts can also be sent via Simple Network Management Protocol (SNMP) message (known as a trap).
From the System Settings window, select SNMP from the Alerts section to configure the SNMP trap
destination targets and to indicate the severity level of the alert notifications to be sent.

On the Manage SNMP Alerts page click on the Add button to add the SNMP trap destination target. The
SNMP target window will open - enter the network address (host name or IP address).

Type the user name to authenticate, and select the authentication protocol used by traps. You can only
specify the privacy protocol used to encode trap messages when you edit an existing destination. Click OK
to save the SNMP target – the new entry will be displayed in the list.

Select from the drop-down list the severity level of the alert notifications to be sent.

Click Send Test SNMP Trap to verify that the SNMP configuration is valid.

When done click on the Apply button to commit the changes.

This module covered the different interfaces used to configure and monitor a Unity storage system, the
default Unity system access accounts and roles that can be attributed to a custom user to access the
system, the basic settings that can be configured for management purposes, the Unisphere
configurations for EMC Support, and the configurable event notifications and alerts used for identifying
system issues.

This module focuses on the configuration and management of Block and File storage resources for SAN
hosts and NAS clients. The module also discusses how to provision VMware VMFS, NFS, and VVol
datastores to ESXi hosts.

This lesson covers an overview of the storage resources that are provided by Unity: pools, Block storage,
File storage, and VMware datastores. The lesson also describes the characteristics of the homogeneous
and heterogeneous pools and how to manage them in Unisphere.

Unity provides storage resources suited to the needs of specific applications, host operating systems, and
user requirements. These storage resources are categorized as storage pools, Block storage (LUNs or
Consistency Groups), File storage (NAS servers, file systems, and shares), and VMware datastores
(VMFS, NFS, or VVol datastores).

LUNs and Consistency Groups provide generic block-level storage to hosts and applications that use the
Fibre Channel (FC) or iSCSI protocol to access storage in the form of virtual disks. LUN (Logical Unit) is a
single element of storage while Consistency Group is a container with one or more LUNs.

File Systems and shares provide network access to NAS clients in Windows and Linux/UNIX
environments. Windows environments use the SMB/CIFS protocol for file sharing, Microsoft Active
Directory for authentication, and Windows directory access for folder permissions. Linux/UNIX
environments use the NFS protocol for file sharing and POSIX access control lists for folder permissions.

VMware datastores provide storage for VMware virtual machines through datastores that are accessible
over the FC or iSCSI protocols (VMFS) or the NFS protocol.

Another modality of supported VMware datastores are the VVol (Block) and VVol (File) datastores. These
storage containers store the virtual volumes (VVols), which are VMware objects that correspond to a
Virtual Machine (VM) disk, its snapshots, and its clones. VVol (File) datastores use NAS protocol endpoints
and VVol (Block) datastores use SCSI protocol endpoints for I/O communication from the host to the
storage system. The protocol endpoints provide access points for ESXi host communication with the
storage system.

Unity storage is provisioned from storage pools. A Pool is a collection of disks that are dedicated to create
LUNs, Consistency Groups, File Systems, and VMware datastores.

Storage pools provide optimized storage for a particular set of applications or conditions. The storage pool
configuration defines the types and capacities of the disks in the pool. A user can define the RAID
configuration (RAID types and stripe widths) when selecting a tier to build a storage pool.

Pools can be heterogeneous (made up of more than one type of drive) or homogeneous (composed of
only one type of drive).

In a homogeneous pool, only one disk type (flash, SAS, or NL-SAS) is selected during pool creation.

If the FAST VP license is installed, and there are multiple disk types on the system, you can define
multiple tiers for a storage pool. There can be a maximum of three disk types in a heterogeneous pool.
Each tier can be associated with a different RAID type. Flash, SAS, and NL-SAS disks provide tiers of
Extreme Performance, Performance, and Capacity, on Unity systems. However, observe that SAS Flash
3 drives cannot be part of a heterogeneous pool.
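The composition rules just stated — one to three disk types per pool, SAS Flash 3 barred from mixed pools, each disk type mapping to a tier — can be sketched as a small validation routine. This is illustrative only, not the array's real validation logic, and the disk-type strings are made up:

```python
# Illustrative check of the pool-composition rules described above.
def tier_for(disk_type):
    """Map a disk type to its storage tier (per the lecture's tiering)."""
    if disk_type.startswith("sas-flash") or disk_type == "flash":
        return "Extreme Performance"
    return {"sas": "Performance", "nl-sas": "Capacity"}[disk_type]

def validate_pool(disk_types):
    """Reject pool compositions the lecture says are not allowed."""
    types = set(disk_types)
    if not types:
        raise ValueError("a pool needs at least one disk type")
    if len(types) > 3:
        raise ValueError("at most three disk types per heterogeneous pool")
    if len(types) > 1 and "sas-flash-3" in types:
        raise ValueError("SAS Flash 3 drives cannot join a heterogeneous pool")
    return sorted({tier_for(t) for t in types})

print(validate_pool(["sas", "nl-sas"]))  # ['Capacity', 'Performance']
```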

To manage a storage pool, select Pools from the Storage section of Unisphere.
From the Pools page it is possible to create a new pool, expand existing pools, view pool properties and
modify some settings, and delete an existing pool.
The Storage Pools page shows the list of created pools with their allocated capacity, utilization details,
and free space.
Details about a pool are displayed on the right-pane whenever a pool is selected.

To create new storage pools, select Pools under the Storage section on the navigation pane. Then click on
the “add sign” link in the Pools page to launch the Create Pool wizard.

In the wizard window, enter the pool name, and optionally the pool description. Then click Next.

The wizard will display the available storage tiers. The user can select the tier and change the RAID
configuration for the selected tier, and choose whether the pool will use FAST Cache.
In the next step, the user can select the number of drives from the selected tier to add to the pool.
The user can also create and associate a Capability Profile with the pool being created. A capability profile is
a set of storage capabilities for a VVol datastore. This feature is described when discussing the
management of VMware storage.
The pool configuration can be reviewed from the Summary page. The user can go back and change any
of the selections made for the pool, or click Finish to start the creation job.
The results page will show the status of the job. A green check mark with a 100% status indicates a
successful completion of the task.

The Pool properties page can be invoked by selecting the storage pool and clicking the edit icon. The
properties page is divided into five tabs: General, Disks, Usage, FAST VP, and Snapshot Settings.

On the General tab it is possible to change the pool name and description.

The Disks tab displays the characteristics of the disks in the pool.

The Capacity option on the Usage tab shows information about storage pool allocation and use including
(as shown here):
• Total amount of space allocated to existing storage resources and metadata. This value does not
include the space used for snapshots.
• Free Space, which is the amount of pool space that is available for provisioning storage resources and
snapshots.
• Subscribed capacity, which is the percentage of the pool's total space requested by its associated
storage resources. When this value is over 100%, the pool is oversubscribed. In this case, the storage
pool can be expanded by adding drives to an existing tier or adding an additional tier.
• Alert threshold, which is the percentage of storage allocation at which Unisphere generates
notifications about the amount of space remaining in the pool. Drag the slider to set the value between
50% and 84%.
• Pool used capacity history
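The Usage-tab figures lend themselves to quick arithmetic: subscribed capacity above 100% flags oversubscription, and the alert-threshold slider is bounded to 50–84%. A minimal sketch of those two rules:

```python
def subscribed_percent(subscribed_gb, total_gb):
    """Percentage of the pool's total space requested by its resources."""
    return 100.0 * subscribed_gb / total_gb

def is_oversubscribed(subscribed_gb, total_gb):
    """Over 100% means thin resources request more than the pool holds."""
    return subscribed_percent(subscribed_gb, total_gb) > 100.0

def clamp_alert_threshold(percent):
    """The Unisphere slider only accepts values from 50% to 84%."""
    return max(50, min(84, percent))

print(subscribed_percent(150, 100))  # 150.0 -> pool is oversubscribed
print(clamp_alert_threshold(95))     # 84
```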

On the FAST VP tab (which appears if FAST VP is licensed), it is possible to view data relocation and tier
information for the pool.
On the Snapshot Settings tab, the user can review and optionally change the properties for snapshot
automatic deletion.

When a storage pool has been identified as in need of additional capacity, it can be expanded from the
Storage Pools page by selecting the pool, and clicking Expand Pool.

In the Select Storage Tiers step, select the storage tiers for the disks you want to add. If your system is
not licensed for FAST VP, you can only add disks to the existing tier.

If you are adding a new tier to the storage pool, you can select a different RAID configuration for the disks
in the tier. Click the Change link next to the tier name, select the new RAID type (if applicable), and
click OK. When the RAID configuration is complete, click Next.

In the Select Amount of Storage page, select the number and type of disks to add, and click Next.

Verify the information shown on the Summary page, and click Finish to expand the pool.

In this lab you will assign the storage tier levels to virtual disks presented to the UnityVSA system. From
that storage space you will then create several heterogeneous and homogeneous pools.

This lesson covers the management of Block-level storage resources using the Unisphere GUI interface.

Block storage resources provide hosts [CLICK] with access to general purpose block-level storage
through [CLICK] iSCSI or Fibre Channel (FC) connections.
[CLICK] With Block storage you can manage addressable partitions of block storage resources so that
host systems can mount and use these resources (LUNs) over FC or IP connections.
After a host connects to the LUN, it can use the LUN like a local storage drive.
Components of Block storage includes: LUNs and Consistency Groups.
A LUN represents a quantity of block storage allocated from a storage pool for a host. You can allocate a LUN
to more than one host if you coordinate the access via a set of clustered hosts.
A Consistency Group is an addressable instance of LUN storage that can contain one or more LUNs (up
to 50) and is associated with one or more FC or iSCSI hosts. Snapshots taken of a Consistency Group
apply to all LUNs associated with the group.

The Unity architecture provides hosts and applications with Block storage through network-based Internet
Small Computer System Interface (iSCSI) or Fibre Channel (FC) protocols. FC and iSCSI are based on a
network-standard client/server model with iSCSI or FC initiators (hosts) acting as storage clients and the
relevant targets acting as storage interfaces. Once a connection is established between the host and
interface, the host can request storage resources and services from the interface.

The iSCSI support allows Block storage access (LUNs, Consistency Groups, and VMware VMFS
datastores) using Initiator paths to each SP. Multiple iSCSI interfaces can be created on one Ethernet
port, and CHAP authentication can be optionally enabled for any host.

Unity systems support a 16 Gb/s Fibre Channel I/O module for block access. Fibre Channel (FC)
support provides the ability to share block storage resources over an FC storage area network (SAN).
Unity automatically creates FC interfaces when the I/O module is available to the Storage Processor (SP).

When you log on to Unisphere for the first time, the Initial Configuration Wizard includes a step that
enables you to add iSCSI interfaces.

To add or manage iSCSI interfaces at a later time, under Storage, select Block and then iSCSI Interfaces.

The iSCSI interfaces page shows the list of interfaces, the SP and Ethernet ports where they were
created, the network settings and their IQN (iSCSI Qualified Name). Observe the format of the IQNs on
the last column. The format is explained in the slide.

From this page it is possible to create, view and modify, and delete an iSCSI interface.
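The IQN format referenced above follows the iSCSI standard layout (RFC 3720): the literal `iqn.`, a year-month of domain registration, a reversed domain name, and an optional colon-separated unique string. A loose structural validator — the sample IQN below is made up for illustration:

```python
import re

# iqn.<yyyy-mm>.<reversed domain>[:<unique string>]  (RFC 3720 layout)
IQN_RE = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9][a-z0-9.\-]*(:.+)?$")

def is_valid_iqn(name):
    """Loose structural check of an iSCSI Qualified Name."""
    return IQN_RE.match(name.lower()) is not None

print(is_valid_iqn("iqn.1992-04.com.emc:cx.apm00123456789.a0"))  # True
print(is_valid_iqn("not-an-iqn"))                                # False
```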

To create new interfaces click on the “add sign” link [CLICK] and the Add iSCSI Network Interface
window will be launched.

Select the Ethernet port where the interface will be created, then enter the network address information:
• IP Address: You can specify an IPv4 or IPv6-based address.
• Subnet Mask or Prefix Length: IP address mask or prefix length that identifies the subnet where the
iSCSI target resides.
• Gateway: Enter the Gateway IP address associated with the iSCSI network interface.
[CLICK] The IQN Alias is the alias name associated with the IQN. The IQN and the IQN alias are
associated with the port and not the iSCSI interface.
Both the IQN and IQN alias are generated automatically.
The VLAN ID should be set only if the network switch port was configured to support VLAN tagging of
multiple VLAN IDs. Click the Edit button to enter a value between 1 and 4094 to be associated
with the iSCSI interface.
Then click on the OK button to commit the changes and create the new interfaces.
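The Add iSCSI Network Interface inputs can be modeled as a small builder that also enforces the valid 802.1Q VLAN tag range of 1–4094. The dictionary keys and port name are illustrative placeholders, not actual Unity attribute names:

```python
def build_iscsi_interface(port, ip_address, prefix_or_mask, gateway, vlan_id=None):
    """Collect the Add iSCSI Network Interface inputs; keys are illustrative."""
    iface = {
        "ethernetPort": port,
        "ipAddress": ip_address,
        "netmaskOrPrefix": prefix_or_mask,
        "gateway": gateway,
    }
    if vlan_id is not None:
        if not 1 <= vlan_id <= 4094:  # valid 802.1Q VLAN tag range
            raise ValueError("VLAN ID must be between 1 and 4094")
        iface["vlanId"] = vlan_id
    return iface

iface = build_iscsi_interface("spa_eth2", "192.168.1.50", "255.255.255.0",
                              "192.168.1.1", vlan_id=100)
print(iface["vlanId"])  # 100
```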

To view and modify the properties of an iSCSI interface click the edit icon. The user can change the
network settings for the interface and assign a VLAN.

Fibre Channel interfaces are created automatically on the Unity storage system.
The Fibre Channel interfaces can be displayed by selecting the Fibre Channel option under the Access
section of the Settings Configuration window.
The Fibre Channel Ports page shows details about I/O modules and ports. Each Fibre Channel initiator is
uniquely identified by its world wide name (WWN).
To display information about a particular Fibre Channel port, select it from the list and click the edit link.
The properties window shows details about the I/O module port. The user can view the speed at which
the port is operating and change it if desired.

To manage LUNs, select Block from the Storage section of Unisphere.
From the LUNs page it is possible to create a new LUN, view the LUN properties, modify some settings,
and delete an existing LUN.
The LUNs page shows the list of created LUNs with their size (in GB), allocated capacity, and the pool
they were built from.

To see the details about a LUN select the LUN [CLICK] from the list and the details about the LUN will be
displayed on the right-pane.

To create LUNs select the Block page under the Storage section in Unisphere. The LUNs page will be
displayed with a list of existing LUNs, showing their size and percentage of allocated capacity. You can
then launch the Create a LUN wizard by clicking on the “add sign” link.
Then follow the wizard steps:
• Select the number of LUNs to be created
• Provide the name and description for the LUN
• Select the storage pool that will be used to create the LUNs.
• Choose the Tiering Policy to be used. This setting will be used for data relocation on heterogeneous
pools.
• Define the size of the LUN.
• Observe that Thin Provisioning is enabled by default.
• The user can optionally configure Host I/O limits for the LUN. This feature will be discussed in the
Advanced Storage Features module.
• On the next steps of the wizard the user can associate the LUN with a SAN host configuration
previously created with a defined connectivity protocol and access level. This will be discussed in
the Storage Resources Access module.
• The user can optionally configure local data protection and remote data protection for the LUN. This
configuration will be covered in the Data Protection module.
• On the last step of the wizard the user can review the LUN to be created on the Summary page then
click Finish to start the creation job.

The results of the process will be displayed.
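The first few wizard inputs — LUN count, name, pool, size, and thin provisioning on by default — can be sketched as a request builder. The keys below are placeholders, not the Unity REST API's real attribute names, and the sequenced-naming scheme for multiple LUNs is an assumption for illustration:

```python
def build_lun_requests(name, pool_id, size_gb, count=1, thin=True):
    """One request per LUN; multiple LUNs get a name plus sequence number."""
    if count < 1:
        raise ValueError("must create at least one LUN")
    return [
        {
            "name": name if count == 1 else f"{name}-{i}",
            "pool": pool_id,
            "sizeBytes": size_gb * 1024**3,
            "isThinEnabled": thin,  # thin provisioning is the default
        }
        for i in range(1, count + 1)
    ]

reqs = build_lun_requests("app_lun", "pool_1", size_gb=100, count=3)
print([r["name"] for r in reqs])  # ['app_lun-1', 'app_lun-2', 'app_lun-3']
```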

To view and modify the properties of a LUN, click the edit icon. The General tab of the LUN properties page
depicts its utilization details and free space. The size of the LUN can also be expanded from this page.

The other tabs of the LUN properties window allow the user to configure and manage local and remote
protection, and advanced storage features. Advanced storage features will be discussed on another
module.

To manage Consistency Groups, select Block from the Storage section of Unisphere.
From the top menu select Consistency Groups page. From this page it is possible to create a new
group, view its properties, modify some settings, and delete an existing Consistency Group.
The Consistency Groups page shows the list of created groups with their size (in GB), allocated
capacity, and the number of pools used to build each group.
For a selected Consistency Group it is possible to add LUNs to it by creating new ones or by moving
existing LUNs to the group.

To see the details about a Consistency Group select it from the list and its details will be displayed on the
right-pane.

To create Consistency Groups select the Block page under the Storage Section in Unisphere.
Then select the Consistency Groups page from the top menu. The Consistency Groups page will
be displayed with a list of existing groups, showing their size and percentage of allocated capacity.
You can then launch the Create a Consistency Group wizard by clicking on the “add sign” link. [CLICK]
Then follow the wizard steps:
• Provide the name and description for the Consistency Group
• Click on the “add sign” link to open the Configure LUNs window.
• Define the number of LUNs and a base name. Each LUN name will be the combination of the base
name and a sequence number.
• Select the storage pool that must be used.
• Define the size of the LUN(s) to be created.
• If the selected pool is tiered then it is possible to choose the tiering policy.
• Host I/O limits can also be defined before saving the configuration.
• Provide access to the SAN host using a previously created host configuration (with a defined
connectivity protocol).
• Define the access level of the SAN host to the LUN(s).
• Review the LUN(s) to be created on the Summary page then click Finish

The results of the process will be displayed.

To view and modify the properties of a Consistency Group click the edit icon. The General tab of the
Consistency Group properties page depicts its utilization details and free space.

The LUNs tab shows the LUNs that are part of the Consistency Group. The properties of each LUN can
be viewed and a LUN can be removed from the group or moved to another pool.

The other tabs of the Consistency Group properties window allow the user to configure and manage local
and remote protection, and advanced storage features. Advanced storage features will be discussed on
another module.

In this lab you will create LUNs, and Consistency Groups from the heterogeneous pool created on the
previous lab.

This lesson covers the management of File-level storage resources using the Unisphere GUI interface.

Unity File storage is a set of storage resources that provide file-level storage over an IP network.

From Unity File, SMB and/or NFS shares are created, and provided to Windows, Linux, and UNIX clients
as a file-based storage resource. Shares within the file system draw from the total storage that is allocated
to the file system.

File storage support includes NDMP backup, virus protection, event log publishing, and file archiving to
cloud storage using CTA as the policy engine.

The Unity components that work together to provision file-level storage include a NAS server, file system,
and shares.

The Unity NAS server is a virtual file server that provides the file resources on the IP network, and to
which NAS clients connect. The NAS server is configured with IP interfaces and other settings used to
export shared directories on various file systems.

The Unity file system is a manageable "container" for file-based storage that is associated with a
specific quantity of storage, a particular file access protocol (SMB/CIFS or NFS), and one or more shares
through which network clients can access shared files or folders.

A Unity share is an exportable access point to file system storage that network clients can use for
file-based storage via the SMB/CIFS or NFS protocols.

NAS servers are software components used to transfer file data and provide the connection ports for
users, clients, and applications that access Unity file system storage. [CLICK] Communication is
performed via TCP/IP using the Unity Ethernet ports. [CLICK] The Ethernet ports can be configured with
multiple network interfaces.

NAS servers retrieve data from available disks over the SAS backend, and make it available over the
network via the SMB or NFS protocols.

[CLICK] Before you can provision file system storage over SMB or NFS, a VMware NFS datastore, or a
File VVol datastore, a NAS server that is appropriate for managing the storage type must be configured
and running on the system.

NAS servers can provide multi-protocol access for both UNIX/Linux and Windows clients at the same
time.

To manage NAS servers, select File from the Storage section of Unisphere.
From the NAS Servers page it is possible to create a new NAS server, view its properties, modify some
settings, and delete an existing NAS server.
The NAS Servers page shows the list of created NAS servers, the SP providing the Ethernet port for
communication, and the replication type, if any is configured.

To see the details about a NAS server select it from the list and its details will be displayed on the right-
pane.

To create a NAS server, click the “add sign” link. Then follow the Create a NAS Server wizard steps:

Enter a name for the NAS server, and select the storage pool that will be used to supply file storage, then
choose the Storage Processor (SP) where you want the server to run. It is also possible to select a Tenant
to associate with the NAS Server. The multi-tenancy feature support will be covered in the Advanced
Storage Features module.

In the next step, select the SP Ethernet port you want to use and specify the IP address, Subnet Mask,
and Gateway. If applicable select a VLAN ID to associate with the NAS server. VLAN ID should be
configured only if the switch port supports VLAN tagging. If you associate a tenant with the NAS server,
you must choose a VLAN ID.

In the Configure Sharing Protocols page choose whether the NAS server supports Windows shares
(SMB, CIFS), Linux/UNIX shares (NFS), or multi-protocol (SMB and NFS shares on the same file system).

If you configure the NAS server to support Windows shares, specify an SMB host name, a Windows
domain, and the user name and password of a Windows domain account with privileges to register the
SMB computer name in the domain.

If you configure the NAS server to support UNIX/Linux shares, the NFSv4 protocol can be enabled if
desired as well as the support to File VVols. Select Next to advance to the next configuration page.

The Unix Directory Service page is only available if UNIX/Linux shares or multi-protocol support are
configured in the NAS Server. A Unix Directory Service (NIS or LDAP) must be used.

The NAS server DNS can be enabled on the next page. For Windows shares enable the DNS service, add
at least one DNS server for the domain and enter its suffix. Click Next to advance to the next step.

Optionally configure remote replication for the NAS Server.

Review the NAS server configuration from the Summary page and click Finish to start the creation job.

To view and modify the properties of a NAS Server click the edit icon.

From the General tab of the properties window you can view the associated pool, SP, and supported
protocols. It is also possible to change the name of the NAS Server.

From the Network tab a user can view the properties of an associated network interface, add and delete
interfaces, change the preferred interface and view and define network routes. From this page it is also
possible to enable the Packet Reflect feature which will be discussed in the Advanced Storage Features
module.

The Naming Services tab allows the user to define the naming services to be used: DNS, LDAP, and/or
NIS.

The Sharing Protocols tab allows the user to manage settings for file system storage access for Windows
shares (SMB/CIFS), using the Active Directory or Standalone option, and for Linux/UNIX shares (NFS). The
user can also provide multi-protocol access to the file system if a Unix Directory Service is enabled in the
Naming Services tab.

The other tabs of the NAS Server properties window allow the user to enable NDMP Backup, DHSM
support, Event Publishing, Antivirus protection, Kerberos authentication, and remote protection.

You can manage the Network interfaces by selecting the Ethernet option from the Access section of the
Settings Configuration window.

From the Ethernet Ports page, settings such as link aggregation and link transmission can be verified
and changed.

To display information about a particular Ethernet port, select it from the list and click on the edit link. The
properties window shows details about the port, including the speed and MTU size. The user can change
both these fields if necessary.

The MTU has a default value of 1500 bytes. If you change the value, you must also change all
components of the network path (switch ports and host).

If you want to support jumbo frames, set the MTU size field to 9000 bytes. This setting is only appropriate
in network environments where all components support jumbo frames end-to-end. In virtualized
environments, jumbo frames should be configured within the virtual system, as well.
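The jumbo-frame caveat above reduces to an end-to-end check: every hop in the path (array port, switch ports, host NIC, and any virtual switch) must carry the same MTU, or large frames will be dropped or fragmented. Purely illustrative:

```python
def path_supports_jumbo(mtus, jumbo_mtu=9000):
    """True only when every hop carries the jumbo MTU end to end."""
    return all(m == jumbo_mtu for m in mtus)

# array port, switch port, host NIC
print(path_supports_jumbo([9000, 9000, 9000]))  # True
print(path_supports_jumbo([9000, 1500, 9000]))  # False: switch left at default
```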

The user can also create a link aggregation, add or remove the port to an existing link aggregation.

To manage a File System, select File from the Storage section.
From the File Systems page it is possible to create a new File System, view its properties, modify some
settings, and delete an existing File System.
The File Systems page shows the list of created File Systems with their size (in GB), allocated capacity,
the NAS server used to share them, and the pool they were built from.
To see the details about a File System select it from the list and the details about the File System will be
displayed on the right-pane.

To create a new file system with a share click on the “add” link from the File Systems page to launch the
Create a File System wizard.
You must follow the steps of the wizard to set the parameters for creating the File System:
• Select the NAS Server to associate with the file system. Prior to provisioning the file system, a NAS
server must be created, as explained earlier.
• Then select the protocols the file system will support. The available options will depend on the
selected NAS server.
• On the next step of the wizard, enter a name and optional description for the file system, then
advance to the next step.
• Select the storage pool to create the file system from and the size for the file system.
• Thin provisioning is enabled by default.
• The wizard also allows the definition of the File System minimum allocation size. The default size is
still 3 GB, but the value can be adjusted to the user needs.
• Optionally select a tiering policy, and move to the next step of the wizard.
• File System shares can then be enabled from the wizard. For Windows Shares it is possible to
configure additional SMB settings. For Linux/UNIX Shares it is possible to associate it with a host
profile and set the access level: read-only, read/write, and read/write, allow root. These steps will be
discussed in more details in the Storage Resources Access lesson.
• The next two steps of the wizard allow configuring local and remote data protection. These features
will be discussed in more detail in the Data Protection module.
• Review the File system to be created on the Summary page then click Finish. The results of the
process will be displayed.

A parameter in the Create a File System wizard allows the configuration of how much space is initially
allocated to the file system.

This prevents auto-shrink from removing too much space from the file system. Thin-provisioned file
systems are automatically shrunk by the system when certain conditions are met.

In this example, for a file system of 100 GB in size (which represents what the host sees), 6 GB of space
were reserved for the file system.
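The minimum-allocation example can be expressed as simple arithmetic: a thin file system presents its full provisioned size to the host, while only the minimum allocation is held in the pool up front, and auto-shrink will not reclaim below that floor. An illustrative sketch, not the system's actual allocation logic:

```python
def initial_pool_reservation(provisioned_gb, min_alloc_gb=3):
    """Space held in the pool up front for a thin file system; auto-shrink
    will not reclaim below this floor. 3 GB is the lecture's default."""
    return min(provisioned_gb, min_alloc_gb)

print(initial_pool_reservation(100, min_alloc_gb=6))  # 6
```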

To view and modify the properties of a File System click the edit icon.

The General tab of the properties window depicts the File System utilization details and free space. The
File System can also be expanded or shrunk from this page, and its minimum allocation size can be
changed.

The other tabs of the File System properties window allow the user to configure and manage local and
remote protection, configure File System Quotas and enable the event notifications for the file systems
monitored by Event Publishing service. File System Quotas and the Event Publishing service will be
discussed in the Advanced storage Features module.

In this lab you will create two NAS servers for file storage provisioning. One will be used for SMB file
access and the other one will be used for NFS file access.

In this lab exercise you will create two file systems from one of the heterogeneous pools created in a
previous lab. One of the file systems will be used for sharing file data with Windows clients and the other
will be used for sharing file data with Linux/UNIX clients.

This lesson covers the management of storage resources dedicated to VMware ESXi hosts using the
Unisphere GUI interface.

Unisphere provides tools for creating specialized VMware storage resources, called datastores, and then
discovering and connecting to ESXi hosts and vCenter servers on the network through VMware host
configurations.

A VMware Datastore is a storage resource that provides storage for one or more VMware hosts. The
datastore represents a specific quantity of storage made available from a particular NAS server (File) and
storage pool (Block).

A storage pool must be associated to a Capability Profile in order to enable VMware VVols based storage
provisioning. Capability profiles describe the desired storage characteristics so that a user-selected policy
can be mapped to a set of compatible VVol datastores.

To manage a VMware Datastore, select VMware from the Storage section.
From the VMware storage page it is possible to create a new Datastore, view its properties, modify some
settings, and delete an existing Datastore.
The VMware storage page shows the list of created VMware datastores with their size (in GB), the
allocated capacity, the used capacity, the type of datastore, the NAS server used, and the number of pools.
To see the details about a Datastore select it from the list and the details about the Datastore will be
displayed on the right-pane.

To create a new Datastore click on the “add” link from the Datastores page to launch the Create VMware
Datastore wizard.
You must follow the steps of the wizard to set the parameters for creating the Datastore:
• Select the type of datastore that you want to create: File, Block, VVol (File) or VVol (Block). File and
VVol (File) options will only be available if at least one NAS server with support to NFS protocol was
previously created. Select the NAS server providing storage for the new datastore. VVol (Block) and
VVol (File) options are only available if there is at least one Capability Profile created for a storage
pool. Capability Profiles will be discussed in the next slides.
• Enter the datastore name and optional description and advance to the next step.
• Select a storage pool to use for the datastore.
• Select one of the available tiering policies for the datastore and the quantity of primary storage to
allocate for the datastore. For NFS and VVol (File) it is possible to define the Minimum allocation
size for the File System.
• The Host IO Size parameter can be used to match storage block size with the I/O size of the
application to maximize performance of the VMware NFS datastores.
• In the Configure Access window, specify which ESXi hosts can access the datastore.
• Optionally enable local or remote protection for the datastore on the next windows.
• Then review the Datastore to be created and click Finish to complete the task. The results of the
process will be displayed.

To view and modify the properties of a VMware Datastore, click the edit icon.

The General tab of the properties window shows the Datastore capacity utilization and free space. The
size of the Datastore can also be expanded from this page, and the File System minimum allocation size
can be changed.

The other tabs of the Datastore properties window allow the user to set host access, change the tiering
policy, configure and manage local and remote protection, and configure Host I/O Limits in the case of
VMFS and VVol (Block) datastores. The Host I/O limits feature will be discussed in the Advanced
Storage Features module.

VVols support allows the use of storage profiles aligned with published capabilities for provisioning virtual
machines.

The storage profiles can be defined by service levels, usage tags, and storage properties.

[CLICK] A VM-granular integration facilitates the offloading of data services with support to snapshots, fast
clones, full clones, and reporting on existing VVols and affected virtual machines.

[CLICK] Compatible arrays can communicate with the ESXi server through VASA APIs. Communications
are done via Block (iSCSI and FC) and File (NFS) protocols.

Provisioning Virtual Volumes (VVols) involves some tasks that are performed on the storage system and
some that are done by the vSphere administrator.

This module shows the tasks performed on the storage system. The tasks performed by the vSphere
administrator will be discussed on the Storage Resources Access module.

First the storage administrator must create the storage pools that will be associated with VMware
Capability profiles.

Capability Profile definitions include thin or thick space efficiency, a user-defined set of strings (usage
tags), storage properties (drive type and RAID level), and a service level to identify it.

Then the Storage Containers or VVol datastores can be created based on the selected storage pool and
capability profile.

A capability profile is a set of storage capabilities for a VVol datastore. These capabilities are derived
based on the underlying pools for the VVol datastore. Capability profiles must be created before you can
create a VVol datastore. Capability profiles can be created at the time of pool creation (recommended), or
can be added to an existing pool later.

Capability profiles define storage properties such as drive type, RAID level, FAST Cache, FAST VP, and
space efficiency (thin, thick). Also, service levels are associated with the profile depending on the storage
pool characteristics. The user can add tags to identify how the VVol datastores associated with the
Capability Profile should be used.
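The way a user-selected storage policy is matched against capability profiles can be illustrated with a small sketch. The field names and values below are simplified stand-ins, not the actual Unity/VASA schema:

```python
# Illustrative sketch: matching a VM storage policy against capability
# profiles. Field names and values are simplified assumptions, not the
# real Unity/VASA schema.

def compatible_profiles(policy, profiles):
    """Return the names of capability profiles that satisfy every
    requirement in the policy (exact match per requested key)."""
    return [
        p["name"]
        for p in profiles
        if all(p.get(key) == value for key, value in policy.items())
    ]

profiles = [
    {"name": "gold",   "drive_type": "SAS Flash", "raid": "RAID5", "efficiency": "thin"},
    {"name": "bronze", "drive_type": "NL-SAS",    "raid": "RAID6", "efficiency": "thick"},
]

# A storage policy requesting thin provisioning on flash drives: only
# the "gold" profile (and its VVol datastores) shows as compatible.
policy = {"drive_type": "SAS Flash", "efficiency": "thin"}
print(compatible_profiles(policy, profiles))  # ['gold']
```

In vSphere, this matching is what makes a VVol datastore appear as "compatible" or "incompatible" storage for a given VM storage policy.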

To manage a Capability Profile, select VMware from the Storage section, then choose Capability Profiles
from the top sub-menu.
From the Capability Profiles page it is possible to create a new Capability Profile, view its properties,
modify some settings, and delete an existing Capability Profile.
The Capability Profiles page shows the list of created VMware Capability Profiles and the pools they are
associated with.
To see the details about a Capability Profile select it from the list and the details about the Capability
Profile will be displayed on the right-pane.

To create a new Capability Profile click on the “add” link from the Capability Profiles page to launch the
Create VMware Capability Profile wizard.
You must follow the steps of the wizard to set the parameters for creating the Capability
Profile:
• Enter the Capability Profile name and optional description
• Select the storage pool to associate the Capability Profile with.
• Enter any Usage Tags that will be used to identify how the associated VVol datastore
should be used.
• Then review the Capability Profile to be created and click Finish to execute the operation. The results
of the process will be displayed.

Only after having a Capability Profile associated with a storage pool you will be able to create a VVol
datastore.

To view and modify the properties of a VMware Capability Profile click the edit icon.

The Details tab of the properties window allows you to change the name of the Capability Profile. Also,
the UUID (Universally Unique Identifier) associated with the VMware object is displayed here for
reference.

The Constraints tab shows the space efficiency, service level, and storage properties associated with
the profile, and allows the user to add and remove usage tags.

VVols reside in VVol datastores, also known as storage containers. A VVol datastore is associated with
one or more capability profiles.

The VVol datastore will show as a compatible storage in vCenter or the vSphere Web Client if the
associated capability profiles meet VMware storage policy requirements.

There are two types of VVol datastores: VVols (File) and VVols (Block).

VVols (File) are virtual volume datastores that use NAS protocol endpoints for I/O communication from
the host to the storage system. Communications are done via the NFS protocol.

VVols (Block) are virtual volume datastores that use SCSI protocol endpoints for I/O communication from
the host to the storage system. Communications are done via either the iSCSI or FC protocols.

To create a new VVol Datastore click on the “add” link from the Datastores page to launch the Create
VMware Datastore wizard.
You must follow the steps of the wizard to set the parameters for creating the VVol
Datastore:
• Select the type of datastore that you want to create: VVol File or VVol Block. Then advance to the
next step of the wizard.
• Enter the datastore name and optional description.
• In the next step select the Capability Profiles to use for the VVol datastore. And from the Datastore
Size (GB) column you can determine how much space the datastore will consume from each of the
pools associated with the Capability Profiles.
• In the Configure Access page, specify which hosts can access the datastore.
• Then review the VVol Datastore to be created on the Summary page and click Finish to complete
the creation job. The results of the process will be displayed.

In this module we have discussed the steps highlighted in the first box, which are performed by the
Storage Administrator in order to provision a storage container (VVol Datastore) that will be mapped to
storage objects in the vSphere environment for building Virtual Volumes.

In the next module we will discuss the steps in the second box, which are performed by the vSphere
administrator in order to create a VVol datastore that maps to the storage container created in Unity and
allows virtual machines to be provisioned using storage policies.

This module covered the configuration and management of Block and File storage resources for SAN
hosts and NAS clients. The module also discussed how to provision VMware VMFS, NFS and VVol
Datastores to ESXi hosts.

This module focuses on the procedures to grant SAN hosts and NAS clients access to provisioned storage
in a Unity system. The module also shows how to configure ESXi hosts access to VMFS, NFS, VVol
(Block), and VVol (File) datastores.

This lesson covers Windows and Linux/UNIX host access to Unity Block storage, describing the host
access requirements, multi-path management options, Unity front-end connectivity options, the process of
registering an FC or iSCSI initiator, and the creation of a host configuration in Unity.

Host access to Unity block storage requires a host having connectivity to block storage from the Unity
system. The graphic illustrates an overview of configuration operations to achieve host block storage
access to a Unity. These activities span across the host, the connectivity and the Unity.

Although hosts can be directly cabled to the Unity, connectivity is commonly done through storage
networking, and is formed from a combination of switches, physical cabling and logical networking for the
specific block protocol. The key benefits of switch-based block storage connectivity are realized in the
logical networking. Hosts can share Unity front-end ports; thus the number of connected hosts can be
greater than the number of Unity front-end ports. Redundant connectivity can also be created by
networking with multiple switches, enhancing storage availability.

Storage must be provisioned on the Unity for the host. Provisioning Unity storage consists of grouping
physical disk drives into Storage Pools and creating LUNs from the pools. Unity can also provision VMFS
Datastores, VVol (Block) Datastores, and Consistency Groups.

Connected hosts are registered to the Unity with an iSCSI initiator IQN or an FC WWN.

The host must then discover the newly presented block storage within its disk sub-system. Storage
discovery and readying for use is done differently per operating system. Generally, discovery is done with
a SCSI bus rescan. Readying the storage is done by creating disk partitions and formatting the partition.

Before connecting hosts to the storage system, ensure the following requirements are met and tasks are
completed:

Install and configure the system using the Initial Configuration wizard.

Use Unisphere or the CLI to configure iSCSI or Fibre Channel (FC) LUNs on the storage system.

The host will need to have an adapter of some kind to communicate via the storage protocol. In Fibre
Channel environments, a host will have a Host Bus Adapter, or HBA, installed and configured. For iSCSI,
a standard NIC can be used.

Multi-pathing software is recommended to manage paths to the storage system and provide access to
the storage if one of the paths fails.

With a storage networking device ready on the host, connectivity between the host and the array will be
required. In FC, this will include setting up zoning on an FC switch. In iSCSI environments, initiator and
target relationships will need to be established.

To achieve best performance, the host should be on a local subnet with each iSCSI interface that provides
storage for it. In a multi-path environment, each physical interface must have two IP addresses assigned;
one on each SP. The interfaces should be on separate subnets. To achieve maximum throughput, connect
the iSCSI interface and the hosts for which it provides storage to their own private network. That is, a
network just for them. When choosing the network, consider network performance.

After connectivity has been configured, the hosts need to be registered with the Unity storage array.
Registration is usually automatic though in some cases it will be performed manually. In either case, the
registrations should be confirmed.

Having completed the connectivity between the host and the array, you will then be in a position to
provision the Block storage volumes to the host.

This diagram shows the High Availability options for host to storage connectivity. Host connections are
SAN attached or directly connected. Directly attaching a host to a Unity system is supported if the host
connects to both SPs and has the required multipath software.

Fibre Channel HBAs should be attached to a dual fabric for HA. iSCSI connections should be attached
using different subnets for HA.

Note: A server cannot be connected to the same storage system through both NICs and iSCSI HBAs.

Depending on the type of HBA being used on the host (Emulex, QLogic, or Brocade), users can install HBA
utilities to view the parameters. The utilities can be downloaded from the respective vendor support pages
and can be used to verify connectivity between the HBAs and the arrays they are attached to.

For iSCSI connectivity, a software or hardware iSCSI initiator is required.

Multi-path management software manages the connections (paths) between the host and the storage
system to provide access to the storage if one of the paths fails. The following types of multi-path
management software are available for a Windows 2003, Windows Server 2008, or Windows Server 2012
connected host:

EMC PowerPath software on a Windows 2003, Windows Server 2008, or Windows Server 2012 host.
Refer to the Unity Support Matrix on the support website for compatibility and interoperability information.

Note: PowerPath is not supported for Windows 7.

Native MPIO on Windows 2003, Windows Server 2008, or Windows Server 2012 without Multiple
Connections per Session (MCS).

The multi-path I/O feature must first be enabled before it can be used. MCS is not supported by Unity.

Refer to the EMC Unity High Availability, A Detailed Review white paper available from support.emc.com.

For your system to operate with hosts running multi-path management software, two iSCSI IPs are
required. These IPs should be on separate physical interfaces on separate SPs.

Verify the configuration in Unisphere. For details on how to configure iSCSI interfaces, refer to topics
about iSCSI interfaces in the Unisphere online help.

When implementing a highly-available network between a host and your system, keep in mind that:

A LUN is visible to both SPs.

Directly attaching a host to a Unity system is supported if the host connects to both SPs and has the
required multipath software.

Important: Path management software is not supported for a Windows 7 or Mac OS host connected to a
Unity system.
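The "separate subnets" guideline for the two iSCSI interfaces can be checked mechanically. The sketch below uses Python's standard `ipaddress` module, and the addresses are made-up examples:

```python
# Quick check that two iSCSI interface addresses sit on separate
# subnets, as recommended for a highly available multi-path setup.
# The example addresses are hypothetical.
import ipaddress

def on_separate_subnets(cidr_a, cidr_b):
    """True when the two interface addresses belong to different IP networks."""
    net_a = ipaddress.ip_interface(cidr_a).network
    net_b = ipaddress.ip_interface(cidr_b).network
    return net_a != net_b

print(on_separate_subnets("192.168.10.50/24", "192.168.20.50/24"))  # True
print(on_separate_subnets("192.168.10.50/24", "192.168.10.60/24"))  # False
```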

Certain rules should be followed as a best practice when configuring host to storage connectivity.

Some guidelines to follow are:

• Any single host should connect to any single array with 1 protocol only.

• A host may connect to different arrays with different protocols.

• iSCSI should use all HBA or all NIC connections, with no mixing in a host.

• Arrays may see connections from hosts with NICs or HBAs.

• Hosts with CNAs will use either FC or iSCSI to a single array.

The Unity array supports two embedded ports per SP for connectivity via Fibre Channel, as well as a Fibre
Channel expansion I/O module. The FC embedded ports allow FC connectivity at 8 or 16 Gb/s.

For FC expansion I/O modules, there is a 4-port 16 Gb/s module that can also negotiate to lower speeds.
16 Gb/s FC is recommended for the best performance.

Unity supports iSCSI connections on multiple port options. If the CNA port is configured for iSCSI, the port
supports 10 Gb/s optical SFPs or TwinAx cables (active).

For iSCSI I/O expansion modules, Unity supports a 4-port 1 Gb/s or 10 Gb/s 10GBase-T module (RJ45,
copper, Cat 5/6 cable) and a 2-port or 4-port 10 Gb/s IP/iSCSI module with SFP+ or active TwinAx copper;
the 2-port I/O module includes an iSCSI offload engine.

If possible, configure Jumbo frames (MTU 9000) on all ports in the end-to-end network, in order to provide
the best performance.
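The benefit of jumbo frames can be estimated with a back-of-the-envelope payload efficiency calculation. The overhead figures below are the usual fixed sizes (40 bytes of IP+TCP headers per packet; 38 bytes of Ethernet framing covering preamble, header, FCS, and inter-frame gap), and the calculation ignores iSCSI PDU headers for simplicity:

```python
# Rough payload efficiency of TCP traffic over Ethernet, comparing a
# standard 1500-byte MTU with jumbo frames (MTU 9000). Overheads:
# 40 bytes IP+TCP headers, 38 bytes Ethernet framing per frame.

def payload_efficiency(mtu, ip_tcp_overhead=40, ethernet_overhead=38):
    """Fraction of on-the-wire bytes that is application payload."""
    payload = mtu - ip_tcp_overhead
    return payload / (mtu + ethernet_overhead)

print(f"MTU 1500: {payload_efficiency(1500):.1%}")  # MTU 1500: 94.9%
print(f"MTU 9000: {payload_efficiency(9000):.1%}")  # MTU 9000: 99.1%
```

Beyond the modest efficiency gain, jumbo frames also mean roughly six times fewer frames (and interrupts) for the same data volume, which is often the larger win.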

CPU bottlenecks caused by TCP/IP processing have been a driving force in the development of hardware
devices specialized to process TCP and iSCSI workloads, offloading these tasks from the host CPU.
These iSCSI and/or TCP offload devices are available in 1 Gb/s and 10 Gb/s speeds.

As a result, there are multiple choices for the network device in a host. In addition to the traditional NIC,
there is the TOE (TCP Offload Engine) which processes TCP tasks, and the iSCSI HBA which processes
both TCP and iSCSI tasks. TOE is sometimes referred to as a Partial Offload, while the iSCSI HBA is
sometime referred to as a Full Offload.

While neither offload device is required, these solutions can offer improved application performance when
the application performance is CPU bound.

The storage system supports Fibre Channel and iSCSI initiator registration. After registration, all paths
from each registered initiator automatically have access to any storage provisioned for the host. This
ensures a highly available connection between the host and array. You can manually register one or more
initiators before you connect the host to the storage system.

Initiators enable you to register all paths associated with an initiator in place of managing individual paths.
In this case, all paths to the host, including those that log on after the fact, are automatically granted
access to any storage provisioned for the host.

The maximum number of connections between servers and a storage system is limited by the number of
initiator records supported per storage-system SP and is model dependent. An initiator is an HBA or CNA
port in a server that can access a storage system. Some HBAs or CNAs have multiple ports. Each HBA or
CNA port that is zoned to an SP port is one path to that SP and the storage system containing that SP.
Each path consumes one initiator record. Depending on the type of storage system and the connections
between its SPs and the switches, an HBA or CNA port can be zoned through different switch ports to the
same SP port or to different SP ports, resulting in multiple paths between the HBA or CNA port and an SP
and/or the storage system. Note that the failover software environment running on the server may limit the
number of paths supported from the server to a single storage system SP and from a server to the storage
system.
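Since each zoned HBA-port-to-SP-port path consumes one initiator record, record consumption grows multiplicatively with fan-out. A minimal arithmetic sketch (the port counts are example values):

```python
# Each HBA/CNA port zoned to an SP port is one path, and each path
# consumes one initiator record on that SP. Port counts below are
# illustrative examples.

def initiator_records(hba_ports, sp_ports_zoned_per_hba_port):
    """Initiator records one host consumes against one SP."""
    return hba_ports * sp_ports_zoned_per_hba_port

# A host with 2 HBA ports, each zoned to 2 ports on the same SP,
# consumes 4 initiator records on that SP.
print(initiator_records(2, 2))  # 4
```

This is why fan-out zoning should be weighed against the per-SP initiator record limit (which is model dependent) and any path-count limits imposed by the host's failover software.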

Access from a server to an SP in a storage system can be:

Single path: A single physical path (port/HBA) between the host system and the array

Multipath: More than one physical path between the host system and the array via multiple HBAs, HBA
ports and switches

Alternate path: Provides an alternate path to the storage array in the event of a primary path failure.

Unisphere has a page that enables you to view and manage all host initiators connected to the storage
system. Initiators are endpoints from which FC and iSCSI sessions originate. To access this page in
Unisphere, select the Initiators option under the Access section.
Any host bus adapter (HBA) can
have one or more initiators registered on it. Each initiator is uniquely identified by its World Wide Name
(WWN) or iSCSI Qualified Name (IQN). The link between a host initiator and a target port on the storage
system is called the initiator path.
You can manually register one or more initiators before connecting the actual host to the storage system
through Fibre Channel (FC) or iSCSI. Once the initiators are registered and associated with a host (green
with white check icon) all paths from each registered initiator are automatically granted access to any
storage provisioned for the host. This ensures a high availability connection between the host and storage
system.
A yellow mark indicates that there are no logged-in initiator paths.
In the example, we see two of the Fibre Channel host initiators are registered with a host. These two have
a green status. The two other initiators with the yellow triangle are not yet associated with a host.

The link between a host initiator and a target port on the Unity system is called the initiator path.

Each initiator can be associated with multiple initiator paths. Users can control operations at the initiator
level. The storage system manages the initiator paths automatically.

With the increase in use of iSCSI, there are often questions around whether a software or hardware iSCSI
initiator should be used to connect to the iSCSI target.

With a software iSCSI initiator, the only hardware the server requires is a Gigabit networking card. All of
the processing for the iSCSI communication is performed by the server's resources, such as CPU and, to
a lesser extent, memory. This means on a busy server, the iSCSI traffic processing may actually use up
CPU resources that could be used for application needs. Windows has an iSCSI initiator built into the OS.

On the other hand, a hardware iSCSI initiator is a Host Bus Adapter (HBA) that appears to the OS as a
storage device. All of the overhead associated with iSCSI is performed by the iSCSI HBA instead of server
resources. In addition to minimizing the resource use on the server hardware, a hardware iSCSI HBA also
allows additional functionality, because the iSCSI HBA is viewed as a storage device. You can boot a
server from iSCSI storage, which is something you can't do with a software iSCSI initiator. The downside
is that iSCSI HBAs typically cost ten times what a Gigabit NIC would cost, so you have a cost vs.
functionality and performance trade-off.

Most production environments with high loads will opt for a hardware iSCSI HBA over software iSCSI,
especially when other features such as encryption are considered. There is a middle ground, though. Some
network cards offer TCP/IP Offload Engines (TOE) that perform most of the IP processing that the server
would normally need to perform. This lessens the resource overhead associated with software iSCSI
because the server only needs to process the iSCSI protocol workload.

Microsoft Internet iSCSI Initiator enables you to connect a host computer that is running Windows Server
2008, Windows Server 2008 R2, Windows Server 2012, or Windows Server 2012 R2 to an external
iSCSI-based storage array through an Ethernet network adapter. You can use the Microsoft iSCSI Initiator
in your existing network infrastructure to enable block-based storage area networks (SANs) without
investing in additional hardware.

The Microsoft iSCSI initiator does not support booting the iSCSI host from iSCSI storage. Refer to the
EMC Support Matrix for the latest information about boot device support.

All iSCSI nodes are identified by an iSCSI name. An iSCSI name is neither the IP address nor the DNS
name of an IP host. Names enable iSCSI storage resources to be managed regardless of address. An
iSCSI node name is also the SCSI device name, which is the principal object used in authentication of
targets to initiators and initiators to targets. iSCSI addresses can be one of two types: iSCSI Qualified
Name (IQN) or the IEEE naming convention, Extended Unique Identifier (EUI).

IQN format - iqn.yyyy-mm.com.xyz.aabbccddeeffgghh where:

• iqn - Naming convention identifier
• yyyy-mm - Point in time when the .com domain was registered
• com.xyz - Domain of the node, reversed
• aabbccddeeffgghh - Device identifier (can be a WWN, the system name, or any other vendor-
implemented standard)

Within iSCSI, a node is defined as a single initiator or target. These definitions map to the traditional SCSI
target/initiator model. iSCSI names are assigned to all nodes and are independent of the associated
address.
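The IQN convention above can be checked with a small validator. This regex is a loose, simplified reading of the naming convention (it accepts both dot- and colon-separated device identifiers), not a full RFC 3720 parser:

```python
# Loose validator for the IQN form described above:
# iqn.yyyy-mm.<reversed-domain>[:<device-identifier>].
# A simplified sketch, not a complete RFC 3720 name parser.
import re

IQN_RE = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9.-]+(:.+)?$")

def looks_like_iqn(name):
    """True when the name matches the basic IQN shape."""
    return bool(IQN_RE.match(name))

print(looks_like_iqn("iqn.1991-05.com.microsoft:hostname"))  # True
print(looks_like_iqn("eui.02004567A425678D"))                # False (EUI form)
```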

The Linux operating system includes the iSCSI initiator software. The open-iscsi iSCSI driver comes with
the Linux kernel. You must configure the open-iscsi driver with the network parameters for each initiator
that will connect to your iSCSI storage system.

EMC recommends changing some driver parameters, refer to the Unity Family for Configuring Hosts to
Access Fibre Channel (FC) or iSCSI Storage Guide for the driver parameters. The configuration file is
/etc/iscsi/iscsi.conf.

Note: The Linux iSCSI driver gives the same name to all network interface cards (NICs) in a host. This
name identifies the host, not the individual NICs. This means that if multiple NICs from the same host are
connected to an iSCSI interface on the same subnet, then only one NIC is actually used. The other NICs
are in standby mode. The host uses one of the other NICs only if the first NIC fails.

Each host connected to an iSCSI storage system must have a unique iSCSI initiator name for its initiators
(NICs). To determine a host’s iSCSI initiator name for its NICs use cat /etc/iscsi/initiatorname.iscsi for
open-iscsi drivers. If multiple hosts connected to the iSCSI interface have the same iSCSI initiator name,
contact your Linux provider for help with making the names unique.

To view and discover the target array use the “iscsiadm -m discovery -t sendtargets -p 192.168.3.100”
command.
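The initiator name file mentioned above holds a single `InitiatorName=` entry, which can be extracted trivially. A minimal sketch, with made-up sample content:

```python
# Minimal parse of /etc/iscsi/initiatorname.iscsi content, which holds
# a line of the form "InitiatorName=iqn....". The sample text below is
# a made-up example, not output from a real host.

def initiator_name(file_text):
    """Return the IQN from initiatorname.iscsi content, or None."""
    for line in file_text.splitlines():
        line = line.strip()
        if line.startswith("InitiatorName="):
            return line.split("=", 1)[1]
    return None

sample = "## auto-generated by iscsi-iname\nInitiatorName=iqn.1994-05.com.redhat:abc123\n"
print(initiator_name(sample))  # iqn.1994-05.com.redhat:abc123
```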

You can add initiators when you add a host configuration or, at a later time, from the Properties screen of
the host. To view existing initiators and initiator paths registered with the storage system, go to Access >
Initiators.
For iSCSI initiators, an iSCSI target port can have both a physical port ID and a VLAN ID. In this case, the
initiator path is between the host initiator, and the virtual port.

On a Unity system, you can require that all hosts use CHAP authentication to access iSCSI storage on
one or more Unity iSCSI interfaces. To require CHAP authentication from all initiators that attempt access
to an iSCSI interface, you must open the Settings Configuration Window and open the Access section.

Then from the CHAP page check the Enable CHAP Setting option.

When you enable this feature, Unity denies access to this iSCSI interface's storage resources from all
initiators that do not have CHAP configured.

You may also set a global forward CHAP secret that all initiators can use to access the storage system.
Global CHAP can be used in conjunction with initiator CHAP. To implement Global CHAP authentication
select Use Global CHAP and specify a username and Global CHAP secret.

Mutual CHAP authentication occurs when the hosts on a network verify the identity of the iSCSI interface
by verifying the iSCSI interface's mutual CHAP secret. Any iSCSI initiator can be used to specify the
"reverse" CHAP secret to authenticate Unity. When Mutual CHAP Secret is configured for the storage
system, the specified mutual CHAP secret is used by all iSCSI interfaces that run on the system.

To implement mutual CHAP authentication, enable the Use Mutual CHAP option for the iSCSI interface
and specify a username and mutual CHAP secret.
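The math behind a CHAP exchange is simple: per RFC 1994, the responder proves knowledge of the secret by returning MD5(identifier + secret + challenge), and the verifier recomputes it with its own copy of the secret. The sketch below shows the one-way case with illustrative values; mutual CHAP just repeats the exchange in the reverse direction with the mutual secret:

```python
# Sketch of the CHAP handshake math (RFC 1994): the response is
# MD5(identifier + secret + challenge). Secrets and identifiers here
# are illustrative values only.
import hashlib
import os

def chap_response(identifier, secret, challenge):
    """Compute the CHAP response digest as a hex string."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).hexdigest()

secret = b"sharedCHAPsecret"
challenge = os.urandom(16)  # random challenge sent by the authenticator
ident = 1

# The initiator computes the response; the target, knowing the same
# secret, recomputes it and compares. A wrong secret fails the check.
resp = chap_response(ident, secret, challenge)
print(resp == chap_response(ident, secret, challenge))          # True
print(resp == chap_response(ident, b"wrongsecret", challenge))  # False
```

Because only the digest crosses the wire, the secret itself is never transmitted; this is why both sides must be configured with the same secret ahead of time.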

Host configurations are logical connections through which hosts or applications can access storage
resources. They provide storage systems with network profiles of the hosts that use storage resources,
based on the protocol used: FC or iSCSI (block), or SMB or NFS (file). Before a network host can access
storage, you must define a configuration for it and associate it with a storage resource. Host configurations
are associated with storage following permission levels.

To manage Hosts configurations select Hosts from the Access section of Unisphere.
From the Hosts page it is possible to create a new Host configuration, view and modify a Host
configuration, and delete it.

To create a Host configuration the user must click on the add link and follow the wizard. On the Host
Wizard window select the name for the host profile and the host operating system.

If creating a host configuration for a NAS client the user must provide its IP address and skip the Initiators
step.

If creating a Host configuration for a SAN host, the user must select the host's automatically discovered
initiators or manually add the initiators on the Initiators screen.

If the initiator was not automatically discovered and logged in to the system, the user must click the +
button in the Manually Added Initiators section.

Then select Create iSCSI Initiator, input the SAN host IQN and the CHAP credentials on the Add iSCSI
Initiator window, and click Add.

Or select Create Fibre Channel Initiator, input the SAN host HBA WWN on the Add Fibre Channel
Initiator window, and click Add.

After selecting the Initiators the user can click Next to advance to the Summary.

On the Summary page review the Host configuration then click the Finish button to accept the changed
configuration.

The initiators will be registered and the host added to the list of hosts.

The properties of a host configuration can be opened by selecting the host and clicking the edit icon.

The General tab of the properties window allows changes to the host profile.

The LUNs tab displays the LUNs provisioned to the host.

The Network Addresses tab shows the configured connection interfaces.

The Initiators tab shows the registered host initiators and the Initiator Paths tab shows all the paths
automatically created for the host to access the storage.

Host access to provisioned Block storage is specified individually for each storage resource. Host access
is defined when selecting the host configuration to associate with the provisioned LUN or Consistency
group via FC or iSCSI protocol. In the example, a LUN is associated with a Windows Host profile
configured to access Block storage using the iSCSI protocol and File Storage via the SMB protocol.

To connect the SAN host to the provisioned LUN over Fibre Channel, the following instructions must
be observed.
Scan the host bus to discover the devices and enable LUN access. On Windows Server 2008 (or later)
hosts, use the Microsoft Storage Manager for SANs utility. On Linux systems, trigger a SCSI scan
through the attributes exposed under /sys (on 2.6-based kernels), or reboot the system to force discovery
of the new devices.
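On Linux, the /sys-based rescan mentioned above amounts to writing the wildcard triple "- - -" (channel, target, LUN) into each SCSI host's scan attribute. A sketch, with the sysfs base path parameterized so the function can be exercised against a scratch directory (on a real host it requires root):

```python
import glob
import os

def rescan_scsi_hosts(base: str = "/sys/class/scsi_host") -> list[str]:
    """Write 'channel target lun' wildcards to every SCSI host's scan
    attribute, asking the kernel to probe for newly presented LUNs.
    Returns the list of scan files written."""
    written = []
    for scan_file in sorted(glob.glob(os.path.join(base, "host*", "scan"))):
        with open(scan_file, "w") as f:
            f.write("- - -")
        written.append(scan_file)
    return written
```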

Use a Windows or Linux/UNIX disk management tool (such as Microsoft Disk Manager) to initialize and
set the drive letter or mount point for the host.

Format the LUN with an appropriate file system: for example, FAT or NTFS.

Note: If you intend to use snapshots for the storage, use the quick-format option when formatting the LUN
from a host running Windows 2008 or above. If you use the full format option, you must allocate a higher
quantity of protection storage than primary storage to the storage resource, or snapshot operations will fail.

(Optional) Configure applications on the host to use the LUN as a storage drive.

In this lab, you will configure an iSCSI Windows host to access a LUN on the UnityVSA. The lab details
the steps to configure an iSCSI interface on the UnityVSA data network port with Unisphere then discover
the target array using the Microsoft iSCSI initiator software.

In this lab, you will configure an iSCSI Linux host to access a LUN on the UnityVSA. This lab details the
steps to configure an iSCSI interface on the UnityVSA data network port with Unisphere then discover the
target array using the Linux native iSCSI initiator software (open-iscsi).

This lesson covers NAS client (Windows and Linux/UNIX server) access to Unity File storage,
describing the NAS server networking configuration, the host profiles and access levels, the process
of associating host profiles with file system shares, and mounting these shares on NAS clients.

Host access to Unity File storage requires a NAS client having connectivity to a NAS Server from the Unity
system.

Connectivity between the NAS clients and NAS servers is done via IP network using a combination of
switches, physical cabling and logical networking. Unity front-end ports can be shared, and redundant
connectivity can also be created by networking with multiple switches.

Storage must be provisioned on the Unity system for the NAS client. Provisioning Unity storage consists
of grouping physical disk drives into storage pools, creating file systems from these pools, and creating
file system shares based on the NAS server's supported protocols. NAS client access to the storage
resources is provided via the SMB/CIFS and NFS storage protocols. Unity can also provision NFS
datastores and VVol (File) datastores (NFS) for ESXi hosts. The NAS client must then mount the shared
file system. This task is done differently per operating system (Windows, Linux/UNIX, VMware vSphere).

NFS clients have a host configuration profile in Unity with the network address and operating system
defined. An NFS share can be created and associated with the host configuration. The shared file system
can then be mounted on the Linux/UNIX system.

ESXi hosts must be configured in the Unity system by adding the vCenter Server and selecting the
discovered ESXi host. The host configuration can then be associated with a VMware datastore in the Host
Access page of the datastore properties. VAAI enables the volume to be mounted automatically to the
ESXi host once it is presented.

SMB clients do not need a host configuration to access the file system share. The shared file system can
be mounted to the Windows system.

You can manage the Unity Network interfaces by selecting the Ethernet option from the Access section of
the Settings Configuration window.

From the Ethernet Ports page, settings such as link aggregation and link transmission can be verified
and changed.

To display information about a particular Ethernet port, select it from the list and click on the edit link. The
properties window shows details about the port, including the speed and MTU size. The user can change
both these fields if necessary.

The MTU has a default value of 1500 bytes. If you change the value, you must also change all
components of the network path (switch ports and host).

If you want to support jumbo frames, set the MTU size field to 9000 bytes. This setting is only appropriate
in network environments where all components support jumbo frames end-to-end. In virtualized
environments, jumbo frames should be configured within the virtual system, as well.
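A back-of-the-envelope view of why jumbo frames help: assuming plain 20-byte IPv4 and 20-byte TCP headers (no options — an illustrative assumption, not a Unity figure), a larger MTU leaves a bigger fraction of each frame for payload and needs far fewer frames per transfer:

```python
def payload_fraction(mtu: int, ip_tcp_overhead: int = 40) -> float:
    """Fraction of the IP MTU left for TCP payload, assuming plain
    20-byte IPv4 and 20-byte TCP headers (no options)."""
    return (mtu - ip_tcp_overhead) / mtu

standard = payload_fraction(1500)  # ~0.973
jumbo = payload_fraction(9000)     # ~0.996

# A 1 MiB write needs roughly 6x fewer frames at MTU 9000:
frames_1500 = -(-1_048_576 // (1500 - 40))  # ceiling division -> 719
frames_9000 = -(-1_048_576 // (9000 - 40))  # ceiling division -> 118
```

Fewer frames per transfer means proportionally fewer per-frame interrupts and less protocol-processing work on both ends of the path.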

The user can also create a link aggregation, or add or remove a port from an existing link aggregation.

For Linux/UNIX (NFS) file systems, the user must create a Host configuration by clicking on the add link
and selecting the type of profile to be created: host, subnet, or netgroup. Then follow the wizard to create
the profile.

The user must enter the name and determine the host, subnet, or netgroup address according to the
following:

• Single-host access: IP address of the host that will use the storage.

• Subnet access: IP address and subnet mask that defines a range of network addresses that can
access shares.

• Netgroup access: Network address of a netgroup that defines a subset of hosts that can access
shares.

The configuration can be reviewed from the Summary page and the user can click Finish to complete the
job.

In the example, a host profile configuration was created for a Linux host.

To manage file system shares created for Linux/UNIX host access, select File from the Storage section.

From the NFS Shares page it is possible to create a new share, view its properties, modify some settings,
and delete an existing NFS share.

The NFS Shares page shows the list of created shares, with the associated NAS server, file system,
and local path.

To see the details of a share, select it from the list; the details are displayed in the right pane.

New NFS shares for a file system can also be created from the NFS Shares page.

Click on the “add” link to launch the Create an NFS Share (NFS Export) wizard.
The steps of the wizard include:
• Selection of the file system it will support
• Enter a name and optional description for the Share
• Provide access to an existing host
• Review the NFS share to be created then click Finish

The results of the process will be displayed.

To view and modify the properties of an NFS share, click the edit icon.

The General tab of the properties window provides details about the Share name and location of the
share: NAS Server, File System, Local Path and the Export path.

The Host Access tab allows host access properties to be defined for the share.

These are the permissions that can be granted to a host when accessing a File system shared via NFS:

• Use Default Access (use the default access permissions set for the share): Applies the default access
permissions set for the file system.

• Read-Only: Permission to view the contents of the file system, but not to write to it

• Read/Write: Permission to view and write to the file system, but not to set permission for it

• Read/Write, allow Root: Permission to read and write to the file system, and to grant and revoke
access permissions (for example, permission to read, modify and execute specific files and directories)
for other login accounts that access the file system.

• No Access: No access is permitted to the storage resource.

To connect the shared NFS file system to the Linux/UNIX host, mount the share using the host
operating system commands.
When mounting the share, specify the network address of the NAS server and the export path to the target
share. This slide demonstrates how to connect to the shared file system from Linux and UNIX hosts.
After mounting the share to the host, set the share’s directory and file structure. Then set the user and
group permission to its directories and files.
Because shares are accessible through either SMB or NFS, you do not need to format the storage for the
host.
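Since the slide itself is not reproduced here, the usual Linux/UNIX mount command form can be sketched; the server name, export path, and mount point below are illustrative placeholders for the values shown on the share's properties page:

```python
def nfs_mount_argv(nas_server: str, export_path: str, mount_point: str) -> list[str]:
    """Linux/UNIX mount argv for a Unity NFS export:
    mount -t nfs <nas-server>:<export-path> <mount-point>"""
    return ["mount", "-t", "nfs", f"{nas_server}:{export_path}", mount_point]

# e.g. mount -t nfs nas01:/fs01-export /mnt/fs01   (illustrative names)
argv = nfs_mount_argv("nas01", "/fs01-export", "/mnt/fs01")
```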

To manage file system shares created for Windows host access, select File from the Storage section.
From the SMB Shares page it is possible to create a new share, view its properties, modify some settings,
and delete an existing SMB share.
The SMB Shares page shows the list of created shares, with the associated NAS server, file system,
and local path.
To see the details of a share, select it from the list; the details are displayed in the right pane.

New SMB shares for a file system can also be created from the SMB Shares page. Click on the “add” link
to launch the Create an SMB Share wizard.
The steps of the wizard include:
• Selection of the file system it will support
• Enter a name and optional description for the Share
• Optionally configure advanced SMB properties
• Review the SMB share to be created then click Finish

The results of the process will be displayed.

For Windows (SMB) file systems, no host information is required because host access to the storage is
controlled by network access controls set for shares.

To view and modify the properties of an SMB share, click the edit icon.

The General tab of the properties window provides details about the Share name and location of the
share: NAS Server, File System, Local Path and the Export path.

The Advanced tab allows the configuration of advanced SMB share properties:

- Continuous availability gives host applications transparent, continuous access to a share following a
failover of the NAS server on the system (with the NAS server internal state saved or restored during
the failover process).

- Protocol Encryption enables SMB encryption of the network traffic through the share.

- Access-Based Enumeration filters the list of available files on the share to include only those to which
the requesting user has read access.

- Branch Cache Enabled copies content from the share and caches it at branch offices. This allows client
computers at branch offices to access the content locally rather than over the WAN.

- Distributed File System (DFS) allows the user to group files located on different shares by transparently
connecting them to one or more DFS namespaces.

- Offline Availability configures the client-side caching of offline files.

Map to the share using the host GUI or CLI commands. When mapping to the share, specify the full UNC
(Universal Naming Convention) path of the SMB share on a Unity NAS server. This slide demonstrates
how to connect to the shared file system from a Windows host.
For NAS servers that have joined a Windows domain, the authentication and authorization settings
maintained on the Active Directory server apply to the files and folders on the SMB file systems.

EMC recommends installing the EMC CIFS Management snap-in on a Windows Server 2003,
Windows Server 2008, Windows Server 2012, or Windows 8 host. It consists of a set of Microsoft
Management Console (MMC) snap-ins that can be used to manage home directories, security settings,
and virus-checking on a NAS server.

Because shares are accessible through either SMB or NFS, you do not need to format the storage for the
host.
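The UNC form and the classic `net use` mapping can be sketched as follows; the server, share, and drive letter are illustrative placeholders:

```python
def smb_unc_path(nas_server: str, share: str) -> str:
    r"""UNC path of an SMB share on a Unity NAS server: \\server\share."""
    return rf"\\{nas_server}\{share}"

def net_use_argv(drive_letter: str, unc_path: str) -> list[str]:
    """Windows 'net use' argv to map the share to a drive letter."""
    return ["net", "use", f"{drive_letter}:", unc_path]

# e.g. net use Z: \\nas01\share01   (illustrative names)
argv = net_use_argv("Z", smb_unc_path("nas01", "share01"))
```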

In this lab, you will create SMB shares to a Unity file system and access its file storage from a Windows
client.

In this lab, you will create NFS shares to a Unity file system and access its file storage from a Linux client.

This lesson covers how to configure ESXi hosts access to VMFS, NFS, VVol (Block) and VVol (File)
datastores.

The allocation of block or file storage resources to an ESXi host involves the creation of a host
configuration in Unisphere, and the association of the provisioned storage resource with the host profile.

Before an ESXi host can access the Unity provisioned storage, a Host configuration must be defined for it
by providing its network name and IP address.

Then a storage resource (NFS or VMFS datastore) can be created, and associated with the host profile
with a defined level of access.

This procedure describes how to create a host configuration for an ESXi host that will connect to an
NFS, VMFS, or VVol datastore.

From Unisphere select the VMware option from the Access section.

On the vCenters page click on the + link.

On the Add vCenters wizard window enter the IP address of the vCenter Server and its login credentials.
Then click the Find button to discover the ESXi hosts managed by the vCenter Server.

Select the ESXi host that will be associated with the host configuration profile.

Host Access to VMware Datastores via FC or iSCSI is defined when selecting the host configuration to
associate with the provisioned datastore. Here it is specified how the primary storage resource (LUN) or
its snapshots will be accessed by the VMware hosts.

• No Access: No access to LUN

• LUN: Read/write access to the LUN. No access to associated snapshots

• Snapshot: Read/write access to the snapshots associated with LUN. No access to the LUN

• LUN and Snapshot: Read/write access to both the LUN and the associated snapshots

For VMware hosts accessing a file system shared via NFS, the permission levels are different:

• Use Default Access (use the default access permissions set for the share): Applies the default access
permissions set for the file system.

• Read-Only: Permission to view the contents of the file system, but not to write to it

• Read/Write: Permission to view and write to the file system, but not to set permission for it

• Read/Write, allow Root: Permission to read and write to the file system, and to grant and revoke
access permissions (for example, permission to read, modify and execute specific files and directories)
for other login accounts that access the file system.

• No Access: No access is permitted to the storage resource.

Once a VMware Datastore is created and associated with an ESXi host profile, we can check if the volume
was discovered by the server in the vSphere environment. The user must open a vSphere web client
session to the vCenter server, then select the ESXi server from the list of hosts, open the Manage page,
select Storage from the submenu and Storage Devices. The new storage device must appear on the list
as attached to the host.

The device Details section of the page displays the volume properties and all the created paths for the
provisioned block storage.

By selecting the Related Objects tab and navigating to the Datastores page, the user can observe that
the datastore was automatically created and presented to the ESXi host in the vSphere environment. That
happens because of Unity’s close integration with VMware.

Protocol Endpoints establish a data path between the ESXi hosts and the respective VVol Datastores. The
Protocol Endpoints are automatically created when a host is granted access to a VVol datastore.

From the Protocol Endpoints page it is possible to view the properties and status of each of the PEs.
The page shows the list of created PEs, the type (SCSI or NFS), and the VMware UUID associated
with each.
To see the details of a Protocol Endpoint, select it from the list; the details are displayed in the right
pane.

Refer to the video demonstrations for details on provisioning VMware datastores and VVols.

For the VVol datastores created on the array to be used for provisioning VMware storage policy-based
virtual machines the Unity system must be registered as a storage provider on the vCenter server. That’s
a task the vSphere administrator must perform using the IP address or FQDN of the VASA provider.

The vSphere administrator can then create storage policies in vSphere. The VM storage policies define
which VVol datastores are compatible, based on the capability profiles associated with them. The
administrator can then provision the virtual machine, selecting the storage policy and the desired VVol
datastore.

Once the virtual machines are created using the storage policies, the user will be able to see the
underlying volumes created by vSphere presented on the Virtual Volumes page of the VMware datastores
section in Unisphere.

To use the VVols provisioned in Unity for the creation of storage policy-based virtual machines, the Unity
system must be added as a storage provider to the vSphere environment.

The vSphere administrator must open a vSphere session to the vCenter Server and open the Hosts and
Clusters view.

Then select the vCenter server on the left pane, and from the top menu select the Manage option and the
Storage Providers option from the sub-menu.

The administrator must then open the New Storage Provider window by clicking the Add sign, enter a
name to identify the entity, and enter the IP address or FQDN of the VASA provider (Unity) in the URL
field, making sure to use the port and full format described on this slide. Next, the administrator must
type the credentials to log in to the Unity system, then click OK. The first time the array is registered, a
warning message may pop up for the Unity certificate.

The administrator can click Yes to proceed and validate the Unity certificate.

The next step is to create Datastores in the vSphere environment using the storage containers (VVol
Datastores) created in Unity.

From the vSphere web client session to the vCenter Server, the administrator must open the Hosts and
Clusters view and select the ESXi host from the left pane. Then open the Related Objects page and
select Datastores. The next step is to open the New Datastore wizard using the Add sign link. Besides
the VMFS and NFS types, the wizard now has a VVol option that can be chosen. The administrator must
enter a name for the datastore and select one of the available Unity VVol datastores from the list. Once
the wizard is complete, the new vSphere datastore (VVol datastore) is created.

The next step a vSphere administrator can take is to create the storage policies for virtual machines.
These policies map to the capability profiles associated with the pool used for the VVol datastore creation.

The VMware storage policies can be opened from the vSphere web client home page. From VM
Storage Policies, the administrator can launch the Create New VM Storage Policy wizard, enter a
name for the policy, select the EMC.UNITY.VVOL data services rule type, and add the tags a
datastore must comply with, such as usage tags, service levels, and storage properties. In the example,
only the usage tag was used. The next step shows all the available mounted datastores, categorized as
compatible or incompatible. The administrator must select a datastore that complies with the rules
selected in the previous step. Once the wizard is complete, the new policy will be added to the list.

Once the storage policies are created, the vSphere administrator can create new virtual machines using
these policies.

To create a virtual machine, the administrator must open the Hosts and Clusters view from the vSphere
web client session to the vCenter Server, and select the ESXi host from the left pane. Then, from the
drop-down list of the Actions top menu, select New Virtual Machine.

The wizard is launched and the administrator can select to create a new virtual machine, enter a name,
select the folder and the ESXi host where the virtual machine will be created.

Then on the storage section of the wizard, the administrator must select the VM Storage Policy previously
created from the drop-down list. The available datastores are presented as compatible and incompatible.
The administrator must select a compatible datastore to continue.

The rest of the wizard steps will instruct the administrator to select the minimum vSphere version
compatibility, the guest OS for the virtual machine, and the option to customize the hardware
configuration.

Once the wizard is complete a new virtual machine created from Storage Policy-Based Management will
be listed.

Virtual Volumes (VVols) are storage objects that are provisioned automatically by the system on VVol
Datastores to store VM data. These objects are different than LUNs and file systems and are subject to
their own set of limits.

Data - Stores data such as VMDKs, snapshots, clones, fast-clones, and so on. At least one Data VVol is
required per VM to store its hard disk.

Config - Stores standard VM-level configuration data such as .vmx files, logs, NVRAM, and so on. At
least one Config VVol is required per VM to store its .vmx configuration file.

Swap - Stores a copy of a VM’s memory pages when the VM is powered on. Swap VVols are
automatically created and deleted when VMs are powered on and off.

Memory - Stores a complete copy of a VM’s memory on disk when the VM is suspended or for a
snapshot taken with memory.

In this lab, you will add a vCenter server to Unisphere and find the ESXi hosts managed by it. Then
you will associate previously created VMFS and NFS datastores with one of the ESXi hosts.

In this lab, you will add a Unity system as a storage provider in a vSphere environment and provision
VVol datastores to an ESXi host for storage policy-based virtual machine creation.

This module covered the procedures that must be performed to grant SAN hosts and NAS clients
access to provisioned storage in a Unity system. The module also showed how to configure ESXi host
access to VMFS, NFS, VVol (Block), and VVol (File) datastores.

This module focuses on the advanced features for Block storage provisioning in Unity. Topics include
the use of the FAST Cache and FAST VP features for performance and efficiency, applying compression
to Block storage resources, securing data against unauthorized access with Data at Rest encryption, and
limiting system resource consumption for Block storage access with the Quality of Service feature.

This lesson covers the FAST Cache and FAST VP advanced storage features for block storage.

The FAST Cache is a large capacity secondary cache that uses SAS Flash 2 drives to improve system
performance by extending the storage system's existing caching capacity. The FAST Cache can scale up
to a larger capacity than the maximum DRAM Cache capacity.

FAST Cache consists of one or more pairs of SAS Flash 2 drives in RAID 1 (1+1) and provides both
read and write caching. For reads, the FAST Cache driver copies data off the disks being accessed into
the FAST Cache. For writes, FAST Cache effectively buffers the data waiting to be written to disk.

At a system level, the FAST Cache reduces the load on back-end hard drives by identifying when a chunk
of data on a LUN is accessed frequently, and copying it temporarily to FAST Cache.

The storage system then services any subsequent requests for this data faster from the Flash disks that
make up the FAST Cache; thus, reducing the load on the disks in the LUNs that contain the data (the
underlying disks). The data is flushed out of cache when it is no longer accessed as frequently as other
data.

Subsets of the storage capacity are copied to the FAST Cache at a granularity of 64 KB chunks.

FAST Cache operations are non-disruptive to applications and users. It uses internal memory resources
and does not place any load on host resources.

Online expansion and shrinking of a FAST Cache is possible by adding or removing drives.

To create FAST Cache, the user needs at least 2 FAST Cache optimized drives in the system, which will
be configured in RAID 1 mirrored pairs. The system uses the Policy Engine and Memory Map components
to process and execute FAST Cache.

• Policy Engine – Manages the flow of I/O through FAST Cache. When a chunk of data on a LUN is
accessed frequently, it is copied temporarily to FAST Cache (the FAST Cache optimized drives). The
Policy Engine also maintains statistical information about the data access patterns. The policies
defined by the Policy Engine are system-defined and cannot be changed by the user.

• Memory Map – Tracks extent usage and ownership at a granularity of 64 KB chunks. The Memory Map
maintains information on the state of the 64 KB chunks of storage and their contents in FAST Cache. A
copy of the Memory Map is stored in DRAM memory, so when FAST Cache is enabled, SP memory is
dynamically allocated to the FAST Cache Memory Map.

During FAST Cache operations, the application gets the acknowledgement for an IO operation once it has
been serviced by the FAST Cache. FAST Cache algorithms are designed such that the workload is spread
evenly across all the flash drives that have been used for creating FAST Cache.

During normal operation, a promotion to FAST Cache is initiated after the Policy Engine determines that
a 64 KB block of data is being accessed frequently. To be considered, the 64 KB block must be
accessed by reads and/or writes multiple times within a short period of time.

A FAST Cache flush is the process in which a FAST Cache page is copied to the HDDs and the page is
freed for use. The Least Recently Used (LRU) algorithm determines which data blocks to flush to make
room for new promotions.

FAST Cache contains a cleaning process which proactively copies dirty pages to the underlying physical
devices during times of minimal backend activity.
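The promote-on-repeated-access and LRU-flush behavior described above can be modeled in miniature. The promotion threshold and capacity below are illustrative parameters, not Unity's published policy; only the 64 KB granularity comes from the text:

```python
from collections import OrderedDict

CHUNK = 64 * 1024  # promotion granularity: 64 KB

class FastCacheModel:
    """Toy model: a chunk is promoted once it has been accessed
    `threshold` times; when the cache is full, the least recently
    used chunk is flushed back to the pool drives."""

    def __init__(self, capacity_chunks: int, threshold: int = 3):
        self.capacity = capacity_chunks
        self.threshold = threshold
        self.hits = {}             # miss counts per chunk
        self.cache = OrderedDict() # chunk -> dirty flag, LRU-ordered

    def access(self, offset: int, write: bool = False) -> bool:
        """Return True on a FAST Cache hit, False when served from HDD."""
        chunk = offset // CHUNK
        if chunk in self.cache:
            self.cache.move_to_end(chunk)        # refresh recency
            self.cache[chunk] = self.cache[chunk] or write
            return True
        self.hits[chunk] = self.hits.get(chunk, 0) + 1
        if self.hits[chunk] >= self.threshold:   # hot enough: promote
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)   # flush the LRU chunk
            self.cache[chunk] = write
        return False
```

A flushed chunk is not lost — it simply has to earn its way back in through the same promotion mechanism, which mirrors the behavior described for shrink operations later in this module.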

EMC Unity allows users to increase the configured size of FAST Cache online, without impacting FAST
Cache operations on the system. Online expansion gives users the option of first configuring FAST Cache
with a minimal amount of drives, and growing the configuration as demands on the system are increased.

Each RAID 1 pair is considered a FAST Cache object.

To expand FAST Cache, free drives of the same size and type as those currently used in FAST Cache
must exist within the system. FAST Cache is expanded in pairs of drives, and can be expanded up to the
system maximum.

When a FAST Cache expansion occurs, a background operation is started to add the new drives into
FAST Cache. This operation first configures a pair of drives into a RAID 1 mirrored set. The capacity from
this set is then added to FAST Cache, and is available for future promotions. These operations are
repeated for all remaining drives being added to FAST Cache. During these operations, all FAST Cache
reads, writes, and promotions occur without being impacted by the expansion. The amount of time the
expand operation takes to complete depends on the size of drives used in FAST Cache and the number of
drives being added to the configuration.

Upon completion of the reconfiguration, the new space is available for use by FAST Cache.

In EMC Unity, removing drives from FAST Cache is possible while FAST Cache is configured and
servicing I/O. If at any time a number of drives need to be removed from FAST Cache, a shrink operation
can be started. FAST Cache shrink is issued in pairs of drives, and allows the removal of all but 2 drives
from FAST Cache. To remove all drives from FAST Cache, the Delete operation is used. FAST Cache
shrink is often utilized when drives need to be repurposed to a Pool for expanded capacity. Removing
drives from FAST Cache can be a lengthy operation, and can impact system performance.

The Shrink process is lengthy and requires flushing all dirty data from each set being removed to back-end
disk. When a FAST Cache shrink occurs, a background operation is started to remove drives from the
current FAST Cache configuration. Removing drives from FAST Cache reduces the size of FAST Cache
by the number of drives selected. After starting a shrink operation, new promotions are blocked to each
pair of drives selected by the system to be removed from FAST Cache. Next, each FAST Cache dirty
page within the drives to be removed is cleaned to ensure that data is synchronized with the locations on
the Pool. After all dirty pages are cleaned within a set of drives, the capacity of the set is removed from the
FAST Cache configuration. Data which existed on FAST Cache drives that were removed may be
promoted to FAST Cache again through the normal promotion mechanism.
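The shrink sequence above (block promotions, clean dirty pages, remove capacity) can be summarized in a short sketch. The structures and names are illustrative, not a Unity interface.

```python
# Illustrative sketch of the FAST Cache shrink flow described above;
# names and structures are hypothetical, not a Unity API.

def shrink_fast_cache(fast_cache, pairs_to_remove, flush_page_to_pool):
    """Remove RAID 1 pairs from FAST Cache, cleaning dirty pages first."""
    for _ in range(pairs_to_remove):
        pair = fast_cache["pairs"].pop()
        # 1. New promotions to this pair are blocked.
        pair["promotions_blocked"] = True
        # 2. Every dirty page is cleaned: written back to its location
        #    on the Pool so the data stays synchronized.
        for page in pair["dirty_pages"]:
            flush_page_to_pool(page)
        pair["dirty_pages"].clear()
        # 3. Only then is the pair's capacity removed from FAST Cache.
        fast_cache["capacity"] -= pair["capacity"]
    return fast_cache
```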

Copyright © 2017 Dell Inc.


[email protected] Unity Deep Dive 227
Data must be re-promoted to be serviced from FAST Cache.

FAST Cache can be created on licensed physical Unity systems with available SAS Flash 2 drives. In
Unisphere, the FAST Cache can be created when running the Initial Configuration wizard, or when
accessing the system settings page.

In the Settings page, select the FAST Cache option under the Storage Configuration section. Then click the Create button, select the FAST Cache disks from the available SAS Flash drives, and choose whether to enable FAST Cache for existing pools. The process creates the RAID groups, adds the storage to FAST Cache, and enables it for existing storage pools (if the checkbox was selected). The status of the drives in use can be seen in the FAST Cache disks list.

After FAST Cache is created, it can be expanded by adding more SAS Flash disks, shrunk by removing disks, or deleted.

When FAST Cache is enabled on the EMC Unity system, you have the option to expand the capacity of FAST Cache up to the system maximum. To expand FAST Cache on an EMC Unity system from Unisphere, navigate to the FAST Cache page found under Storage Configuration in the Settings window. From this window, select Expand to start the Expand FAST Cache wizard.

When expanding FAST Cache, you may only select free drives of the same size and type as what is
currently in FAST Cache. In this example, only 400 GB SAS Flash 2 drives are available to be selected, as
FAST Cache is currently created with those drives. From the drop-down list, you are able to select pairs of
drives to expand the capacity of FAST Cache up to the system maximum. In this example, 8 free drives
were found. Click OK to start the expansion process.

FAST Cache supports online shrink. The user can shrink the FAST Cache by removing drives from its
configuration. It is possible to remove all but 1 RAID 1 pair – each RAID 1 pair is considered a FAST
Cache object.

To shrink the FAST Cache, select Shrink, then select the number of disks to remove from the FAST Cache. A warning message is displayed: removing drives from FAST Cache requires the flushing of dirty data from each set being removed to disk.

Although the FAST Cache is a global resource, it is enabled on a per pool basis.

You can configure a pool to use the FAST Cache during pool creation.

For existing pools, navigate to the Pools page in Unisphere and select the storage pool to modify its
settings.

Click the pencil (edit) icon to open the properties page. Then use the General tab in the Storage Pool Properties page to enable the FAST Cache: check Use the FAST Cache and click Apply.

When reviewing the access patterns for data within a system, most access patterns show a basic trend.
Typically the data is most heavily accessed near the time it was created, and the activity level decreases
as the data ages. This trending is also referred to as the lifecycle of the data. EMC Unity Fully Automated
Storage Tiering for Virtual Pools (FAST VP) monitors the data access patterns within Pools on the system,
and dynamically matches the performance requirements of the data with drives that provide that level of
performance. FAST VP classifies drives into three categories, called tiers. These tiers are:

• Extreme Performance Tier – Comprised of Flash drives

• Performance Tier – Comprised of Serial Attached SCSI (SAS) drives

• Capacity Tier - Comprised of Near-Line SAS (NL-SAS) drives

FAST VP helps to reduce the Total Cost of Ownership (TCO) by maintaining performance while efficiently
utilizing the configuration of a Pool. Instead of creating a Pool with one type of drive, mixing Flash, SAS,
and NL-SAS drives can help reduce the cost of a configuration by reducing drive counts and leveraging
larger capacity drives. Data requiring the highest level of performance is tiered to Flash, while data with
less activity resides on SAS or NL-SAS drives.

EMC Unity has a unified approach to create storage resources on the system. Block LUNs, File Systems,
and VMware Datastores can all exist within a single Pool, and can all benefit from using FAST VP. In
system configurations with minimal amounts of Flash, FAST VP will efficiently utilize the Flash drives for
active data, regardless of the resource type. For efficiency, FAST VP also leverages low cost spinning
drives for less active data. The access patterns for all data within a Pool are compared against each other,
and the most active data is placed on the highest performing drives while adhering to the storage
resource’s tiering policy. Tiering policies are explained later in this document.

FAST VP Tiering policies determine how the data relocation will take place within the storage pool. The
available FAST VP policies are displayed here.

Use the “Highest Available Tier” policy when quick response times are a priority.

The “Auto Tier” policy automatically relocates data to the most appropriate tier based on the activity level
of each data slice.

“Start High, then Auto Tier” is the recommended policy for each newly created pool, because it combines the advantages of the “Highest Available Tier” and “Auto Tier” policies.

Use the “Lowest Available Tier” policy when cost effectiveness is the highest priority. With this policy, data
is initially placed on the lowest available tier with capacity.
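The effect of each policy on where a slice of data ends up can be sketched as below. The function and parameter names are illustrative simplifications, not Unity internals.

```python
# Hypothetical sketch of how each FAST VP tiering policy steers a slice
# of data during relocation; names are illustrative, not Unity internals.

def target_tier(policy, hot, available):
    """available: tiers with free capacity, ordered highest to lowest.
    hot: True if the slice is among the most active data in the pool."""
    if policy == "Highest Available Tier":
        return available[0]
    if policy == "Lowest Available Tier":
        return available[-1]
    if policy in ("Auto Tier", "Start High, then Auto Tier"):
        # Both relocate active slices up and idle slices down; they
        # differ mainly in where newly written data is first placed.
        return available[0] if hot else available[-1]
    raise ValueError("unknown policy: " + policy)
```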

When creating a multi-tiered pool, you can choose the RAID protection for each of the tiers being configured. A single RAID protection is chosen for each tier, and once the RAID configuration is selected and the pool is created, it cannot be changed. Only when you expand the pool with a new drive type can you select a RAID protection for that type.

This table shows the supported RAID types and drive configurations.

Remember to consider the performance, capacity and protection levels these configurations provide, when
deciding on the RAID configuration to adopt.

RAID 1/0 is suggested for applications with large amounts of random writes, as there is no parity write
penalty in this RAID type.

RAID 5 is preferred when cost and performance are a concern. RAID 6 provides the maximum level of
protection against drive faults of all the supported RAID types.

When considering a RAID configuration which includes a large number of drives, (12+1, 12+2, 14+2),
consider the tradeoffs that the larger drive counts contain, such as the fault domain and potentially long
rebuild times.
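The capacity side of that tradeoff is simple arithmetic. The sketch below computes approximate usable capacity per RAID group for the widths mentioned above (parity overhead only; it ignores formatting and spare overheads).

```python
# Worked example: approximate usable capacity of one RAID group for the
# drive configurations mentioned above. Ignores formatting/spare overhead.

def usable_capacity(raid_type, data, parity, drive_size_gb):
    """Usable GB of one RAID group with 'data' + 'parity' drives."""
    if raid_type == "RAID 1/0":
        # Mirrored pairs: half of all drives hold copies.
        return (data + parity) // 2 * drive_size_gb
    # RAID 5 (single parity) and RAID 6 (double parity): the parity
    # drives are excluded from usable capacity.
    return data * drive_size_gb

# e.g. a 12+1 RAID 5 group of 400 GB drives yields 4800 GB usable,
# while a 14+2 RAID 6 group of the same drives yields 5600 GB.
```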

To change system-level data relocation configuration on the Unity system, click on the Settings icon on the
top of the page, then under the Storage Configuration section select the FAST VP option. The FAST VP
settings page displays whether scheduled relocations are enabled for the system.

The Relocation Schedule defines the timeframe during which FAST VP relocations are allowed to occur.
Users can select which days, and the timeframe during those days, for relocations to occur. Relocations
will continue to cycle as long as the Relocation Window is open.

The user can manually pause (hit the pause button) and resume (hit the resume button) the scheduled
data relocations on the system, change the data relocation rate, disable and re-enable scheduled data
relocations, and modify the relocation window.
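A relocation window of this kind (allowed weekdays plus a daily time range) can be checked with a few lines of logic. This is a generic sketch, not Unity's scheduler; the parameter names are assumptions.

```python
# Illustrative check of whether a relocation window is open, assuming a
# window defined by allowed weekdays and a daily start/end time.
from datetime import datetime, time

def window_open(now, allowed_days, start, end):
    """allowed_days: set of weekday numbers (0 = Monday)."""
    if now.weekday() not in allowed_days:
        return False
    if start <= end:
        return start <= now.time() <= end
    # A window that crosses midnight, e.g. 22:00-06:00.
    return now.time() >= start or now.time() <= end
```

Relocations would keep cycling for as long as this check returns True.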

The example shows the Pool properties for Pool 1. From this page select the “FAST VP“ tab.

The page displays the FAST VP configuration. If a FAST VP license is installed on the system, you can view the following information for a pool:

• Whether the pool participates in scheduled data relocations.

• Estimated time needed for scheduled data relocations.

• Start and end time for the most recent data relocation.

• Number and type of disks in each tier.

• Amount of data in the pool scheduled to move to higher and lower tiers.

• Amount of data in the pool scheduled to be rebalanced within a tier.

Select the storage resource (LUN, Consistency Group, or File System) and click the pencil icon to edit its properties. The example shows the properties for LUN_2. Then navigate to the FAST VP tab. The page displays information about the tiers that are being used for data distribution.

From this page it is then possible to change the tiering policy for the data relocation.

This demo covers the configuration of FAST Cache and FAST VP support on Unity Systems using
Unisphere.

Click the Launch button to view the video.

This lab demonstrates how to configure and use the FAST VP support on Unity systems using the
Unisphere Interface.

This lab also demonstrates how FAST VP data relocation can be scheduled or manually started, and how
it behaves with the selection of tiering policies for the storage resources.

This lesson covers the Unity Compression feature. Students should be able to understand and describe the Unity Compression feature and its capabilities.

Students will also be able to describe the Unity Compression architecture and how Unity Compression is integrated into the current software model. Lastly, students should be able to identify how Unity Compression interoperates with other components such as snapshots, replication, and encryption.

Unity Compression is intended to lower the cost per storage consumed and also improve the cost per
IOPs, through better utilization of system resources.

Unity Compression provides the ability to reduce the amount of storage needed for user data on a storage device by compressing portions of the data at the time the data is first written. CPU resources that might otherwise go unused are employed to perform the compression on the write path.

Unity Compression is supported on physical hardware only. For Hybrid arrays, Unity Compression is only
supported on All Flash pools with no additional licenses required.

Unity Compression is supported on thin LUNs and VMFS datastores only. Unity Compression enabled
LUNs must be created from All Flash pools (AFP).

Unity Compression depends on the infrastructure provided by Persistent File Data Cache (PFDC). If the
LUN is not using PFDC, then compression will not be used.

Note: If you need to convert a pool to a hybrid pool (both Flash and non-Flash drives), any LUNs
that use compression must be deleted or moved. Hybrid pools cannot have compression enabled,
and you cannot create a compression-enabled LUN in a hybrid pool. An all-Flash pool can contain
both compression-enabled and non-compression enabled LUNs.

Unity Compression occurs inline between System Cache and the storage resource on an All Flash Pool.
The compression process is not invoked for write I/Os at this point in time in order to provide the fastest
response to the host.

When a block of data is written to the system, the data is saved in System Cache, and the write is
acknowledged with the host.

No data has been written to the drives within the Pool at this time.

In Unity the write caching process has been enhanced and all back-end allocations and lookups within the
target storage resource are deferred until after writes are accepted into System Cache and the host is
acknowledged. To ensure all data in cache is backed by disk, a portion of the private space within the
storage resource’s overhead is tracked and utilized as a possible location to store the I/O when accepting
data into cache.

After the I/O is acknowledged, the normal cache cleaning process occurs. Space within the storage
resource is utilized or allocated, if needed, and the data is saved to disk. This caching change not only
applies to compression enabled resources, but it is also applicable to Block and File storage resources
(excluding VVols) created on All Flash Pools.

For compression enabled storage resources, compression occurs during the System Cache’s proactive
cleaning operations or when System Cache is flushing cache pages to the drives within the Pool. The data
in this scenario may be new to the storage resource, or the data may be an update to existing blocks of
data currently residing on disk.

The data compression algorithm occurs before the data is written to the drives within the Pool. During the
compression process, multiple blocks are aggregated together and sent through a sampling algorithm,
which determines if the data can be compressed.

If the sampling algorithm determines a sufficient amount of space can be saved, the proper amount of
space is then allocated within the storage resource and the data is compressed and written to the Pool.

Compression will not compress data if the size of the compressed data and overhead to store the data is
greater than the original data size. Waiting to allocate space within the resource until after the
compression estimate is completed helps to not over-allocate space within the storage resource.
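The decision above, compress only when the compressed data plus its overhead is smaller than the original, can be sketched as follows. Unity's actual sampling algorithm is not public; zlib stands in for it here, and the overhead constant is a made-up illustration.

```python
# Sketch of the write-path decision described above: compress a block
# only if the compressed size plus metadata overhead beats the original.
# zlib stands in for Unity's (unspecified) compression algorithm.
import zlib

OVERHEAD = 16  # hypothetical per-block metadata cost, in bytes

def maybe_compress(block: bytes):
    """Return (data, compressed_flag) as it would be written to the Pool."""
    candidate = zlib.compress(block)
    if len(candidate) + OVERHEAD < len(block):
        return candidate, True   # sufficient savings: store compressed
    return block, False          # incompressible: store as-is
```

Highly repetitive data takes the compressed path, while data that does not shrink is detected and written unchanged, mirroring the behavior described above.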

When read operations are received, the current location of the data being requested and the compression
state is determined. If the data is not compressed, a normal read operation occurs as if compression is
disabled on the storage resource. If the data resides in System Cache, the data is sent to the host. If the
data resides on disk, the data is copied into System Cache and then sent to the host requesting the data.

If the data is compressed, it must first be uncompressed before the data is sent to the host. If the
compressed data already resides in System Cache, the data is uncompressed to a temporary location, the
data is sent to the host, and the temporary location is released. If the compressed data being requested
resides on disk, the data is first read into System Cache, uncompressed to a temporary location, and the
host is sent the data. Data is never uncompressed on disk due to a read operation, as this would reduce
the amount of savings on the storage resource.
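The read path can be sketched the same way: compressed data is expanded into a temporary buffer for the host, and the on-disk copy is never rewritten uncompressed. As before, zlib is only a stand-in for the real codec.

```python
# Sketch of the read path described above; zlib stands in for the codec.
import zlib

def read_block(stored: bytes, compressed: bool) -> bytes:
    if not compressed:
        return stored                 # normal read, no extra work
    temp = zlib.decompress(stored)    # uncompress to a temporary location
    return temp                       # the temporary buffer is then released
```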

Unity Compression is supported on standalone LUNs, LUNs contained within a Consistency Group, or
VMware VMFS datastores.

All Unity software features are also supported with Unity Compression. The following sections talk
specifically about certain features of the Unity storage system, and how they relate to Compression.

Storage Resources utilizing Unity Compression can be replicated using any supported replication
software, such as Native Synchronous Block Replication or Native Asynchronous Replication to any
supported destination system. All data replicated, regardless of whether the replication is local or to a remote system, is first uncompressed and then replicated to the destination. This method of replicating compression enabled
storage resources ensures that all replication topologies are supported as if compression is not enabled
on the resource. Replicating to systems which do not support compression is also supported, such as
replicating to UnityVSA or a physical Unity system not running a code version which supports Unity
Compression.

Unity Compression can also be enabled on only the source, only the destination, or both the source and
destination storage resources, depending on if the system and Pool configuration support Unity
Compression. This allows the user to fully control where to implement Unity Compression. One example
of a supported replication configuration is when utilizing Asynchronous Local Replication. The Source
storage resource may reside on an All Flash Pool and have compression enabled, but the destination may
be on a large capacity Hybrid Pool which does not support compression. Another example of a supported
configuration is when replicating a storage resource from a UnityVSA system or a production system not
utilizing Unity Compression, to a storage resource with compression enabled on a remote system.

The Unity Snapshots feature is fully supported with Unity Compression. Unity Snapshots also benefit from
the space savings achieved on the source storage resource. When taking a Snapshot of a compression
enabled storage resource, the data on the source may be compressed. The data is left in its compressed
state, and the Snapshot inherits the savings achieved on the source storage resource. This savings is also
calculated and reported as part of the GBs saved compression information.

When a snapshot is mounted and the source storage resource has compression enabled, Unity
Compression is also utilized on any snapshot I/O. If a read is received for a compressed block of data, the
data is uncompressed and sent to the requestor. Savings can also be achieved on writes to a snapshot.
As write operations are received, if the source storage resource has compression enabled, snapshot
writes are also passed through the Unity Compression Algorithms. This savings is tracked and reported as
part of the GBs saved for the source storage resource.

Data at Rest Encryption (D@RE) is fully supported on systems utilizing Unity Compression. Unity
Compression is not impacted by Data at Rest Encryption, as all compression operations occur on data
residing in System Cache. For data being written to disk, the data is first compressed within System
Cache, then written through hardware-based encryption modules to the backend drives. For reads from
disk, the data is first decrypted and saved into System Cache before being uncompressed and sent to the
host.

Unity Native SANCopy Import allows Block objects created on supported VNX systems to be imported to
Unity. When configuring an Import Session, Unity Compression is supported on the destination as long as
the destination system supports Unity Compression and the target storage resource resides on an All
Flash Pool. When creating an Import Session, if the target resource supports compression, a checkbox is
available to enable compression on the destination resource. As data is migrated from the source VNX
system to the Unity system, it is compressed as it is written to the Pool.

Expanding a Pool which contains compression savings is only supported if it is being expanded by
supported Flash Drives. For instance, a Pool containing SAS Flash 2 or SAS Flash 3 drives can be
expanded by adding more SAS Flash 2 or SAS Flash 3 drives to the Pool. If the Pool contains SAS Flash
4 drives, it can only be expanded using SAS Flash 4 drives. While compression savings exist within the
Pool, adding spinning drives to convert the Pool to a Hybrid Pool is not supported.

To expand and convert an All Flash Pool containing compression savings to a Hybrid Pool, all savings must first be removed from the Pool. This means not only disabling compression on all storage resources within the Pool or utilizing Move to migrate data to a new Pool, but also ensuring that all compression savings on the Pool are reduced to 0. Disabling compression on a resource does not cause the compressed data within the resource to be uncompressed. Removing the savings can be achieved by using Move to a resource on the same Pool with compression disabled, or to another Pool.

All LUNs created on an All-Flash pool have Unity Compression disabled by default. The LUN creation wizard includes a checkbox that must be selected during LUN creation if Unity Compression is to be enabled on that LUN.

Unity Compression is supported on Thin LUNs, Thin LUNs within a Consistency Group, and VMware VMFS datastores. All Flash Pools can be created on a Unity Hybrid Flash system or a Unity All Flash system. Within a Consistency Group, compression-enabled LUNs can be mixed with LUNs which have compression disabled.

Note the Compression checkbox is displayed for thin LUNs in All-Flash pools only. It allows the thin LUN
to be compressed to save space.

If an upgrade is performed from the Unity GA release to a higher release, compression will not automatically be enabled on existing LUNs; the user must manually select Unity Compression on a LUN-by-LUN basis.

Compression on a LUN can be enabled or disabled at any time. When done on an existing LUN, all
existing data is left uncompressed and will only be compressed upon a rewrite of the data.

When disabling Unity Compression, data is left compressed until the data is overwritten or migrated by
using “LUN Move” operation.

Compression stops for new writes when sufficient resources are not available, and resumes automatically
once enough resources are available. Data that cannot be compressed is detected, and is written
uncompressed.

Note: When you enable compression, only new data is compressed, not data that already exists on the
LUN. In order to compress existing data, you must move the LUN's data to a destination LUN that has
compression enabled.

Move, a feature introduced in Unity OE version 4.1, is used to move data from a supported source storage resource to a target storage resource within the same system. The Move operation is completely
transparent to any associated hosts, and no interruption to access is seen. Move can be utilized to move a
storage resource from one Pool to another, or within the same Pool. When utilizing the Move feature, the
Session Priority and destination Pool may be customized. You can also move to a compression enabled
resource if the destination Pool is an All Flash Pool and the storage resource supports compression. A
Move session can be cancelled at any time.

As with other features, Move can be managed in Unisphere, Unisphere CLI, and REST API. The
Compress Now option is only available within Unisphere, and is only a method to start a Move operation
to a compression enabled storage resource within the same Pool.

For more information on Move and any restrictions of its usage, refer to the white paper titled EMC Unity:
Migration Technologies.

Compress Now, also a feature introduced in Unity OE version 4.1, is used to compress existing data
within a compression enabled storage resource. This feature is only available for compression enabled
storage resources. When selected, the Compress Now operation automatically starts a Move process to a
compression enabled storage resource residing within the same Pool. As the Move operation copies data
to a compression enabled storage resource, the data is compressed. The Compress Now operation is
most commonly utilized after enabling compression on an existing storage resource to attain compression
savings on existing data.

The Compress Now option may be utilized at any time, not only after enabling compression on a storage
resource. One benefit to utilizing Compress Now is that it sequentially moves data to a new storage
resource. This reorganization can help to improve performance of sequential workloads if the data was
originally written randomly to the resource. The reorganization may also increase the savings achieved by
Unity Compression by putting compressible data sequentially together. For maximum space savings, it is
recommended that only 1 Compress Now operation per Storage Processor be running on the system at a
time.

Note: If you need to convert a pool to a hybrid pool (both Flash and non-Flash drives), any LUNs
that use compression must be deleted or moved. Hybrid pools cannot have compression enabled,
and you cannot create a compression-enabled LUN in a hybrid pool. An all-Flash pool can contain
both compression-enabled and non-compression enabled LUNs.

As with enabling compression on a storage resource, Unity Compression can be disabled at any point in
time. This can be completed in Unisphere from the properties window of the storage resource, or by
utilizing Unisphere CLI or REST API. After compression is disabled, all data for the storage resource is left in its current state within its Pool, whether it is compressed or not. Data written after disabling compression
will be stored as uncompressed. As previously compressed data is overwritten, compression savings are
reduced on the storage resource. To fully remove compression savings from a storage resource, Move
can be utilized by specifying a non-compressed destination.

This demo covers the process to enable and disable inline compression (ILC) on a Unity storage system.

Click the Launch button to view the video.

This lesson covers the Unity feature that provides data security against unauthorized access. The lesson discusses the D@RE components and explains how the D@RE keys are used.

Unity systems offer controller-based Data at Rest Encryption (D@RE). The Unity Data at Rest Encryption feature protects against unauthorized access to lost, stolen, or failed drives by ensuring all sensitive user data on the system is encrypted.

Data is encrypted with strong, FIPS 140-2 Level 1 compliant encryption using Advanced Encryption Standard (AES) algorithms. This supports compliance with industry and government data security regulations that require or recommend encryption, including HIPAA (healthcare), PCI DSS (payment cards), and GLBA (finance).

Hardware-based encryption modules located in the SAS I/O controller chip in all the 12 Gb/s SAS I/O modules and embedded in the Storage Processor encrypt data as it is written to the backend drives, and decrypt the data as it is retrieved from these drives.

Since the encryption/decryption functions occur in the SAS controller, it has minimal impact on data
services such as replication, snapshots, etc. There is little to no performance impact with encryption
enabled.

An internal key manager generates and manages encryption keys. This method is simpler, lower cost, and
more maintainable than self-encrypting drives.

With the encryption hardware embedded in the array, the feature is drive-vendor and drive-type agnostic, allowing use of any disk drive type and eliminating drive-specific vendor overhead.

Securely decommissioning arrays is easily accomplished by deleting pools; this in turn deletes all drive encryption keys and most often eliminates the need to shred disk drives. Encryption is a licensed feature and will not appear in the Licenses page if the license is not active. No data-in-place upgrades are supported, and changing the encryption state requires a destructive re-initialization.

Unity D@RE uses an internal fully-automated key manager which generates, stores, deletes, and
propagates security encryption keys.

Key Manager monitors for keystore configuration changes (for example, a storage pool being configured, or a disk being added to a pool) that result in key creation or deletion.

Key Manager uses RSA BSAFE libraries to generate several encryption keys, and stores these keys in a secure keystore. The combined use of these encryption keys ensures that neither the drives themselves, nor the keys which encrypt these drives, can be read. The encryption keys generated by Key Manager are: Data Encryption Keys (DEKs), Key Encryption Keys (KEKs), and the Key Encryption Key Wrapping Key (KWK).

A Data Encryption Key (DEK) is a randomly generated key that is used to encrypt data on a particular
drive. There is a unique DEK for each bound drive, which is created when that drive is bound, and deleted
when that drive is unbound. Each DEK is a unique 512-bit key for the XTS-AES (Advanced Encryption Standard) algorithm. All DEKs are stored in a single encrypted keystore, which must be
available in order to access encrypted data on the system.

The second type of key used is referred to as a Key Encryption Key (KEK), and is a 256 bit key generated
to protect and secure the DEKs as they move through the storage system, such as to and from the SAS
controller. A new KEK is generated each time the storage system boots, by using the AES Key Wrap
algorithm. The DEKs are wrapped with the KEK and stored in the keystore.

The Key Encryption Key Wrapping Key (KWK) is used to wrap the KEK as it travels throughout the array
and to the SAS controller. Similar to the KEK, the KWK is also generated using the AES Key Wrap
Algorithm.

New DEKs are created through any method of binding drives into a private RAID group, such as creating or expanding a storage pool or FAST Cache, or when sparing in a new drive. Each time a drive is bound, an entirely new, unique key is created.

Drive keys are also permanently deleted as a result of unbinding drives, such as when a storage pool or FAST Cache is deleted or FAST Cache is shrunk.

Because DEKs are permanently deleted whenever drives are unbound, simply deleting storage pools and FAST Cache can be an effective method of rendering residual data unreadable, as the drives will never again be able to be decrypted without their corresponding DEKs.

The keystore should be backed up whenever drives are added to or removed from a pool. The keystore
can later be restored in the event the existing on-array keystore becomes corrupted or unavailable.
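The DEK lifecycle above maps naturally onto a keystore model: a unique key appears on bind and is destroyed on unbind. The sketch below is a toy illustration only; real DEKs are 512-bit XTS-AES keys protected by KEK/KWK wrapping, which this model omits, and all names here are hypothetical.

```python
# Toy model of the DEK lifecycle described above: a unique key is
# generated when a drive is bound and permanently deleted when it is
# unbound. KEK/KWK wrapping of the keystore is intentionally omitted.
import secrets

keystore = {}

def bind_drive(drive_id):
    # Each bind creates an entirely new, unique DEK
    # (here: 64 random bytes, i.e. 512 bits).
    keystore[drive_id] = secrets.token_bytes(64)

def unbind_drive(drive_id):
    # Deleting the DEK renders residual data on the drive unreadable.
    del keystore[drive_id]
```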

Data at Rest Encryption is a licensed feature available only for physical deployments of Unity systems.
The encryption state is set the first time a license is applied.

Once set, the encryption state cannot be changed without completely reinitializing the array (which
removes all data and configuration).

From the Unisphere GUI, open the Settings window and select Encryption under the Management
section. From this page it is possible to verify the status of Controller Based Encryption, perform a
manual off-array backup of the keystore file, and download the audit logs and their checksums. Keystore
file backup is very important to preserve the drives' data encryption keys.

There are some key considerations that must be observed for Unity systems with Data at Rest
Encryption enabled.

The keystore is tied to the storage processors, so care must be taken when performing service operations
in order to preserve the encryption keys. System maintenance that involves the replacement of the
chassis and both storage processors requires some precautions. During the replacement procedure, both
storage processors should not be replaced at the same time: one of them should be retained until the
array is back online, before replacing the second storage processor. If both storage processors have
already been replaced at the same time and the keystore was lost, the keystore may be restored from an
external backup.

In the unlikely event that all redundant keystore copies stored securely on the array become corrupted or
otherwise unavailable, the keystore backup file can be used to restore access to the data. Unity also
maintains an audit log that records keystore operations such as D@RE feature activation, key creation,
key deletion, keystore backup, disk encryption completion, and I/O module addition. These audit logs can
be downloaded along with their corresponding checksum information.

The Unity Data at Rest Encryption feature only encrypts and decrypts data as it passes through the SAS
controller. In other words, the data is only encrypted and secure when stored on the back-end drives or
the internal M.2 SATA device. Data that travels over the network to external hosts is not protected. To
protect data at the host access level, an external data-in-flight encryption solution must be used, such as
SMB protocol encryption or host-based encryption.

This demo covers how to verify whether D@RE support is enabled in a Unity system and how to back up
the keystore file.

Click the Launch button to view the video.

This lesson covers the Host I/O Limits feature. Students will be able to explain the Host I/O Limits feature,
its benefits, and its use cases; describe the available policies created for Host I/O Limits; explain how
policies can be paused and resumed; and describe how I/O Burst policies are calculated.

Unity Host I/O Limits or Quality of Service (QoS) is a feature that limits I/O to Block storage resources:
LUNs, attached snapshots, and VMFS Datastores. Host I/O Limits can be set on physical or virtual
deployments of Unity systems. Limiting I/O throughput and bandwidth provides more predictable
performance in system workloads between hosts, their applications, and storage resources.

Host I/O Limits are either enabled or disabled system-wide; when the feature is enabled, limits take effect
as soon as policies are created and assigned to storage resources. In the GA release of Unity, Host I/O
Limits provided a system-wide pause and resume control. For Unity OE v4.1, the pause and resume
feature was enhanced to allow users to pause and resume a specific Host I/O limit at the individual policy
level.

Limits can be set by throughput in I/Os per second (IOPS) or Bandwidth defined by Kilobytes or
Megabytes per second (KBPS or MBPS), or a combination of both limits. If both thresholds are set, the
system limits traffic according to the threshold that is reached first.

Only one I/O limit policy can be applied to a storage resource. For example, an I/O limit policy can be
applied to an individual LUN or to a group of LUNs. When an I/O limit policy is applied to a group of LUNs,
it can also be shared.

When a policy is shared, the limit applies to the combined activity from all LUNs in the group.

When a policy is not shared, the same limit applies to each LUN in the group.

Host I/O Limits provides more granular and predictable performance in system workloads between hosts,
their applications, and storage resources.
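The shared versus non-shared behavior, and the rule that whichever threshold is reached first governs, can be sketched as follows. This is an illustrative model only, not Unity code; the function names and numbers are assumptions chosen to mirror the examples in this lesson.

```python
# Hedged sketch (not the Unity implementation) of how a Host I/O Limit policy
# applies to a group of LUNs.

def group_ceiling(policy_iops: int, lun_count: int, shared: bool) -> int:
    """Total IOPS a group of LUNs can drive under one policy.

    shared=True:  the limit caps the combined activity of all LUNs in the group.
    shared=False: each LUN in the group individually gets the full limit.
    """
    return policy_iops if shared else policy_iops * lun_count

def throttled(iops: float, kbps: float, iops_limit=None, kbps_limit=None) -> bool:
    """True when traffic should be throttled; if both thresholds are set,
    the one reached first governs."""
    over_iops = iops_limit is not None and iops > iops_limit
    over_kbps = kbps_limit is not None and kbps > kbps_limit
    return over_iops or over_kbps
```

With a shared 100-IOPS policy on two LUNs, `group_ceiling(100, 2, True)` caps their combined traffic at 100 IOPS; unshared, each LUN may drive up to 100 IOPS (200 total for the group).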

Host I/O Limits is useful for service providers to control service level agreements. If a customer would like
an SLA that specifies 500 IOPS, a limit can be put in place that allows a maximum of 500 IOPS. A Service
Provider can create Host I/O policies that meet these requests.

Another use case is setting limits for, say, runaway processes or noisy neighbors: processes that take
resources away from other processes.

In a test and development environment, a LUN with a database on it may be used for testing, and you
want to run some tests against it. Users can create a snapshot of the LUN and mount it. Putting a limit on
the snapshot is useful to limit I/O on the snap, since it is not a production volume.

The slide shows the available policy level controls and status conditions that are displayed in Unisphere.
We’ll look at the Policy Status conditions in the upcoming slide.

New in Unity OE v4.1 is the ability to pause and resume a specific host I/O limit. Unlike the policies
enforced with Unity GA, this feature allows each configured Host I/O policy to be paused or resumed
independently of the others, whether or not the policy is shared.

Pausing the policy stops the enforcement of that policy.

Resuming the policy will immediately start enforcement of that policy and throttle the I/O accordingly.

Policy Status can be viewed for each host I/O policy and is in one of three conditions:

Active, Paused, or Global Paused.

The table looks at the relationship between the System Settings and the Policy Status. When a policy is
created, the policy is displayed as “Active” by default.

System Settings are global settings and are displayed as either Active or Paused.

When the System Settings are displayed as Active, the Policy Status is displayed as Active or Paused
depending on the status of the policy when the System Settings were changed.

For example, suppose the System Setting is “Active” and the user has configured three policies: A, B, and
C. The user could pause A, and the system would update the status of A to “Paused”. The other two
policies, B and C, would still display an “Active” status. If at this point the user decided to change the
System Setting to “Paused”, the Policy Status would be displayed as “Global Paused” on policies B and C
but “Paused” on A, and none of the policies would be enforced.

When both the System setting and Policy Setting are Paused, the Policy Status will be shown as Paused.
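The status rules described above can be condensed into a tiny decision function. This is an illustrative sketch, not Unity code; the function name and boolean inputs are assumptions.

```python
# Sketch of the displayed Policy Status as a function of the system-wide
# setting and the per-policy setting (illustrative only).

def policy_status(system_paused: bool, policy_paused: bool) -> str:
    if policy_paused:
        return "Paused"           # an individually paused policy shows Paused,
                                  # even when the system-wide setting is Paused
    if system_paused:
        return "Global Paused"    # active policy suspended by the system-wide pause
    return "Active"
```

In the three-policy example above, pausing A then pausing the system yields "Paused" for A and "Global Paused" for B and C.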

There are two ways to set a Host I/O Limit at the system level: users can click the “Settings” icon and then
navigate to and expand the “Management” > Performance option, or, from the Host I/O Limits page, select
“Manage host I/O limits system settings” from the upper-right text option. Both link to the same page.

The example shows the Settings method. Selecting “Performance” displays the “Host I/O Limits Status”.

<click> If there are “Active” policies, users have the option to “Pause” the policies on a system wide basis.

Once a user selects the “Pause” option, they will be prompted to confirm the operation. (Not shown)

<click> The Host I/O Limits Status now displays a “Paused” Status and users have the option to “Resume”
the policy.

Navigating to the Performance > Host I/O Limits page shows the policies that were affected by the Pause.
In the example, three policies display a Status of “Global Paused” indicating a System wide enforcement
of those policies.

Select “Resume” to allow the system to continue with the throttling of the policies.

The example displays the Host I/O Limits policies from the Unisphere > System > Performance > Host I/O
Limits window. Several policies have been created, three of which show a status of “Active”. (Remember,
when a policy is first created its status displays as Active by default.) The Density_Limit_1 policy is
selected.

From the “More Actions” tab, users have the chance to Pause an active policy (Resume will be greyed
out). Once the Pause option is selected, a warning message is issued to the user to confirm the Pause
operation.

Selecting Pause starts a background job and, after a few seconds, causes the Status of the policy to be
displayed as “Paused”. All other policies are still “Active”, since the pause was done at the policy level,
not the system level.

In the example, the IO_Burst_Policy has been selected, and an option to “Resume” the policy is available
from the “More Actions” drop-down menu.

Note that a “Global Paused” policy cannot be resumed directly; it must first be “Paused” before the
“Resume” operation becomes available.

Host I/O Limits allows users to implement a shared policy when the initial Host I/O policy is created. The
setting is in effect for the life of that policy and cannot be changed; users must create another policy with
the Shared check box unchecked if they want to disable the setting.

When the Shared box is left unchecked, each individual resource is assigned its own specific limit(s).

When the Shared box is selected, the resources are treated as a group, and all resources share the
limit(s) applied in the policy.

In the example, a Host I/O policy has been created to limit the number of host IOPS to 100.

In this case, both LUN 1 and LUN 2 share these limits. This does not guarantee the limits are distributed
evenly; if a particular LUN is driving more IOPS, it will be serviced.

Also, limits are shared across Storage Processors: it does not matter which SP owns a LUN, the policy
applies to both.

A Host I/O Limit policy can be one of two types; absolute or density-based. An absolute limit applies a
maximum threshold to a storage resource regardless of its size.

In the example, there are three LUNs under the same policy, setting an absolute policy for the LUNs would
limit each LUN to 1000 IOPS regardless of its size.

Host I/O Limits has been enhanced to include a policy based on a per-GB value for a given resource. A
density-based limit scales with the amount of storage that is allocated to the storage resource.

The limit is based on the resource's capacity. As with other limits, the policy can be shared with other
resources. When a density policy is in place, the IOPS and bandwidth limits are based on a per-GB value,
not a fixed maximum value as with an absolute policy.

Customers in today's storage environment are looking for ways to consume storage that not only meets
their capacity requirements, but also meets their performance requirements. At the same time, this
storage consumption model needs to be presented as a predictable and easily consumable service. A
Host I/O density policy can meet these needs.

Consider the case where a 7.2K RPM disk drive is capable of handling 50 IOPS. There are several drives
with varying capacities that spin at 7.2K RPM, and as the size of the drive increases, the IOPS-per-capacity
ratio worsens (i.e., you get 50 IOPS/TB with a 1TB drive but only 6.25 IOPS/TB with an 8TB drive).
Implementing a density-based Host I/O Limit can solve this problem.

I/O density is the measurement of host IOPS generated over a given amount of stored capacity, expressed
as an IOPS/GB ratio. Put another way, I/O density measures how much performance can be delivered by
a given amount of storage capacity. Customers with a good understanding of their applications can then
use density policies to offer certain storage service levels based on this knowledge.

The graph on the left displays the calculations for a 200GB LUN with a Max IOPS per GB setting of 2.
The IOPS-to-GB ratio would be 2 (400/200 = 2).

The customer now expands the LUN to 400GB, which increases the IOPS limit to 800. This calculates to
800/400GB = 2. The benefit to the customer is that the performance-to-GB ratio is maintained even when
expanding existing LUNs, without having to create a whole new policy.
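The ratio-preserving arithmetic above can be written as a one-line function. This is an illustrative sketch; the function name is an assumption, and the numbers mirror the 200GB/400GB example on this slide.

```python
# Density-based limit: the IOPS ceiling scales with allocated capacity,
# so the IOPS/GB ratio is preserved when a LUN is expanded (illustrative).

def density_iops_limit(capacity_gb: float, iops_per_gb: float) -> float:
    """IOPS ceiling that grows with the resource's capacity."""
    return capacity_gb * iops_per_gb

before = density_iops_limit(200, 2)   # 400 IOPS -> 400/200 = 2 IOPS/GB
after = density_iops_limit(400, 2)    # 800 IOPS -> 800/400 = 2 IOPS/GB, unchanged
```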

The density-based Host I/O Limit is calculated by multiplying the resource size by the density limit that is
set by the Storage Administrator. Once set, the Host I/O Limits driver throttles the IOPS based on this
calculation.

LUN A is a 100 GB LUN, so the calculation is 100 (Resource Size) x 10 (Density Limit). This sets the
maximum number of IOPS to 1000.

LUN B is 500 GB so the calculation is 500 (Resource Size) x 10 (Density Limit). This sets the maximum
number of IOPS to 5000.

The Service Provider can simply add both LUNs under a single Density Host I/O Limit to implement the
policy. In previous Host I/O Limits implementations, the Service Provider would have had to create two
policies, one for each LUN.

The shared density-based Host I/O Limit is calculated by multiplying the combined size of all resources
sharing the policy by the density limit that is set by the Storage Administrator. Once set, the Host I/O
Limits driver throttles the IOPS based on this calculation.

In the example, LUN A is a 100 GB LUN and LUN B is 500 GB, so the calculation is 100 + 500 (combined
resource size) * 10 (Density Limit). This sets the maximum number of IOPS to 6000.

The Storage Administrator can simply add both LUNs under a single shared Density Based Host I/O Limit
to implement the policy.
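Both density-based forms from this lesson, per-resource and shared, can be sketched together. This is illustrative only; the function name is an assumption, and the numbers mirror the LUN A/LUN B examples.

```python
# Density-based Host I/O Limit arithmetic (illustrative, not Unity code):
# per-resource ceilings when unshared, one combined ceiling when shared.

def density_limits(sizes_gb, iops_per_gb, shared):
    """Return per-resource ceilings, or a single combined ceiling if shared."""
    if shared:
        return sum(sizes_gb) * iops_per_gb
    return [size * iops_per_gb for size in sizes_gb]

# LUN A = 100 GB, LUN B = 500 GB, density limit = 10 IOPS/GB
per_lun = density_limits([100, 500], 10, shared=False)   # [1000, 5000]
combined = density_limits([100, 500], 10, shared=True)   # 6000
```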

When configuring density-based limits, there are minimum and maximum values the GUI accepts as
input; these values are shown in the slide.

If a user tries to configure a value outside the limits, as shown in the example, the box is highlighted in
red to indicate the value is incorrect.

Hover over the box to view the maximum allowed value.

The Burst feature typically allows for a one-time exception that recurs at some user-defined frequency.
This allows applications, such as those affected by a boot storm, to periodically catch up. For example, if
a limit was set up to cap IOPS in the morning at some level, you might set up an I/O Burst policy for some
period of time to account for the increased log-on traffic.

The Burst feature provides Service Providers with an opportunity to upsell an existing SLA. Service
Providers can afford customers the chance to use more IOPS than the original SLA called for. If
applications are constantly exceeding the SLA on a regular basis, they can go back to the customer and
sell additional usage of the extra I/Os allowed.

When creating a Host I/O Limit policy, users can select the “Optional Burst Settings” from the
Configuration page of the wizard; if there is an existing policy in place, users can edit that policy at any
time to create a burst configuration. If a Unity system was running GA code and had policies already in
place, once an upgrade to Unity OE v4.1 code has been completed, users can change the settings and
add the burst policy if needed.

This is also where users configure the duration and frequency of when the policy runs. The timing starts
when the burst policy is created or edited; it is not tied in any way to the system or NTP server time.
Having the timing run in this manner prevents several policies from running at the same time, say at the
top of the hour.

Burst settings can be changed or disabled at any time by clearing the Burst setting in Unisphere.

Host Burst configuration parameters can be set at creation or when an existing policy is edited.

The “Burst” percentage option is the amount of traffic over the base I/O limit that can occur during the
burst time. This is a percentage of the base limit, configurable from 1% to 100%.

The “For” option is the duration, in minutes, the burst is allowed to run (not a strict window), set from
1 minute to 60 minutes. This setting is not a hard limit and is used only to calculate the extra I/O
operations allocated for bursting. The actual burst time depends on I/O activity and can be longer than
defined when activity is lower than the allowed burst rate.

The “Every” option is the frequency to allow the burst to occur. Set from 1 hour to 24 hours.

The example shows a policy configured to allow a 10% increase in IOPS and bandwidth. The duration of
the window is 5 minutes and the policy runs every 1 hour.

The burst allowed is based on several factors shown below:

• The number of extra IOPS to allow is calculated using the Limit, Burst, and For settings

• The actual burst will never go above the calculated burst limit

• The actual length of the burst can vary

The burst allowance resets at the next interval, based on the “Every: x hour(s)” setting.

The example shows how an I/O burst calculation is configured. The policy allows X number of extra IOPS
to occur, based on the percentage the user input.

In this case, the absolute limit is 1000 IOPS with a burst percentage of 20%. The burst is allowed for a
five-minute period and resets at 1-hour intervals. The policy will never allow the IOPS rate to go above
this 20% limit.

The budget is 1000 IOPS * 0.20 * 5 * 60 = 60,000 extra I/Os allowed.

After the additional I/O operations allocated for bursting are depleted, the limit returns to 1000 IOPS. The
policy will not be able to burst again until the 1-hour interval ends.

Note the extra I/Os are not all allowed to happen at one time; you will only reach the burst percentage
increase over time. The duration comes down to how long it takes to consume the extra I/Os at the burst
rate.
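The budget arithmetic above can be written directly. This is an illustrative sketch; the function name is an assumption, and the numbers mirror this slide's example.

```python
# Extra-I/O budget for one burst window (illustrative):
# limit * burst% * duration-in-seconds.

def extra_io_budget(limit_iops: float, burst_pct: float, for_minutes: float) -> float:
    """Extra I/O operations allocated to a single burst window."""
    return limit_iops * (burst_pct / 100) * for_minutes * 60

budget = extra_io_budget(1000, 20, 5)   # 1000 * 0.20 * 300 = 60,000 extra I/Os
```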

Here we see two scenarios that may be encountered when configuring an IOPS burst.

In the first case, the host I/O is always above the Host I/O Limit and the Burst Limit: a Host I/O Limit and
Burst Limit are configured, but the incoming host I/O continually exceeds these values.

Next, we have a case where the host I/O is above the Host I/O Limit but below the Burst Limit: the host
IOPS generated are somewhere in between these two limits.

In this scenario, the host I/O being sent is always greater than the Host I/O Limit and Burst Limit values.

When an IOPS burst policy is configured, the policy does the following:

It throttles the host I/O so that IOPS never go above the Burst Limit ceiling. If the burst limit is 20%, then
only 20% more IOPS are allowed at any point in time.

For this scenario, the duration of the extra IOPS will nearly match the “For” setting.

Once all extra IOPS have been consumed, the burst allowance ends and the extra I/O calculations are
refreshed.

Here is a graph showing the total incoming IOPS on the Y axis and the time in minutes (60 min) on the
X axis.

The Host I/O Limits are configured as a maximum of 1000 IOPS with a burst percentage of 20%
(1200 IOPS). The duration of the burst is 5 minutes, and it will refresh every hour.

We can see the host target IOPS is around 1500, well above the Host I/O Limit and Burst Limit settings.
This is the I/O that the host is attempting to perform. The blue line is the Host I/O Limit, so the system will
try to keep the I/O at this limit of 1000 IOPS.

The Burst Limit calculated from the user input is 1200 IOPS. The policy will never allow the IOPS to go
above the burst limit. It also means the burst will match the “For” window for its full duration, since the
host I/O is always above the other limits.

The IOPS come in and are throttled by the Host I/O Limit of 1000 IOPS.

IOPS continue until the 42-minute mark, where the 5-minute window allows I/O to burst to 1200 IOPS
during that period.

Let's look a bit closer at the way the Burst feature throttles the I/Os. The calculations are the same as in
the previous scenario, where the total number of extra I/Os was Limit * Burst% * For (in seconds):
1000 * 0.20 * 300, giving 60,000.

The I/O burst period starts, and a calculation is taken between minute 39 and 40 (60 secs). In that 60
secs, we allowed an extra 200 IOPS (1200 – 1000), so 200 * 60 produces the value of 12,000. Every
60-sec sample period then allows 12,000 extra I/Os.

Our “For” value is 5 minutes, so in a 5-minute period we should use our 60,000 extra I/Os
(12,000 * 5 = 60,000). The 12,000 is subtracted from our total of 60,000 for each 60-sec period
(60,000 – 12,000 = 48,000).

This continues for the duration of the burst: every 60-sec period subtracts an additional 12,000 I/Os until
the allotted extra I/O value is depleted.

Since the host I/O rate was always above our calculated values during the period, the extra I/Os are used
within the 5-minute window. Once the burst ends, it will start again in 1 hour, as determined by the
“Every” parameter.
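The depletion walk-through above can be condensed into one formula. This is an illustrative sketch, not Unity code; the function name is an assumption, and the numbers mirror this scenario.

```python
# Minutes until the extra-I/O budget is consumed (illustrative). Demand below
# the burst ceiling would stretch the window; here demand exceeds it.

def burst_minutes(budget: float, limit_iops: float,
                  burst_ceiling: float, host_iops: float) -> float:
    """How long the burst lasts before the extra-I/O budget is depleted."""
    extra_per_minute = (min(host_iops, burst_ceiling) - limit_iops) * 60
    return budget / extra_per_minute

# Host demand (1500) exceeds the 1200 ceiling, so 12,000 extra I/Os are
# consumed per minute and the 60,000 budget lasts the full 5-minute window.
duration = burst_minutes(60_000, 1000, 1200, 1500)   # 5.0 minutes
```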

In this scenario, the same calculations are used as in the previous slides; however, the host I/O being
generated is around 1100 IOPS, right between our two limits of 1000 and 1200.

As host I/O continues, we see at the 39-minute mark the start of the I/O burst, which in this case is a
10-minute period. The thing to note is that the I/O does not cross 1100 IOPS, since this is all the I/O the
host was attempting to do. Also, since the number of extra IOPS consumed per period is smaller, the
burst will continue to run for a longer period of time before the total extra I/O count is reached.

Let's again look at the calculations for this scenario. Since the host I/O is between the two limits and is
only generating 1100 IOPS, the difference between the Host I/O Limit of 1000 and the actual host I/O is
100 IOPS. So the calculation is based on 100 * 60 = 6,000 extra I/Os per minute.

The total number of extra I/Os calculated from the original numbers is 60,000, and each 60-sec period
consumes 6,000 of them. Effectively, this doubles the “For” time, since the extra I/Os are consumed at
6,000 per minute.

So even though the “For” period was 5 minutes, the number of extra IOPS consumed per period was
smaller, allowing the burst to run for a greater period than the configured time.
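The same arithmetic, worked for this lower-demand scenario (an illustrative calculation; the variable names are assumptions):

```python
# Scenario 2 (illustrative): host drives only 1100 IOPS against a 1000 limit,
# so the 60,000-I/O budget drains more slowly than the configured "For" window.

budget = 60_000                          # extra I/Os from 1000 * 0.20 * 5 * 60
extra_per_minute = (1100 - 1000) * 60    # 100 extra IOPS -> 6,000 extra I/Os per minute
minutes = budget / extra_per_minute      # 10.0 minutes, double the 5-minute "For" setting
```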

This lab demonstrates how to set Host I/O limits to provisioned block storage resources using the
Unisphere user interface.

This demo covers how to set Host I/O limits to provisioned storage and monitor Quality of Service (QoS)
performance in Unisphere.

Click the Launch button to view the video.

This module covered the advanced features for Block storage provisioning in Unity. Topics included the
use of the FAST Cache and FAST VP features for performance and efficiency, applying compression to
Block storage resources, securing data against unauthorized access with Data at Rest Encryption, and
limiting system resource consumption for Block storage access with the Quality of Service feature.

This module focuses on the advanced features for file storage provisioning in Unity. Topics include the
configuration of NAS servers to support multi-tenant environments, the creation of additional static routes
and enabling the IP packet reflect feature, UFS64 File Systems capacity extension and shrink capabilities,
and the use of user/tree quotas to control file storage utilization.

This lesson covers the Unity file server support for multi-tenant environments. The lesson discusses the
use cases for the configuration of multi-tenancy in Unity systems, how tenants are associated with NAS
servers, and how network segments can be isolated with the use of VLAN tagging. The steps to
implement IP multi-tenancy on a Unity system using the Unisphere interface are also covered.

IP multi-tenancy provides the ability to assign isolated, file-based storage partitions to the NAS servers on
a storage processor. IP Multi-tenancy is implemented by associating tenants with a set of VLANs, and
then creating NAS servers for each of these VLANs.

The tenant traffic is separated by the associated VLANs, providing tenant data separation and increased
security. The traffic separation is enforced at the Linux kernel layer.

Each tenant can have its own network namespace: IP addresses and port numbers, VLAN domain,
routing table, firewall, DNS, administrative servers, etc.

Service Providers with multiple customers on a single system must be able to isolate storage resources
for tenants, ensuring that tenant visibility and management are restricted to the assigned resources only.

With the deployment of IP multi-tenancy, Service Providers are able to manage these tenant partitions
through the Unisphere GUI, CLI, and REST API.

Service Providers are also able to provide tenants with their own DNS server or other administrative
servers, allowing each tenant its own authentication and security validation at the protocol layer.

Up to 50 tenants can be configured on a Unity system using the Unisphere GUI. A tenant is created with
a name, a set of VLAN IDs, and a UUID (Universally Unique Identifier) that by default is automatically
created by the system, or can be manually input.

Once a tenant is created, NAS servers can be created for each of the tenant's VLANs. Host configurations
that provide access to hosts, subnets, and netgroups can then be created and associated with the
tenants. These host configurations are used to control the access of NFS clients to shared file systems.
(Access to SMB file systems is controlled through file and directory access permissions set using
Windows directory controls.)

Each tenant can have one or multiple NAS servers; however, each NAS server can be associated with
only one tenant.

When implementing IP multi-tenancy, each tenant is associated with one or more VLANs, and the NAS
server is responsible for interpreting the VLAN tags and processing the packets appropriately. The switch
ports for servers are configured to include VLAN tags on packets sent to the NAS server.

The association of NAS servers with each tenant provides the desired network isolation, giving each
tenant its own IP namespace. NAS server IP addresses can then be configured without concern for
overlapping IP addresses. This is very helpful for the many Service Providers that need to accommodate
tenants' data path IP access requirements in the customer storage network. Sometimes the IPv4
addresses must conform to the tenants' desired IP address schema (which may mean the same IPs for
different tenants) for management of, and access to, storage objects. For example, one tenant can have
10.10.10.10 as a NAS server IP while other tenants have the same address.

This slide shows the process for implementing IP multi-tenancy on a Unity system.

First, it is recommended to create a storage pool for each tenant. Then the tenants should be added to
the system, and each one must be assigned a non-overlapping set of VLANs.

Next, a NAS server should be created for each of the tenants. The tenant must be associated with the
NAS server, and its pool selected to store the NAS server's metadata.

A tenant's VLAN ID is also associated with the NAS server, and the network interfaces for the NAS server
can be created at this point.

After the NAS server is configured, the file systems and shares can be created for each tenant. Windows
host access to the tenant's SMB shares works as usual; host access to the tenant's NFS shares requires
the configuration of a host profile.
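The "non-overlapping set of VLANs" rule from the steps above can be sketched as a small validation. This is an illustrative model only, not the Unity implementation; the function name and data structure are assumptions.

```python
# Illustrative sketch: each tenant owns a set of VLAN IDs, and no VLAN may be
# assigned to more than one tenant (mirrors the non-overlap rule above).

def add_tenant(tenants: dict, name: str, vlan_ids) -> None:
    """Register a tenant, rejecting any VLAN ID already owned by another tenant."""
    taken = {v for ids in tenants.values() for v in ids}
    overlap = taken & set(vlan_ids)
    if overlap:
        raise ValueError(f"VLAN(s) already assigned to another tenant: {sorted(overlap)}")
    tenants[name] = set(vlan_ids)

tenants = {}
add_tenant(tenants, "TenantA", [100, 101])
add_tenant(tenants, "TenantB", [200])        # fine: no overlap with TenantA
# add_tenant(tenants, "TenantC", [101])      # would raise ValueError: 101 is taken
```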

To add a tenant in Unisphere, select the File page under the Storage section and the Tenants tab from
the top menu.

The Add Tenant wizard is launched by clicking the add (+) link. Fill in the information in the Add Tenant
window. The fields with an asterisk cannot be blank.

Provide a name for the tenant.

You can choose whether to have the system automatically generate the UUID or to enter it manually.
Manual UUID input is useful to match the tenant on both the source and destination systems of a
replication deployment. Leave the box unchecked to have the UUID automatically generated by the
system.

You must then select one or more VLANs to associate with the tenant. Click on the add button to launch
the Add VLAN window.

Type the VLAN ID or select it from the drop-down list and click on the Add button. The VLAN ID will be
displayed on the list of VLANs associated with the tenant.

Repeat the operation to associate more than one VLAN with the tenant.

Clicking the Ok button will commit the changes and add the tenant to the Unity system.

The new tenant will be displayed in the list.

From the Tenants page it is possible to add a new tenant to the Unity system, view the properties of an
existing tenant, modify its associated VLANs (change and/or remove VLANs), and delete an existing
tenant. Observe that a tenant can only be deleted if there are no resources (NAS servers or host profiles)
associated with it; the association of a tenant with a NAS server or host profile can only be revoked by
deleting those storage resources.

The list shows the added tenants with their assigned UUIDs and associated VLANs.

To see the details of a tenant, select it from the list and its details will be displayed in the right pane.
VLANs can be added to or removed from a tenant's configuration.

To create a NAS server, select the File page under the Storage section in Unisphere, and the NAS
Servers tab from the top menu. Then the Create a NAS Server wizard can be launched by clicking the
add link.

From the wizard window select a tenant and the VLANs that are associated with the tenant.

The General Settings page allows the selection of the tenant to associate the NAS server with.

If a specific pool was created to store all of the tenant’s NAS server metadata, it can be chosen from
this window as well.

It is also possible to select the SP the NAS server will run on.

NOTE: After the creation of a NAS server, it is not possible to change the tenant that is associated with it.
Also, if a NAS server was created without being associated with a tenant, it is not possible to add a tenant
to the configuration later.

In the Interface step of the wizard it is possible to select one Ethernet port from the associated SP to
create a network interface for the server.

The interface will have an IP address, a subnet, and a gateway. The configuration will also allow the
selection of one VLAN ID used by the chosen tenant.

Some tenants might have more than one associated VLAN. These additional VLANs used by the tenant
can be associated with the NAS server later, from its properties window.

The Summary page will display the configuration of the NAS Server with the associated tenant and VLAN.

Once the NAS server is created, it is possible to verify the tenant associated with it by opening its
properties in Unisphere and checking the General tab. Observe that the tenant cannot be changed once it
has been associated with the server.

From the Network tab it is possible to verify the network interface and VLAN used by the server. The
interface properties can be changed including the SP port used for communication, the network address,
and the tenant VLAN.

After creating a NAS server and the file systems and NFS shares for the tenant, host configurations must
be created for the NFS shares. Host configurations can be created for individual host access, specific
subnets, and netgroups.

There is no need to create host configurations to control host access to SMB file systems. Access to
these storage resources is controlled through file and directory access permissions set using Windows
directory controls. Client authentication for Windows share access is controlled through Active
Directory.

In this new release the wizard used for creating each host configuration allows its association with a
tenant.

After the host configuration is created it must be associated with the file system share to have the host
access level defined.

This lesson covers the advanced static routing feature, how it works compared to the basic routing
functionality, and how static routes are created using the Unisphere GUI. The lesson also discusses the
Unity support for IP Packet Reflect and how to enable the feature for a NAS server in Unisphere.

In Unity we have the capability of creating routing tables for iSCSI, and route configurations for NAS
servers. By default, if a NAS server interface is configured with a default gateway, a default route is
created automatically. If the NAS server has multiple interfaces, one of them can be selected as the
preferred interface.

The Advanced Static Routing feature provides the capability of configuring additional static routes by
allowing each NAS Server interface to have its own routing table. This enables NAS Servers to be
accessible from different isolated IP networks.

In environments that have multiple gateways, where each gateway is used to access different subnets,
routing tables are necessary to tell the system where to route the packets.

Additional static host and network routes can be manually created.

When a new interface is created, a basic routing table is automatically configured with two routes: Local
and Default. All local traffic to the subnet goes through the interface. If a default gateway is provided
during the configuration of the interface, a default route is automatically created as well; all non-local
traffic will use the specified gateway.

For example, if an interface is created with the IP address 192.168.64.5, a /24 subnet mask, and a default
gateway of 192.168.64.254, a basic routing table is automatically created. A Local route is created so that
all traffic to the 192.168.64.0/24 network uses the 192.168.64.5 interface. All other traffic is sent to the
gateway 192.168.64.254, which determines where it goes.
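The two automatically created routes behave like this minimal lookup sketch. It is a conceptual illustration using Python's standard `ipaddress` module with the example addresses above, not Unity code:

```python
import ipaddress

# Interface configured as in the example above.
interface_ip = ipaddress.ip_address("192.168.64.5")
local_net = ipaddress.ip_network("192.168.64.0/24")
default_gw = ipaddress.ip_address("192.168.64.254")

def next_hop(destination: str):
    """Basic two-entry routing table: Local route first, then Default route."""
    dest = ipaddress.ip_address(destination)
    if dest in local_net:
        return ("local", interface_ip)   # deliver directly on the subnet
    return ("gateway", default_gw)       # everything else goes to the gateway
```

A destination on 192.168.64.0/24 resolves to the Local route; any other destination resolves to the default gateway.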

Advanced Static Routing allows for specifying explicit rules, such as establishing a static route to a specific
host, or determining that a particular subnet should be accessed through a specific gateway.

For example, a static host route can be created to determine that the host with IP address 192.168.64.101
should be accessed through the interface with IP address 192.168.64.6. Another route can be created
ruling that subnet 192.168.65.0/24 should be accessed through the gateway 192.168.65.254.
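Conceptually, these explicit rules extend the routing table, and the most specific prefix wins. The sketch below models that selection with Python's `ipaddress` module; the route entries mirror the example rules and are illustrative only:

```python
import ipaddress

# (destination network, next hop) entries mirroring the example rules above.
routes = [
    (ipaddress.ip_network("192.168.64.101/32"), "interface 192.168.64.6"),  # host route
    (ipaddress.ip_network("192.168.65.0/24"), "gateway 192.168.65.254"),    # network route
    (ipaddress.ip_network("0.0.0.0/0"), "gateway 192.168.64.254"),          # default route
]

def lookup(destination: str) -> str:
    """Return the next hop for the most specific matching route."""
    dest = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in routes if dest in net]
    # Longest prefix (largest prefixlen) is the most specific match.
    best = max(matches, key=lambda m: m[0].prefixlen)
    return best[1]
```

A /32 host route beats the /24 network route, which in turn beats the 0.0.0.0/0 default.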

From the Settings window in Unisphere under the Access section the Routing option now displays all the
routes that were created for both the iSCSI and NAS Server interfaces. In Unity GA this section would only
show the routes for iSCSI interfaces.

In this example, there were three interfaces created for a NAS server. A default gateway was not provided
when creating the third interface.

Observe that routes were automatically created for the three interfaces. A Local and a default route were
created for the first interface. A similar configuration with a Local and default route was also provided for
the second interface. However, only a Local route was created for the third interface.

On the top of the table there are options to add a route, remove a selected route, refresh the list, and edit
the properties of a selected route.

To add a new route in the Manage Routes page, click on the “add sign” link to launch the Add route
wizard.

First you must use the drop-down list to choose one of the existing interfaces. Up to 20 routes can be
created for each interface.

Once the interface is chosen the gateway will reflect the default gateway associated with the interface (if
one was provided during its creation).

There are three route types to choose from: the default route, which establishes a static route to a default
gateway; the host route, which establishes a static route to a specific host; and the network route, which
establishes a static route to a particular subnet.

In this example we are creating a static route for the network 192.168.1.0 using the 192.168.64.5 interface.
Observe that a subnet mask must be provided for the network.

When creating a Host route, the IP address of the specific host must be entered in the Destination field.

A default route can be created by providing the default gateway.

Observe that in both cases, the system will calculate the subnet mask.

Routes that were created for NAS server IP interfaces can also be viewed from the NAS server
properties window. From the properties window, navigate to the Network tab and select the
Routes to External Services page.

The page will display all the basic routes (Local and default) automatically generated for the NAS server
interfaces, and any routes added using the System Global Settings window, from the Access > Routing >
Manage Routes path as demonstrated on the previous slides.

The More Actions option at the top of the list offers the possibility to filter the list to show only Host or
Net routes.

The incorporation of IP Packet Reflect functionality for NAS server interfaces ensures that outbound
packets always exit through the same interface that inbound packets entered. Packet Reflect eliminates
the need to determine the route for sending the reply packets.

The feature can be enabled on a NAS server using the Unisphere interface or when creating a NAS server
via UEMCLI. By default, the feature is disabled. The feature starts working immediately after being
enabled, with no need to perform a system reboot.

Packet reflect will only work for the network traffic that is client-initiated. Any communication initiated from
the Unity system still requires consulting the route tables and ARP tables prior to sending outbound
packets.

This diagram depicts a scenario where IP Packet Reflect is enabled on a NAS server with two interfaces
connected to two different subnets.

First, a NAS client sends a request packet to the NAS server.

The ingress request packet's local IP, remote IP, and next-hop MAC address are cached. Unity
leverages this information to send the outbound packet to the proper location.

The NAS server sends the egress reply packet using the cached information and the same interface used
for the request packet.
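The caching step can be sketched conceptually. This is an illustration of the idea, not Unity's implementation: the cache key is the local/remote IP pair, and the cached value tells the reply exactly which next-hop MAC and interface to use, so no route or ARP lookup is needed.

```python
# Conceptual model of IP Packet Reflect (not Unity's actual code).
reflect_cache = {}

def on_ingress(local_ip, remote_ip, next_hop_mac, interface):
    """Cache addressing info from a client-initiated request packet."""
    reflect_cache[(local_ip, remote_ip)] = (next_hop_mac, interface)

def on_reply(local_ip, remote_ip):
    """Build the egress packet from the cache: same MAC, same interface."""
    next_hop_mac, interface = reflect_cache[(local_ip, remote_ip)]
    return {"src": local_ip, "dst": remote_ip,
            "dst_mac": next_hop_mac, "interface": interface}
```

Note that this only helps for client-initiated traffic, which matches the limitation described below: system-initiated connections never populate the cache, so they still consult the routing and ARP tables.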

In Unisphere, Packet Reflect is enabled or disabled from the properties window of a NAS Server. You
must navigate to the Network tab and select the Interfaces and Routes page. At the top of this page
you can verify whether the feature is enabled or disabled.

To enable the feature, click the pencil icon beside the feature status.

On the Change Packet Reflect Settings window select Enabled and click OK to close the window and
immediately commit the change.

This lesson covers the manual and automatic extension of the capacity of thin UFS64 File Systems. The
lesson also discusses the UFS64 File Systems shrink capabilities.

In Unity systems, the UFS64 architecture allows users to extend thin file systems. Performing UFS64 file
system extend operations is transparent to the client, meaning the array can still service I/O to a client
during extend operations.

The capacity of thin file systems can be extended with changes to both the advertised capacity and the
actual allocation.

Thin file systems can also be automatically extended based on the ratio of used-to-allocated space. This
operation happens without user intervention and does not change the advertised capacity.

For thin-provisioned file systems, the manual extend operation increases visible or virtual size without
increasing the actual size allocated to the file system from the storage pool.

Thin-provisioned file systems are automatically extended by the system when certain conditions are met.

A thin-provisioned file based storage resource may appear full when data copied or written to the storage
resource is greater than the space available at that time. When this occurs, the system begins to
automatically extend the storage space and accommodate the write operation. As long as there is enough
extension space available, this operation will complete successfully.

The system automatically allocates space for a thin UFS64 file system as space is consumed. Auto-
extension happens when the space consumption threshold is reached. The threshold is the percentage of
used space relative to the file system's total allocated space (the system default value is 75%). The
allocated space cannot exceed the file system's visible size. Only the allocated space increases, not the
file system's provisioned size; the file system cannot auto-extend past the provisioned size.
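The threshold arithmetic can be sketched as follows. The 75% default and the provisioned-size cap come from the description above; the 10 GB growth step is a made-up value for illustration only:

```python
# Auto-extend decision for a thin file system (sizes in GB).
# threshold and the provisioned-size cap follow the description above;
# the growth step is an assumed value for the sketch.
def auto_extend(used, allocated, provisioned, threshold=0.75, step=10):
    """Return the new allocated size after an auto-extend check."""
    if used / allocated >= threshold:
        return min(allocated + step, provisioned)  # never past provisioned size
    return allocated
```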

In Unity, the UFS64 architecture enables the reduction of the space a file system uses from a storage pool,
allowing the underlying released storage to be reclaimed. The storage space reclamation is triggered by
UFS64 file system shrink operations.

These shrink operations can be manually initiated by the user for both thin and thick file systems.
Automatic shrink operations are initiated only on thin file systems when the Unity system identifies
allocated but unused storage space that can be reclaimed back to the storage pool.

With thin-provisioned UFS64 file systems, a storage administrator can manually shrink the size of the file
system into, or within, the allocated space. In this example, a 1 TB thin-provisioned UFS64 file system with
450 GB of allocated space receives a request to shrink by 700 GB.

The file system has 250 GB of Used Space. The system performs any evacuation that is necessary in
order to allow the shrinking process on the contiguous free space.

In the example, the free space in the file system is reduced by 700 GB. The total storage pool free space
is increased, and the file system's Allocated Space and Pool Size Used are decreased. The allocated space
after the shrink drops below the original allocated space, allowing the storage pool to reclaim the space.

Observe that the only space reclaimed is the portion of the shrink that was within the original allocated
space of 450 GB. That is because the remaining 550 GB of the original thin file system was virtual
space advertised to the client.
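The arithmetic of the example works out as in this sketch (sizes in GB; the function is illustrative, not a Unity calculation): of the 700 GB shrink, 550 GB was virtual, so only 150 GB is returned to the pool.

```python
# Worked arithmetic for the thin-file-system shrink example above (sizes in GB).
def shrink_reclaim(provisioned, allocated, shrink_by):
    """Return (new_provisioned, reclaimed_to_pool, new_allocated)."""
    virtual = provisioned - allocated          # advertised but never allocated
    reclaimed = max(0, shrink_by - virtual)    # only allocated space is reclaimed
    return provisioned - shrink_by, reclaimed, allocated - reclaimed
```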

Thin-provisioned file systems are automatically shrunk by the system when certain conditions are met.
Automatic shrink improves space allocation by releasing any unused space back to the storage pool. The
file system is automatically shrunk when the used space is less than 70% (system default value) of the
allocated space after a period of 7.5 hours. The file system provisioned size does not shrink, only the
allocated space decreases.
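As a sketch, the auto-shrink condition simply combines the 70% default threshold with the 7.5-hour wait described above (the function is an illustration, not Unity logic):

```python
# Auto-shrink check: used space below 70% of allocated space for 7.5 hours.
def should_auto_shrink(used, allocated, hours_below, threshold=0.70, wait=7.5):
    """Return True if the allocated space should be shrunk back to the pool."""
    return used / allocated < threshold and hours_below >= wait
```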

In order to change the size of a file system, you must select the File page under the Storage section in
Unisphere, and the File Systems tab from the top menu.

The properties of the file system can be launched by double-clicking the file system in the list or by
clicking the pencil icon from the menu at the top of the File Systems list.

From the General tab, the size of the file system can be extended by increasing the Size field. To shrink
the file system, decrease the Size field. The Apply button must be clicked in order to commit the
changes.

The change to the File System configuration (size and percentage of allocation space) will be displayed in
the list.

In this example the Thin File System has its size manually extended by 10GB.

This lab demonstrates how to manually extend the capacity of a UFS64 file system using the Unisphere
user interface, and verify the new advertised size and the space allocated from the pool.

This lesson covers the limitation of disk space consumption in Unity by applying File System quotas.

Unity supports File System Quotas, which enable a storage administrator to track and/or limit
usage of a file system.

Limiting usage is not the only application of quotas. The quota tracking capability can be useful for tracking
and reporting usage by simply setting the quota limits to zero.

Quota limits can be designated for users or a directory tree. Limits are stored in quota records for each
user and quota tree. Limits are also stored for users within a quota tree.

Usage is determined by “File Size” (the default) or “Blocks”.

The File Size quota policy calculates the disk usage based on logical file sizes in 1K increments.

The Blocks quota policy calculates disk usage in 8 KB file system blocks.

Hard and soft limits can be set on the amount of disk space allowed to be used.
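The two accounting policies can be compared with a small sketch. The rounding behavior follows the description above (1 KB increments of logical file size versus 8 KB file system blocks); byte-level details are simplified for illustration:

```python
import math

def usage_file_size(file_sizes_bytes):
    """Disk usage in bytes under the File Size policy (1 KB increments)."""
    return sum(math.ceil(size / 1024) * 1024 for size in file_sizes_bytes)

def usage_blocks(file_sizes_bytes, block=8192):
    """Disk usage in bytes under the Blocks policy (8 KB blocks)."""
    return sum(math.ceil(size / block) * block for size in file_sizes_bytes)
```

For many small files, the Blocks policy charges more than the File Size policy, since each file occupies at least one 8 KB block.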

EMC recommends that quotas are configured before the Unity system becomes active in a production
environment. Quotas can also be configured after a file system is created.

Default quota settings can be configured for an environment where the same set of limits is applied to
many users. The following parameters can be configured at the Manage Quota Settings window: the
quota policy, and the default user quota limits (soft limit, hard limit, and grace period).

The soft limit is a capacity threshold above which a countdown timer will begin. While the soft limit may be
exceeded, this timer, or grace period, will continue to count down as long as the soft limit is exceeded. If
the soft limit remains exceeded long enough for the grace period to expire, no new data may be added to
the particular directory or by the particular user associated with the quota. However if sufficient data is
removed from the file system or directory to reduce the utilization below the soft limit before the grace
period expires, access will be allowed to continue as usual.

A hard limit is also set for each quota configured. Upon reaching a hard limit, no new data will be able to
be added to the file system or directory. When this happens, the quota must be increased or data must be
removed from the file system before additional data can be added.

Unity File System Quotas have the capability of tracking and reporting usage of a file system.

In this example, a user quota was configured on a file system for a particular user. The Soft Limit is 20GB,
the Grace Period is 1 day and the Hard Limit is 25GB. The user copies 16GB of files to the file system.
Since the capacity is below the user’s quota, the user can still add more files to the file system.

After adding more files to the file system the user crosses the 20GB soft limit.

The storage administrator receives an alert in Unisphere stating that the soft quota for this user has been
crossed.

The Grace Period of 1 day begins to count down. At this point the user is still able to add additional data to
the file system.

However, the user must remove data prior to the expiration of the Grace Period for the usage to fall below
the soft limit.

If the Grace Period is reached and the usage is still over the soft limit, then an error message is sent to the
client. The storage administrator also receives a notification of the event.

The transfer of additional data to the file system is interrupted until usage drops below the allowed soft
limit.

If the Grace Period has not yet expired and the user continues writing to the file system despite passing
the soft limit, eventually the Hard Limit is reached. When that happens, the user will no longer be able to
add any data to the file system, and the storage administrator will receive a Unisphere alert notification
informing that the user has reached the Hard Limit.
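The enforcement behavior walked through in this example can be condensed into one hedged sketch. The 20 GB soft limit, 25 GB hard limit, and 1-day grace period are taken from the example; the function itself is illustrative, not Unity logic:

```python
# Quota enforcement from the walkthrough: writes allowed below the soft
# limit, allowed during the grace period between soft and hard limits,
# denied once grace expires or the hard limit is reached.
# Sizes in GB, grace period in days.
def can_write(used, soft=20, hard=25, grace_days=1, days_over_soft=0):
    if used >= hard:
        return False                       # hard limit reached: writes denied
    if used < soft:
        return True                        # under soft limit: writes allowed
    return days_over_soft < grace_days     # over soft: only within grace period
```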

This lab demonstrates the configuration of limits for specific users and directories on a file system.

This lab demonstrates how to manually shrink a file system and reclaim space back to the storage pool.

This module covered the advanced features for file storage provisioning in Unity. Topics include the
configuration of NAS servers to support multi-tenant environments, the creation of additional static routes
and enabling the IP packet reflect feature, UFS64 File Systems capacity extension and shrink capabilities,
and the use of user/tree quotas to control file storage utilization.

This module focuses on the Snapshots feature of Unity for local data protection. It provides an overview of
the Snapshots feature, examines its architecture, use cases and capabilities. LUN and file system
Snapshot creation is covered and their specific operations are performed.

This lesson covers an overview of the Snapshots feature of Unity. It provides information on what a
Snapshot is, what storage resources can be snapped, how Snapshots are used, the feature architecture
and its capabilities on the various Unity models.

The Unity Snapshots feature is enabled with the Local Copies license for Unity, which enables space-
efficient point-in-time snapshots of storage resources for Block, File, and VMware. The snap images can
be read-only or read/write and used in a variety of ways. They can provide an effective form of local data
protection, restoring the production data to a known point-in-time data state should data be mistakenly
corrupted or deleted by users. Images can be accessed by hosts for data backup operations, data
mining operations, application testing, or decision analysis tasks. In the upcoming slides we will dive into
the details of its architecture, capabilities, benefits, and specifics of its operations and uses.

Note: Snapshots are not full copies of the original data. It is recommended that you do not rely on
snapshots as mirrors, disaster recovery, or high-availability tools. Because snapshots of storage
resources are partially derived from the real-time data in the relevant storage resource, snapshots can
become inaccessible (not readable) if the primary storage becomes inaccessible.

Snapshots of storage resources (block LUNs, file systems, and VMware Datastores) are architected using
Redirect on Write technology. This architecture avoids a performance penalty that Copy on First Write
technology has when existing data is changed. With Redirect on Write technology, when a snapshot is
taken, the existing data on the storage object remains in place and provides the snapshot point-in-time
view of the data. Production data access also uses this view to read existing data. Another benefit of
Redirect on Write is that no storage resource is needed to create a snapshot. With Copy on First Write
technology, a storage resource had to be reserved to hold original data that changed to preserve the point-
in-time view. As its name implies, with Redirect on Write technology, when writes are made to the storage
object, those writes are redirected to a new location allocated as needed from the parent pool in 256
megabyte stripes. New writes are stored in 8 kilobyte chunks on the newly allocated stripe. Reads of the
new writes are serviced from this new location as well.

If the snapshot is writable, any writes are handled in a similar manner; stripe space is allocated from the
parent pool and the writes are redirected in 8 kilobyte chunks to the new space. Reads of newly written
data are also serviced from the new space.

Storage space is needed in the pool to support snapshots as stripes are allocated for redirected writes.
Because of the on-demand stripe allocation from the pool, snapped thick file systems will transition to thin
file system performance characteristics.

A LUN Consistency Group (CG) is a grouping of multiple LUNs to form a single instance of LUN storage.
They are primarily designed for a host application that will access multiple LUNs, like a database
application. Snapshots provide a mechanism for capturing a snapshot of the multiple LUNs within a
consistency group. When a Consistency Group Snapshot is taken, the system will complete any
outstanding IO to the group of LUNs, then suspend writes to the LUNs until the snap operation completes.
This allows the snapshot to capture a write-order consistent image of the group of LUNs.

With Unity Snapshots, it is possible to create multiple snapshots of a LUN to capture multiple point-in-time
data states. In this example the 3 o’clock and the 4 o’clock snapshots are two different “child” snapshots of
a common parent; meaning they capture two different data states of a common storage object.

It is also possible to copy a snapshot. In this example, the 4 o’clock snapshot is copied and other than
having a unique name, the copy will be indistinguishable from the source snapshot and both capture
identical data states.

Multiple hosts can be attached to any specific LUN snapshot, or to multiple snapshots within the tree. When
a host is attached to a snapshot for access to its data, the attach can be defined for read-only access or
read/write access. In the example, a host attaches to the 3 o’clock snapshot for read-only access, and the
snapshot will remain unmodified from its original snapped data state. A different host is attached to the
4 o’clock snapshot copy for read/write access. The system will optionally create a copy of the snapshot
to preserve its original data state. When the snap is read/write attached, its data state is marked as
modified from its source.

It is also possible to nest copied read/write attached snapshots that form a hierarchy of snapshots to a
maximum of 10 levels deep.

As with LUN snapshots, it is possible to create multiple snapshots of a file system to capture multiple
point-in-time data states. The 3 o’clock and the 4 o’clock snapshots in this example are two different
“child” snapshots of the same file system parent and capture its two different point-in-time data states.

Snapshots of a file system can be created either as read-only or read/write and are accessed in different
manners which will be covered later.

Copies of snapshots are always created as read/write snapshots. The read/write snapshots can be shared
by creating an NFS export or SMB share to them. When shared, they are marked as modified to indicate
their data state is different from the parent object.

It is also possible to nest copied and shared snapshots that form a hierarchy of snapshots to a maximum
of 10 levels deep.

This lesson covers the creation of block and file Snapshots. It also details using a schedule for creating
Snapshots. The operations that can be performed on Snapshots are also shown.

Snapshots are created on storage resources for Block, File and VMware. And all are done in a similar
manner.

For Block, the snapshot is created on a LUN or a group of LUNs in the case of a Consistency Group. For
File, the snapshot is configured on a file system. For VMware the storage resource is either going to be a
LUN for a VMFS datastore or a file system for an NFS datastore. When creating each of these storage
resources, the Unity system provides a wizard for their creation. Each wizard provides an option to
automatically create snapshots on the storage resource. Each resource snapshot creation is nearly
identical to the other resources.

For storage resources already created, snapshots can be manually created for them from their Properties
page. As with the wizard, the snapshot creation from the storage resource Properties page is nearly
identical to the other resources. The following few slides will show snapshot creation within the Block
storage LUN creation wizard and the File storage file system creation wizard. It will also show creating
manual snapshots from the LUN and file system Properties pages.

Video demonstrations will be provided showing all forms of storage resource snapshot creation.

The operations that can be performed on a snapshot differ based on the type of storage resource the
snapshot is on. Operations on LUN-based snapshots are Restore, Attach to host, Detach from host, and
Copy. Operations on file system-based snapshots are Restore and Copy.

The Snapshot Restore operation will roll back the storage resource to the point-in-time data state that the
snapshot captures. In this Restore example series of slides, a LUN is at a 5 o’clock data state and is going
to be Restored from a snapshot with a 4 o’clock data state.

Before performing a restore operation, detach hosts attached to any of the LUN snapshots. Also ensure
that all hosts have completed all read and write operations to the LUN you want to restore.

Finally, disconnect any host accessing the LUN. This may require disabling the host connection on the
host side.

Now the Restore operation can be performed. From the 4 o’clock snapshot select the Restore operation.
The system will automatically create a snapshot of the current 5 o’clock data state of the LUN to capture
its current data state before the restoration operation begins.

The LUN is restored to the 4 o’clock data state of the snapshot.

The hosts can now be reconnected to the resources they were connected to prior to the restore and
resume normal operations.

The Snapshot Restore operation for a file system is similar to the Restore operation of a LUN. It will roll
back the file system to a point-in-time data state that a read-only or read/write snapshot captures. In this
Restore example, a file system is at a 5 o’clock data state and is going to be Restored from a read-only
snapshot with a 4 o’clock data state.

Before performing a restore operation, disconnect clients from any of the file system snapshots. Also
quiesce IO to the file system being restored.

Now the Restore operation can be performed. From the 4 o’clock snapshot select the Restore operation.
The system will automatically create a snapshot of the current 5 o’clock data state of the file system to
capture its current data state before the restoration operation begins.

The file system is restored to the 4 o’clock data state of the snapshot.

The connections and IO to the resources can now be resumed for normal operations.

The Snapshot Copy operation will make a copy of an existing snapshot that is either attached or detached
from a host. The copy will capture the existing data state of the snapshot it copies. In this Copy example, a
copy of an existing 4 o’clock snapshot is being made. The copy inherits the parent snapshot data state of
4 o’clock and its retention policy.

The Snapshot Copy operation will make a copy of an existing file system snapshot that is either read-only
or read/write, shared or unshared. The copy will capture the existing data state of the snapshot it copies.
In this Copy example, a copy of an existing 4 o’clock read-only snapshot is being made. The copy will be
read/write and inherits the parent snapshot data state of 4 o’clock and its retention policy.

The Snapshot Attach to Host operation will attach a connected host to a LUN snapshot. In this Attach
example, a secondary host is going to Attach to the 3 o’clock snapshot of the LUN.

Before performing an Attach to host operation, the host being attached will need to have connectivity to
the Unity array.

Now the Attach operation can be performed. The first step is to select a snapshot to attach to. The next
step is to select an Access Type, either Read-Only or Read/Write. Then the host or hosts are selected to
be attached. Next, the system will optionally create a copy of the snapshot if a Read/Write Access Type
was selected, thus preserving the data state of the snapshot prior to the attach.
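The Attach to Host steps above can be sketched as a small decision flow. This is an illustrative model only, assuming hypothetical `Snapshot` and `attach_to_host` names; it is not the Unity REST or CLI API.

```python
# Hypothetical sketch of the LUN snapshot "Attach to Host" flow: pick an
# access type, optionally copy the snapshot first for Read/Write access,
# then attach the host. Names are illustrative, not a real Unity API.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Snapshot:
    name: str
    attached_hosts: List[str] = field(default_factory=list)
    modified: bool = False

def attach_to_host(snap: Snapshot, host: str, access: str,
                   copy_first: bool = True) -> Optional[Snapshot]:
    """Attach a host to a snapshot; optionally copy it first for R/W access."""
    if access not in ("Read-Only", "Read/Write"):
        raise ValueError("access must be Read-Only or Read/Write")
    copy = None
    if access == "Read/Write" and copy_first:
        # Preserve the original point-in-time before writes can modify it.
        copy = Snapshot(name=snap.name + "-copy")
    snap.attached_hosts.append(host)
    if access == "Read/Write":
        snap.modified = True   # a R/W attach marks the snapshot as modified
    return copy
```

A read-only attach returns no copy and leaves the snapshot unmodified, mirroring the behavior described in the slides.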

Finally, the selected host is attached to the snapshot with the Access Type selected. The read/write
attached snapshot is marked as modified from its 3 o’clock data state.

The Snapshot Detach operation will detach a connected host from a LUN snapshot. In this Detach
example, a secondary host is going to Detach from the 3 o’clock snapshot of the LUN.

Before performing a Detach operation, allow any outstanding read/write operations of the snapshot
attached host to complete.

Now the Detach operation can be performed. From the 3 o’clock snapshot, select the Detach from host
operation. The secondary host is detached from the 3 o’clock snapshot of the LUN.

This lesson covers the use of Snapshots. It details accessing block and file snapshots. It also shows how
the Checkpoint Virtual File System is accessed by Windows and NFS users to restore corrupted or
deleted files.

The process of accessing a LUN snapshot requires performing tasks on the storage system and on the
host that will access the snapshot.

A host has to have connectivity to the storage, either via fibre channel or iSCSI, and be registered. Next,
from the Snapshots tab, a snapshot is selected and the snapshot operation Attach to host needs to be
performed.

Now tasks from the host will need to be done. The host will need to discover the disk device that the
snapshot presents to it. Once discovered, then the host can access the snapshot as a disk device.

A short video follows that demonstrates a host accessing a LUN snapshot.

The process of accessing a file system read/write snapshot requires performing tasks on the storage
system and on the client that will access the snapshot.

On the storage system an NFS and/or SMB share will have to be configured on the read/write snapshot of
the file system. This task is done from their respective pages.

Now tasks from the client will need to be done. The client will need to be connected to the NFS/SMB share
of the snapshot. Once connected, then the client can access the snapshot shared resource.

A short video follows that demonstrates client access to a file system snapshot.

The process of accessing a file system read-only snapshot is very different from accessing a read/write
snapshot. The read-only file system snapshot is exposed to the client through a checkpoint virtual file
system (CVFS) mechanism that Snapshots provides. The read-only snapshot access does not require
performing any tasks on the storage system. All the tasks are performed on the client through its access
directly to the file system. The tasks for NFS clients are slightly different than the tasks for SMB clients.

The first task for an NFS client is to connect to an NFS share on the file system. Access to the read-only
snapshot is established by accessing the snapshot’s hidden .ckpt data path. This path will redirect the
client to the point-in-time view that the read-only snapshot captures.

Similarly, the first task for an SMB client is to connect to an SMB share on the file system. Access to the
read-only snapshot is established by the SMB client accessing the SMB share’s Previous Versions tab.
This will redirect the client to the point-in-time view that the read-only snapshot captures.

Because the read-only snapshot is exposed to the clients through the CVFS mechanism, the clients are
able to directly recover data from the snapshot without any administrator intervention. For example, if a
user either corrupted or deleted a file by mistake, that user could directly access the read-only snapshot
and get an earlier version of the file from the snapshot and copy it to the file system to recover from.
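A client-side self-service restore through the hidden `.ckpt` path can be sketched as below. The directory layout (`<mount>/.ckpt/<snapshot-name>/<file>`) and the function name are assumptions for illustration; on a real NFS client the `.ckpt` path sits on the mounted share.

```python
# Minimal sketch of a user restoring a file from a read-only snapshot
# exposed through the hidden .ckpt path (CVFS) -- no administrator
# intervention required. Paths and names are illustrative assumptions.
import shutil
from pathlib import Path

def restore_from_checkpoint(fs_root: str, snap_name: str, rel_path: str) -> Path:
    """Copy an earlier version of a file from the snapshot view back into
    the live file system."""
    src = Path(fs_root) / ".ckpt" / snap_name / rel_path
    dst = Path(fs_root) / rel_path
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)   # preserves timestamps along with contents
    return dst
```

SMB clients achieve the same result through the share's Previous Versions tab rather than a visible path.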

A short video follows that demonstrates client access to a file system read-only snapshot.

This lab covers LUN snapshots. You will manually create a snapshot of an existing LUN and create a
Snapshot Schedule. You will then attach a host with read/write access to a LUN snapshot for data access.
Finally, you will perform a Snapshot Restore operation to a LUN.

This lab covers file system snapshots. You will enable a snapshot schedule during the creation of a file
system. You will also create read-only and read/write snapshots of existing file systems. Then you will
configure access to a read/write snapshot and perform write operations to it. Finally, you will access
read-only snapshots from an SMB Windows client and from an NFS Linux client.

This module covered the Snapshots local data protection feature of Unity. It provided an overview of the
Snapshots feature, its architecture and capabilities. The creation of Snapshots for Unity storage resources
was detailed and demonstrated. Operations for Snapshots were also presented.

This module focuses on the Replication feature of Unity, which provides data protection. It provides an
overview of the Replication feature and examines its architecture, use cases, and capabilities. The creation
of LUN and file system replications is covered and their specific operations are performed.

In this lesson, an overview of Unity Replication is provided. The architectures of Asynchronous and
Synchronous Replications are discussed and the benefits and capabilities are listed for Unity Replication.

Unity Replication, a feature enabled with the Replication license, enables replication of Unity storage
resources; block storage LUNs, VMware Datastores, file storage file systems, VMware NFS Datastores
and NAS Servers. Remote replication, shown here, provides storage resource replication between Unity
systems for storage resources. The file-based storage resources of VMware NFS Datastores, NAS
Servers, and file systems are replicated to a remote Unity system asynchronously. The block-based
storage resources of VMware VMFS datastores, LUNs and LUN Consistency Groups are replicated to a
remote Unity system in either an asynchronous or synchronous manner.

Remote Replication is one method that enables data centers to avoid disruptions in operations. In a
disaster recovery scenario, if the source site becomes unavailable, the replicated data will still be available
for access from the remote site. Remote Replication uses a Recovery Point Objective (RPO), an amount of
data measured in units of time, to perform automatic data synchronization between the source and remote
systems. The RPO for asynchronous replication is configurable. The RPO for synchronous
replication is set to zero. The RPO value represents the acceptable amount of data that may be lost in a
disaster situation. The remote data will be consistent to the configured RPO value. The minimum and
maximum RPO values are 5 minutes and 1440 minutes (24 hours).
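The RPO rules just stated can be captured in a few lines. This is a sketch of the stated bounds only; the function name is illustrative and not a Unity API.

```python
# Sketch of the RPO rules above: asynchronous RPO is configurable between
# 5 and 1440 minutes (24 hours); synchronous RPO is fixed at zero.
def validate_rpo(mode: str, rpo_minutes: int = 0) -> int:
    if mode == "synchronous":
        return 0                      # sync replication always has RPO 0
    if mode == "asynchronous":
        if not 5 <= rpo_minutes <= 1440:
            raise ValueError("async RPO must be between 5 and 1440 minutes")
        return rpo_minutes
    raise ValueError("unknown replication mode: " + mode)
```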

Remote Replication is also beneficial for keeping data available during planned downtime scenarios. If a
production site has to be brought down for maintenance or testing the replica data can be made available
for access from the remote site. In a planned downtime situation, the remote data is synchronized to the
source before being made available and there is no data loss.

With Unity Replication, it is possible to asynchronously replicate storage resources locally within the same
Unity system as shown here. The storage resources are replicated from one storage Pool to another
within the same Unity system. The feature can be helpful should a storage resource need to be moved for
Pool capacity reasons or for changing the type of storage the resource uses. For example, a resource
could be moved from a Pool having performance disks to a Pool having capacity disks for archival
reasons.

The main focus of this training is with remote replication since it has more elements to configure, create
and manage.

The architecture for Unity Asynchronous Remote Replication is shown here. The graphic illustrates a
remote asynchronous replication session for a file system. The architecture is the same for replicating any
other file or block-based storage resource asynchronously.

Fundamental to remote replication is connectivity and communication between the source and destination
systems. A data connection is required to carry the replicated data and it is formed from Replication
Interfaces. They are IP-based connections established on each system. A communication channel is also
required to manage the replication session. The management channel is established on Replication
Connections. It defines the management interfaces and credentials for the source and destination
systems.

Asynchronous Replication architecture utilizes Unified Snapshots. The system creates two snapshots for
the source storage resource and two corresponding snapshots on the destination storage resource. These
system created snapshots cannot be modified. Based on the replication RPO value the source snapshots
are updated in an alternating fashion to capture the source data state differences, known as deltas. The
data delta for the RPO timeframe is replicated to the destination. After the data is replicated the
corresponding destination snapshot is updated. The two corresponding snapshots capture a common data
state, known as a common base. The common base can be used to restart a stopped or interrupted
replication session.

The architecture for Unity Asynchronous Local Replication is shown here. The difference between the
local and remote architecture seen previously is that the local architecture does not require the
communications to a remote peer. The management and data replication paths are all internal within the
single system. Otherwise, local replication uses Snapshots in the same manner. Local replication uses
source and destination objects on the two different Pools similar to how remote replication uses source
and destination on two different systems.

The asynchronous replication process is the same for local and remote replication. Shown here is remote
replication. The asynchronous replication of a storage resource has an initial process followed by an
ongoing synchronization process. The starting point is a data populated storage resource on the source
system that is available to production and has a constantly changing data state.

The first step of the initial process for asynchronous replication is to create a storage resource of the
exact same capacity on the destination system. The storage resource is created automatically by the
system and contains no data.

In the next step, corresponding snapshot pairs are created automatically on the source and destination
systems. They capture point-in-time data states of their storage resource.

The first snapshot on the source system is used to perform an initial copy of its point-in-time data state to
the destination storage resource. This initial copy can take a significant amount of time if the source
storage resource contains a large amount of existing data.

Once the initial copy is complete, the first snapshot on the destination system is updated. The data states
captured on the first snapshots are now identical and form a common base.

Because the source storage resource is constantly changing, its data state is no longer consistent with the
first snapshot point-in-time.

In the synchronization process, the second snapshot on the source system is updated, capturing the
current data state of the source.

A data difference, or delta is calculated from the two source system snapshots and a differential copy is
made from the second snapshot to the destination storage resource.

After the differential copy is complete, the second snapshot on the destination system is updated to form a
common base with its corresponding source system snapshot.

The cycle of differential copies continues for the session by alternating between the first and second
snapshot pairs based on the RPO value. The first source snapshot is updated, the data delta is calculated
and copied to the destination, the first destination snapshot is updated forming a new common base. The
cycle repeats using the second snapshot pair.

The architecture for Unity Synchronous Replication is shown here. The graphic illustrates a remote
synchronous replication session for a LUN. The architecture is the same for replicating any other block-
based storage resource synchronously.

The same fundamental remote replication connectivity and communication between the source and
destination systems seen earlier for asynchronous remote replication are also required for synchronous
replication. A data connection to carry the replicated data is required and is formed using fibre channel
connections between the replicating systems. A communication channel is also required to manage the
replication session. For synchronous replication, part of the management is provided using Replication
Interfaces that are IP based interfaces for SPA and SPB using specific Sync Replication Management
Ports. The management communication between the replicating systems is established on a Replication
Connection. It defines the management interfaces and credentials for the source and destination systems.

Synchronous Replication architecture utilizes Write Intent Logs (WIL) on each of the systems involved in
the replication. These are internal LUNs created automatically by each system. There is a WIL for SPA
and one for SPB on each system. They hold fracture logs that are designed to track changes to the source
LUN should the destination LUN become unreachable. When the destination becomes reachable again it
will automatically recover synchronization to the source using the fracture log, thus avoiding the need for a
full synchronization.

The synchronous replication of a storage resource has an initial process followed by an ongoing
synchronization process. The starting point is a data populated storage resource on the source system
that is available to production and has a constantly changing data state.

The first step of the initial process for synchronous replication is to create a storage resource of the exact
same capacity on the destination system. The storage resource is created automatically by the system
and contains no data.

In the next step, SPA and SPB Write Intent Logs are automatically created on the source and destination
systems.

An initial synchronization of the source data is then performed. It copies all of the existing data from the
source to the destination. The source resource is available to production during the initial synchronization
but the destination will be unusable until the synchronization completes.

Once the initial synchronization is complete, the process to maintain synchronization begins. When a
primary host writes to the source the system delays the write acknowledgement back to the host. The
write is replicated to the destination system. Once the destination system has verified the integrity of the
data write it sends an acknowledgement back to the source system. At that point, the source system
sends the acknowledgement of the write back to the host. The data state is synchronized between the
source and destination. Should recovery be needed from the destination, its RPO would be zero.

Should the destination become unreachable, the replication session will be out of synchronization. The
source Write Intent Log for the SP owning the resource will track the changes. When the destination
becomes available the systems will automatically recover synchronization using the WIL.
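The synchronous write path and fracture-log recovery can be sketched as follows. The class and attribute names are illustrative assumptions; a real Write Intent Log tracks modified extents per SP rather than storing whole writes.

```python
# Sketch of the synchronous write path described above: the host ack is
# delayed until the destination confirms the write; if the destination is
# unreachable, changes are tracked in a fracture log (standing in for the
# Write Intent Log) and replayed on recovery, avoiding a full sync.
class SyncSession:
    def __init__(self):
        self.source, self.dest = {}, {}
        self.dest_reachable = True
        self.fracture_log = {}          # stand-in for the WIL fracture log

    def host_write(self, block, data) -> str:
        self.source[block] = data
        if self.dest_reachable:
            self.dest[block] = data     # replicate before acknowledging
            return "ack"                # host ack only after dest confirms
        self.fracture_log[block] = data # out of sync: track the change
        return "ack"                    # source continues serving I/O

    def recover(self) -> None:
        """Destination reachable again: replay only the tracked changes."""
        self.dest_reachable = True
        self.dest.update(self.fracture_log)
        self.fracture_log.clear()
```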

Synchronous replications will have states for describing the session and its associated synchronization.

An Active session state indicates normal operations and the source and destination are In Sync.

A Paused session state indicates the replication has been stopped and will have a Sync State of
Consistent indicating the WIL will be used to perform synchronization of the destination.

A Failed Over session will have one of two Sync States: Inconsistent, if the Sync State was neither In Sync
nor Consistent prior to the Failover; or Out of Sync, if the Sync State was In Sync prior to the Failover.

A Lost Sync Communications session state indicates the destination is unreachable. It can have any of the
following Sync States: Out of Sync, Consistent or Inconsistent.

A Sync State of Syncing indicates a transition from Out of Sync, Consistent or Inconsistent due to the
session changing to an Active state from one of its other states; for example if the system has been
recovered from the Lost Sync Communications state.
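The session/sync-state combinations above can be summarized as a lookup table. This is a simplification of the full state machine (Syncing is shown under Active, since it occurs as a session transitions back to Active), offered as a quick-reference sketch rather than an exhaustive model.

```python
# Session state -> sync states described in the slides (simplified).
ALLOWED_SYNC_STATES = {
    "Active": {"In Sync", "Syncing"},
    "Paused": {"Consistent"},
    "Failed Over": {"Inconsistent", "Out of Sync"},
    "Lost Sync Communications": {"Out of Sync", "Consistent", "Inconsistent"},
}

def is_valid(session_state: str, sync_state: str) -> bool:
    """True if the slides describe this session/sync-state combination."""
    return sync_state in ALLOWED_SYNC_STATES.get(session_state, set())
```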

This lesson focuses on creating the replication communications between the two replicating systems. It
also details the process steps for creating Asynchronous and Synchronous Replication sessions.

Replication sessions are created on storage resources for Block, File, and VMware, and all are created in a
similar manner.

For Block, the replication is created on a LUN or a group of LUNs in the case of a Consistency Group. For
File, the replication is configured on a NAS server and file systems. For VMware the storage resource is
either going to be a LUN-based VMFS datastore or a file system-based NFS datastore. When creating
each of these storage resources, the Unity system provides a wizard for their creation. Each wizard
provides an option to automatically create the replication on the resource. Each resource replication
creation is nearly identical to the other resources.

For resources already created, replications can be created manually from their Properties page. As with
the wizard, the replication creation from the resource Properties page is nearly identical to the other
resources. The following few slides will show replication creation within the Block storage LUN creation
wizard and the File storage file system creation wizard. It will also show replications created manually
from the LUN and file system Properties pages.

Video demonstrations will also be provided for the resource replication creation.

Because file system access is dependent on a NAS Server, to remotely replicate a file system, the
associated NAS Server will need to be replicated first. When a NAS Server is replicated, any file systems
associated with the NAS server will also be replicated as separate replication sessions; a session for the
NAS Server and a session for each associated file system.

Before you create a remote replication session, you first need to configure active communications
channels between the two systems. This involves first creating Replication Interfaces on the source and
destination systems. Then a Replication Connection is created between the systems.

For Asynchronous Replication, the Replication Interfaces are dedicated IP-based connections between
the systems that will carry the replicated data. The interfaces are defined on each SP using IPv4 or IPv6
addressing and will establish the required network connectivity between the corresponding SPs of the
source and destination systems. The Replication Connection pairs together the Replication Interfaces
between the source and destination systems. It also defines the replication mode between the systems;
Asynchronous, Synchronous or both. The connection is also configured with the management interface
and credentials for both of the replicating systems.

The active communication channels required for Synchronous Replication are significantly different from
the previously discussed communications for Asynchronous Replication. The first communications
configuration required for Synchronous Replication are the Fibre Channel connections between the
corresponding SPs of the source and destination systems. The Fibre Channel connectivity can be zoned
fabric or direct connections. This connectivity will carry the replicated data between the systems. Next
configured are the Replication Interfaces which are IP-based connections configured on specific Sync
Replication Management Ports on the SPs of each system. These interfaces are part of the replication
management channel. The Replication Connection is configured next and is the same as discussed for
Asynchronous Replication; it defines the replication mode, the management interface and credentials for
both of the replicating systems, and completes the configuration of the management channel.

One of several Fibre Channel ports on each SP of the Unity system is configured and used for
Synchronous Replication. If available, the system will use Fibre Channel Port 4 of SPA and SPB. If not
available, then the system uses Fibre Channel Port 0 of IO module 0. If that is not available, then Port 0 of
IO module 1 is used.

The CLI console can be used to verify the Fibre Channel port that the system has specified as the
Synchronous FC Port on each SP. The slide shows an example of running the UEMCLI command
“/remote/sys show -detail”. In the abbreviated example output, Fibre Channel Port 4 is
specified by the system as the Synchronous FC port for SPA and SPB.

Once the Synchronous FC Ports on the source and destination systems have been verified, Fibre Channel
connectivity can be established between the corresponding ports on the SPs of each system. Direct
connect or zoned fabric connectivity is supported.

Although the Synchronous FC ports will also support host connectivity, it is recommended that they be
dedicated to Synchronous Replication.

The steps for creating remote replication sessions are somewhat different depending upon the replication
mode; either Asynchronous or Synchronous. Asynchronous remote replication steps are covered here.
The Synchronous replication steps will be covered in a later section.

Before an Asynchronous Replication session can be created, communications need to be established
between the replicating systems. The first step for establishing communications is to create Replication
Interfaces on both the source and destination systems. The interfaces will form the connection for
replicating the data between the systems. The next step is to create a Replication Connection between the
systems. This is done on either the source or the destination. It establishes the management channel for
replication. After it is created, the connection should be verified from the peer system. With these steps
complete, communications are now in place for a session to be created for a storage resource. A session
can be defined for a storage resource during the resource creation or if the storage resource already
exists, it can be selected as a source for replication. The replication settings are defined which include the
replication mode, RPO and the destination. The system will automatically create the destination resource
and the Unified Snapshot pairs on both systems. Then the replication session is established.

As mentioned previously, the steps for creating a Synchronous replication session are different than for
Asynchronous replication.

Before a Synchronous Replication session can be created, communications need to be established
between the replicating systems. The first step is to verify the Synchronous FC Ports on the source and
destination systems and establish FC connectivity to form the connections for replicating data. The next
step is to create Replication Interfaces on both the source and destination systems. The interfaces must
be created on the Sync Replication Management Ports and will form a portion of the management channel
for replication. A Replication Connection between the systems is created next. This is done on either the
source or the destination. It establishes the management channel for replication. After it is created, the
connection should be verified from the peer system. With these steps complete, communications are now
in place for a session to be created for a storage resource. A storage resource can now be selected for
replication. It can be selected during the resource creation wizard. Or if the storage resource already
exists, it can be selected from the storage resource Properties page. The next step is to define the
replication settings which define the replication mode and destination system. The system will
automatically create the destination resource and the Write Intent Logs on both systems. Then the
replication session is established.

This demo covers the creation of remote replication communications. It details synchronous FC port
verification and the creation of replication interfaces and the replication connection supporting
asynchronous and synchronous replication sessions.

Click the Launch button to view the video.

https://edutube.emc.com/Player.aspx?vno=Ad0hF2+GaFno6QWhy1U3MQ==&autoplay=true

This video demonstrates the creation of remote replication sessions. It details the creation of
asynchronous and synchronous replication sessions.

Click the Launch button to view the video.

https://edutube.emc.com/Player.aspx?vno=IfZJCQAp4dEjP7GUYOL4XA==&autoplay=true

This lesson focuses on the operations for Remote Replication. Replication operations of Failover with
Sync, Failover, Resume, and Failback are performed. Also detailed is data access from the remote site
during failover.

Replication sessions can be managed from the source or destination systems. The operations possible
will differ between source or destination. The operations will also differ based on the type of replication,
Asynchronous or Synchronous, and the state of the session.

The example is from an asynchronous file system replication that is in a normal state. From the source it is
possible to perform session Pause, Sync or Failover with Sync operations. From the destination it is only
possible to perform a session Failover operation.

The table provides a list of replication operations, a brief description of it, and which replication mode
supports the operation.

Because a NAS Server has a network interface associated with it, when the server is replicated its
network configuration is also replicated. During replication, the source NAS Server interface is active and
the destination NAS Server interface is not active. Having the source and destination interfaces the same
for the two NAS Servers is fine for an environment where there is similar networking in place for both the
source and destination sites. For an environment where the source and destination sites have different
networking, it is important to modify the network configuration of the destination NAS Server to ensure it
will operate correctly in a failover event. This is performed from the NAS Server’s Properties page on the
destination system. Select the Override option and configure the destination NAS Server for the
networking needs of the destination site. Because the NAS Server effectively changes its IP address when
failed over, clients may need to flush their DNS client cache to connect to the NAS Server when failed
over.

Because of the file system dependence on the NAS Server, replication operations must be performed first
on the NAS Server and then on the associated file system.

The example illustrates the order of failover for a NAS Server and an associated file system. Failover must
be done first to the NAS Server, then to its associated file system. The same is true for the Resume
operation after Failover. The Resume operation is initiated first on the NAS Server then the associated file
system. Failback is done in the same order; first to the NAS Server then to the associated file system.
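The ordering rule above can be expressed as a small helper that always sequences the NAS Server before its file systems. This is an illustrative sketch; the function and resource names are assumptions, not a Unity API.

```python
# Sketch of the ordering rule: Failover, Resume, and Failback must run on
# the NAS Server before any of its associated file systems.
from typing import List, Tuple

def run_operation(op: str, nas_server: str,
                  file_systems: List[str]) -> List[Tuple[str, str]]:
    """Return the (operation, resource) sequence, NAS Server first."""
    steps = [(op, nas_server)]                 # NAS Server always leads
    steps += [(op, fs) for fs in file_systems] # then each file system
    return steps
```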

Failover with Sync is an operation available to Asynchronous replication sessions. It is used for a planned
event, either scheduled maintenance or disaster recovery testing when both the primary and secondary
sites are available. It provides data availability from the secondary site.

The example illustrates the process of the operation. It starts with issuing the Failover with Sync operation
from Site A which is the primary production site.

The operation will remove access to the replicated object on Site A.

A synchronization from the Site A object to the Site B object happens next and when it completes the
replication session is paused.

The operation then makes the Site B object available for access.

Failover is an operation available to replication sessions of either mode, Asynchronous or Synchronous. It
is used for an unplanned event when the primary production site is unavailable. It provides access to the
replicated data from the secondary site.

The example illustrates the process of the operation. The primary production site becomes unavailable
and all its operations cease. Data is not available and replication between the sites can no longer proceed.

A Failover operation is issued from Site B which is the secondary production site.

The operation Pauses the existing replication session so that the session will not start again should Site A
become available.

The operation then makes the Site B object available for access and production can resume.

Resume is an operation available to replication sessions of either mode, Asynchronous or Synchronous. It
is used to restart a Paused replication session. When a failed over session is resumed from its Paused
state the direction of replication is reversed.

The example illustrates the process of a Resume operation for a session that is failed over. The Site A
replicated object must be available before the replication session can be resumed.

The Resume operation is issued from Site B.

The operation will restart the Paused session in the reverse direction. This will update the Site A object
with any changes that may have been made to the Site B object during the failover. This results in the
session resuming in the reverse direction and returning to a normal state. One may use this method of restarting replication rather than a Failback operation if production has been serviced from the Site B
object for a significant amount of time and thus has accumulated a significant amount of change from the
Site A object. Replication in the reverse direction will synchronize the Site A object to the data state of the
Site B object. To return production to the Site A object would require a session Failover operation followed
by another Resume operation.

Failback is an operation available to replication sessions that have failed over, either Asynchronous or
Synchronous. As its name implies it is used to return a replication session to its state prior to the failover
operation.

The example illustrates the process of a Failback operation. The Site A replicated object must be available
before the Failback operation can be initiated on a session.

The Failback operation is issued from Site B.

The operation will remove access to the Site B object and synchronize the Site A object to the data state
of the Site B object.

The operation then allows access to the Site A object for production.

Replication is restarted using the Site A object as a source and the Site B object as a destination. This single operation returns the object’s replication state to what it was prior to the failover. One would use this
operation to fail back from failovers lasting for only short time periods. It is important to note that if the Site
B object had accumulated a significant amount of change due to long periods of failover, the resync time
can take a significant amount of time.
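The failover, resume, and failback semantics described on the preceding pages (Failover pauses the session and moves production to the destination, Resume restarts the paused session in the reverse direction, Failback restores production and the original direction in one step) can be summarized as a small state model. This is a toy illustration of the documented behavior, not Unity code; the class and attribute names are invented:

```python
class ReplicationSession:
    """Toy model of the documented session behavior (illustrative only)."""

    def __init__(self):
        self.source, self.dest = "A", "B"   # replicating Site A -> Site B
        self.state = "active"
        self.production = "A"               # site currently serving hosts

    def failover(self):
        # Pause the session and move production to the destination site.
        self.state = "paused"
        self.production = self.dest

    def resume(self):
        # Restart a paused, failed-over session in the reverse direction.
        if self.state == "paused" and self.production == self.dest:
            self.source, self.dest = self.dest, self.source
        self.state = "active"

    def failback(self):
        # Single operation: sync the original source, return production
        # to it, and restart replication in the original direction.
        self.production = self.source
        self.state = "active"

s = ReplicationSession()
s.failover()           # production now on Site B, session paused
s.resume()             # session active again, replicating B -> A
print(s.source, s.dest, s.production)
# → B A B
```

Note how, after a Resume, returning production to Site A would take another Failover followed by another Resume, exactly as the text describes, whereas Failback achieves the return in one step.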

This video demonstrates failover and failback of a synchronous replication session of a LUN. It details host
access to data after the operations.

Click the Launch button to view the video.

In this lab, you will configure the Destination VSA to prepare for asynchronous replication in the next lab.

This lab covers Asynchronous remote replication. You will configure the remote replication communications between the two replicating systems. You will create a replication session for a LUN, then create replication sessions for a NAS Server and its associated file systems. Finally, you will perform the replication operations of Failover, Failback, Failover with Sync, and Resume. After each operation you will test data access to the replicated object.

This module covered the Replication data protection feature of Unity. It provided an overview of the
Replication feature, its architecture and capabilities. The creation and configuration of Synchronous
replication sessions for LUNs, Consistency Groups and VMware datastores were detailed. Asynchronous
replication sessions for LUNs, NAS Servers and file systems were also covered. Operations for
Replications were also presented.

This module focuses on the data protection that AppSync provides through its integration with Unity. It covers the policy-based levels of service provided to applications and the administrative roles, and lists the applications protected by AppSync on supported storage resources of Unity.

EMC AppSync is policy-driven, self-service software for managing copies of various applications and databases running on various EMC arrays. For the purposes of this presentation we will restrict the scope to Unity.

AppSync provides a web-based interface for centrally managing its lightweight application host software.


Application owners can use AppSync to create and manage copies of their data in a self-service manner.
This alleviates many of the conventional inefficiencies associated with copy requests, for things such as
creating copies for development, operational recovery, operational patch management, and analytic
reporting, to name just a few.

AppSync provides a simple, self-service, policy approach for protecting applications and data in Unity.
Protecting data is made easy by a customizable service catalog with pre-built service plans and a
workflow-like editing tool to call out actions that should be taken before and after the replication. After defining a Service Plan, an application owner can protect production data with space-efficient copies and recover data quickly, without involving IT. AppSync provides an application protection monitoring
service that generates alerts when the SLAs are not met.

AppSync creates write-consistent/application-consistent snapshots on the Unity array for each application
you add to a service plan.

Getting started with AppSync to protect your applications and VMware datastores is simple. The first step is to run the install wizard to set up the AppSync server. The install allows configuration of the default "Admin" password, as well as any non-standard ports for client/server communication, should they be needed. At this point you should create user accounts; these accounts will be associated with later steps in the process.

Once this process is complete, you log in to the server and register resources such as vCenter Servers, the Unity array, and RecoverPoint appliances. In most cases the plug-in is pushed to the host, but the plug-in installation can be performed directly on the host if required in a Windows environment.

The client then reports back the applications resident on the Unity array that are eligible to be replicated by AppSync. Once the clients have been added, they can be subscribed to a Bronze (local), Silver (remote), or Gold (local and remote) service plan. From that point forward, replication automatically takes place at the interval specified in the AppSync Service Plan settings. The resulting replicas may then be used for reporting, backup, or test/development environments.

AppSync offers three levels of service, namely Bronze, Silver, and Gold. Each storage array has its own
set of replication technologies, but for Unity, these are both Unity Snapshots and RecoverPoint
Bookmarks.
• For Bronze Service Plans administrators may choose to create local copies of their volumes using
either native snapshots or RecoverPoint local bookmarks.
• Silver Service Plans provide remote replication, which in this case is serviced exclusively by
RecoverPoint remote bookmarks.
• Gold Service Plans (both Local AND Remote) are serviced by RecoverPoint continuous local and
remote replication. This of course assumes RecoverPoint is present in the Unity SAN. If
RecoverPoint is not present, Bronze snapshot replica volumes are the only option.
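The tier-to-technology mapping above can be captured in a small lookup. The tier names and technology descriptions come from the text; the function itself is a convenience sketch for this course, not part of AppSync:

```python
# Illustrative mapping of AppSync service-plan tiers to the replication
# technologies available on Unity, per the description above.

def replication_options(plan, recoverpoint_present):
    options = {
        "bronze": ["Unity snapshot", "RecoverPoint local bookmark"],
        "silver": ["RecoverPoint remote bookmark"],
        "gold":   ["RecoverPoint local + remote replication"],
    }
    techs = options[plan.lower()]
    if not recoverpoint_present:
        # Without RecoverPoint, only native snapshots (a Bronze option) remain.
        techs = [t for t in techs if "RecoverPoint" not in t]
    return techs

print(replication_options("bronze", recoverpoint_present=False))
# → ['Unity snapshot']
```

Note that Silver and Gold both come back empty without RecoverPoint, reflecting the statement that Bronze snapshot replicas are then the only option.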

AppSync is designed to have multiple users each with different levels of privilege for executing a variety of
tasks. Each user can be assigned one or more roles that correspond to their responsibilities and
requirements. You can create users that are local to AppSync, as well as optionally add users through an
LDAP server (Windows Active Directory) which handles the authentication and authorization.

The four user roles are security administrator, resource administrator, service plan administrator, and data
administrator. User roles are cumulative, not hierarchical. The included “admin” administrator account has
all these roles. In an application driven scenario, DBA and messaging admins could be given the “Data
Administrator” privilege to run service plans and act on the resultant replicas. The Storage Administrator
would retain control over AppSync server and storage arrays by having the Security and Resource
Administrator roles. The Service Plan Administrator role would be given to whichever type of admin would be registering the applications for protection and configuring their RPOs.

The storage array plays a significant role in AppSync feature support. In the case of Unity, there is protection for both Unity File and Block; the protected applications are shown in the slide. AppSync provides the ability to perform protect, mount, unmount, and restore operations for all applications listed.

AppSync has the following license types:

• Full license - An array-based license.

• Starter Bundle license - Allows you to use an unlimited amount of storage on a specific array, with some restrictions on the features supported by AppSync. With the Starter Bundle, you can:

• Add only one storage array per AppSync server
• Mount only 20 application copies

• Volume Software License (VSL) - Allows you to use a specific amount of storage regardless of the array type.

• Unlimited license for DPS - Similar to a VSL license, but without any enforcement on capacity or utilization. Allows you to use storage from any type of array configured in AppSync with no limitation on the amount of storage from each array.

For RecoverPoint and VPLEX, no separate license is required. License checks are performed on the
back-end array.

For ViPR arrays, no separate license is required.

This lab covers the configuration of AppSync to replicate data stored on Unity, in an application consistent
manner.

This module covered an overview of the application protection AppSync provides through integration with Unity. It described the policy-based levels of service for protecting applications, listed the administrative roles, and identified the applications protected on supported Unity storage resources.

This module focuses on the Local LUN Move data mobility feature. It covers an overview of the feature, identifying use cases and detailing its architecture, requirements, and capabilities. The configuration and operation of the feature are also detailed.

This lesson will describe the Local LUN Move feature and identify its use case. The feature’s requirements
and capabilities are also listed.

Local LUN Move is a native feature of Unity to move LUNs within a single Unity or UnityVSA system. It
moves LUNs between different pools within the system. Or it can be used to move LUNs within the same
pool of a system. The move operation is transparent to the host and has minimal performance impact on
data access.

There are several use cases for the feature. It provides the ability to load balance between pools. For
example, if one pool is reaching capacity, the feature can be used to move LUNs to a pool that has more
capacity. It can also be used to change the storage characteristics for a LUN. For example, a LUN could
be moved from a pool comprised of a disk type and RAID scheme to a pool having a different disk type
and RAID scheme.

Another use of the feature is for data compression of an existing LUN. For example, an existing
uncompressed LUN on an all Flash pool can be moved within the same pool to use the inline compression
feature which will compress the LUN’s data during the move.

When a LUN is moved using the Local LUN Move feature, the moved LUN will retain its LUN attributes
and some extra LUN feature configurations. For example, if a LUN is moved that is configured with
snapshots, its existing snapshot schedule is moved but any snapshot data is not.

Also, if Replication is configured on a LUN, after the move that replication configuration will not be present
on the moved LUN. The graphic here details the LUN attributes and configurations that will and will not be retained after the move.

Another aspect of a LUN move is the LUN type, Thick or Thin. If a Thick LUN is moved with the GUI, the
resultant moved LUN will be a Thin LUN. If a Thick LUN is moved with UEMCLI, the moved LUN will be
maintained as Thick unless the -thin yes option is specified.
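The Thick/Thin outcome rules can be condensed into a small decision function. The code below is an illustrative restatement of the rules above for this course, not a Unity API:

```python
# Resulting LUN type after a Local LUN Move, per the rules above:
# a Thin LUN stays Thin; a Thick LUN moved via the GUI becomes Thin;
# a Thick LUN moved via UEMCLI stays Thick unless -thin yes is given.

def moved_lun_type(source_type, interface, thin_flag=False):
    """source_type: 'thick' | 'thin'; interface: 'gui' | 'uemcli'."""
    if source_type == "thin":
        return "thin"
    if interface == "gui":
        return "thin"          # GUI moves always produce a Thin LUN
    # UEMCLI keeps the LUN Thick unless the -thin yes option is specified
    return "thin" if thin_flag else "thick"

print(moved_lun_type("thick", "gui"))                     # → thin
print(moved_lun_type("thick", "uemcli"))                  # → thick
print(moved_lun_type("thick", "uemcli", thin_flag=True))  # → thin
```

The practical takeaway is that preserving a Thick LUN across a move requires using UEMCLI rather than Unisphere.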

Prior to using the Local LUN Move feature, a host will access the LUN created from a specific pool in a
normal fashion.

The Local LUN Move feature utilizes Transparent Data Transfer (TDX) technology. It was introduced with
the initial release of Unity for VVol operations. TDX is a transparent data copy engine that is multi-threaded and supports online data transfers. The data transfer is designed so its impact to host access
performance is minimal. TDX makes the LUN move operation transparent to a host.

When a move operation is initiated on a LUN, the move operation utilizes TDX and the move begins.

As TDX transfers the data to move the LUN, the move operation is transparent to the host. Even though
TDX is transferring data to move the LUN, the host still has access to the whole LUN as a single entity.

Eventually TDX transfers all of the data and the LUN move completes. The original LUN will no longer
exist and the host will access the moved LUN in its normal fashion.

There are several considerations to keep in mind for the Local LUN Move feature.

The feature supports moving a standalone LUN or LUNs within a Consistency Group.

To successfully move a LUN, it cannot be participating in a replication session. It cannot be expanding or shrinking, or being restored from a snapshot. The LUN cannot be participating in an import session from VNX, and it cannot be offline or in need of recovery.

Additionally, if a LUN Move session is ongoing, the Unity system cannot be upgraded.

The Local LUN Move feature capabilities are the same for all Unity models, physical systems and
UnityVSA. The feature supports having 100 move sessions defined. Only 16 sessions can be active at a
time.

Move sessions have Priority settings defined when the session is created. The possible priority settings
are: Idle, Low, Below Normal, Normal, Above Normal, and High.

The TDX resources used by the move operations are multi-threaded. Of the possible 16 active sessions,
TDX multiplexes them into 10 concurrent sessions based on session priority.
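The session limits above (up to 100 sessions defined, at most 16 active, multiplexed into 10 concurrent transfers by priority) suggest a simple priority-based selection, sketched below. This is illustrative only; it is not the actual TDX scheduler, and the session dictionaries are invented for the example:

```python
# Priority-based selection sketch: pick the highest-priority active move
# sessions to run concurrently. The priority names and the limits (16 active,
# 10 concurrent) come from the text; the scheduling code is illustrative.

PRIORITY = {"Idle": 0, "Low": 1, "Below Normal": 2,
            "Normal": 3, "Above Normal": 4, "High": 5}

def concurrent_sessions(active, limit=10):
    """Return the highest-priority sessions, up to the concurrency limit."""
    ranked = sorted(active, key=lambda s: PRIORITY[s["priority"]], reverse=True)
    return ranked[:limit]

# 16 active sessions with a mix of priorities (4 each of 4 levels).
active = [{"name": f"move_{i}", "priority": p}
          for i, p in enumerate(["High", "Idle", "Normal", "Low"] * 4)]
running = concurrent_sessions(active)
print(len(running), running[0]["priority"])
# → 10 High
```

In this sketch the four Idle sessions never make the cut while higher-priority work exists, which mirrors the intent of the Idle setting.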

This lesson covers the Local LUN Move feature configuration and operations. It describes the feature
configuration and lists the ways a move session is created. The lesson also describes how a session is
monitored and performs a move operation for the LUN.

The Local LUN Move feature has a “Set and Forget” configuration. There is nothing to preconfigure to
perform a LUN Move operation. The move is initiated simply by selecting the LUN to move and choosing
the Move operation from the More Actions dropdown list.

The session configuration simply involves selecting the target Pool to move the LUN into and selecting a
Priority for the move.

Once initiated, the move operation starts automatically. The operation will then continue to completion and
cannot be paused or resumed.

The move is completely transparent to the host. There are no actions or tasks to be performed on the host
for the move. Once the move is completed, the session is automatically cutover and the host data access
to the LUN continues normally.

There are two ways that a move session can be created. One method of starting a move session is
universal and works for any LUN built from any pool comprised of any disk type. The universal method for
starting a LUN Move Session is to go to the LUNs page and select the LUN to move. Then, from the More
Actions dropdown list, simply select the Move operation. A window for configuring the session will be
displayed to select a move Session Priority and the Pool the LUN will be moved to. The Session Priority
dropdown contains a list of move priorities to choose from. The Pool dropdown list will display a list of
available pools on the system.

The other method is only available to LUNs built on Flash disks and involves Inline Compression. For a
LUN built on Flash disks, from the General tab of the LUN Properties page there is an option for
Compression. When the option is enabled, this adds a Compress Now option to the More Actions
dropdown list for the LUN. When the Compress Now option is selected, the Compress Now confirmation
screen is displayed with information that the LUN will be moved within the same pool and its data will be
compressed during the move operation.

When a move session is started, its progress can be monitored from a few locations.

From the LUNs page, with the LUN selected that is being moved, the right side pane displays move
session information. The move Status and Progress are displayed. The Move Session State, its Transfer
Rate and Priority are also shown.

From the General tab of the LUN’s Properties page, the same information is displayed. The page does
provide the added ability to edit the session Priority setting.

A LUN Move operation will start a move session for the selected LUN. The operation is not available if the
LUN already has a session started. The operation will run to completion and the host will access the
moved LUN normally from the pool it has been moved to. The completed session will be displayed for
three days to provide a historical reference of the operation.

A LUN Cancel Move operation will cancel a move session. The operation is only available if a move
session is ongoing. The operation will cancel the move and the original LUN will be accessed normally
from its pool.

In this lab exercise you will move an existing LUN to a different storage pool. Data access to the LUN will
be performed prior to the move, during the move and after the move is complete. You will then move the
LUN back to the pool it was originally on.

This module described the Local LUN Move data mobility feature and identified its requirements and
capabilities. It also covered the Local LUN Move configuration and performing a move operation on a LUN
to a different storage pool.

This module focuses on the tasks for performing basic maintenance procedures on a Unity storage
system. Students will be able to identify the various service tasks required for maintenance and gather all
diagnostic information before starting any service tasks. Students will also be able to locate all applicable
documentation for performing any service tasks using the available EMC support sites. In addition, students will be shown how to recognize faults in the storage system by understanding the various alert levels and logs. Once a failed component is diagnosed, students should be able to perform the removal and replacement of that component. Lastly, students should be able to identify the available Data-in-Place conversion paths should a customer want to upgrade their current Unity to a higher-performing model.

This lesson covers the common service tasks performed at both the system level and the storage processor level. Other topics include a discussion of system service logs, service technical advisories, and viewing alerts and system logs.

Shown here is the System Service Overview page. Administrators can use the Service page to determine whether EMC Secure Remote Services (ESRS) has been configured and configure it if required; verify and manage support contracts, support credentials, and contact information; troubleshoot and repair the storage system and Storage Processors (SPs); and review technical advisories and log messages.

In the example we have launched Unisphere and selected “Service” from the available “System” options.
Menu options include Overview (default), Service Tasks, Technical Advisories, and Logs.

We can tell from this view that all Storage Processors are running in normal mode, and we can see the
software version and serial number of the Unity system.

Tasks available here include Test or Change ESRS, Refresh or Review Support Contracts, change
Support Credentials, and change Contact Information. These tasks can also be completed in the settings
window.

In the example, the Service Tasks tab has been selected from the menu options. (Note that anytime a menu option is selected, its text is displayed in blue.) On this page there are a number of storage system tasks as well as some Storage Processor tasks.

Let’s first take a look at the various storage system tasks. To assist with diagnosing and resolving
problems with your system, users should collect service information about the system and save it to a file.
The file can be used by your service provider to analyze your system.

The example shows the “Collect Service Information” task has been selected (highlighted in blue). To view additional information, select “More information,” which launches a help page in Unisphere. To run the service task, select the blue “Execute” button.

Executing the Collect Service Information task displays a window from which administrators can either
create a new service data collection file or download a service data collection file from existing collection
files if previous collections were already done.

Select the “+” to create a new service data collection file. The “Collect Service Information” window will open, and a job will be created and run. It can take up to 10 minutes for the data collection to complete. Once it completes, you can choose to open the file or save it to a location of your choice. By default, selecting Save will save the file to the logged-in user’s “Downloads” directory. For example, if you are logged in as Administrator, the file is saved to the Administrator > Downloads directory.

To download an existing file, highlight the file you want to download and select the “download” icon in the
upper left hand corner. You can then choose to open or Save the file to a location of your choice.

The “Save Configuration” option saves details about the current system configuration settings to a local
file. Your service provider can use this file to assist you with reconfiguring your system after a major
system failure or system re-initialization. You can also use this file to keep a record of system
configuration changes. Select “Save Configuration” and then “Execute.” As with the “Collect Service Information” option, users can create a new file or download an existing file.

It is a best practice to save the configuration settings after each major configuration change to ensure you
have a current copy of the storage system configuration settings. It is also recommended that you save
the file to a remote location as a backup against possible failures. Be aware that you can only request one configuration save at a time; when making multiple save requests, allow one request to complete before initiating the next. Note that only details about your system configuration are saved to the file; you cannot restore your system from this file. The save process allows a maximum of 120 minutes in which to complete the capture. The system will abort uncompleted capture requests upon reaching this time limit.
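Scripted environments often poll a long-running task like this until it completes or the documented limit expires. The helper below is a generic polling sketch, not a Unity tool; check_done is a placeholder for whatever status check (a Unisphere job view or a management API query) is actually used:

```python
import time

# Generic wait-with-deadline pattern for long-running service tasks such as
# Save Configuration: poll until done, but give up at the documented limit
# (the system aborts captures after 120 minutes). check_done is a placeholder.

def wait_for_capture(check_done, limit_s=120 * 60, poll_s=30,
                     clock=time.monotonic, sleep=time.sleep):
    deadline = clock() + limit_s
    while clock() < deadline:
        if check_done():
            return True
        sleep(poll_s)
    return False   # capture did not finish within the limit

# Simulated run: the status check reports "done" on the third poll.
polls = iter([False, False, True])
print(wait_for_capture(lambda: next(polls), poll_s=0, sleep=lambda s: None))
# → True
```

Serializing requests through a helper like this also respects the one-save-at-a-time restriction noted above.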

The configuration details include information about:

• System specifications

• Users

• Installed licenses

• Storage resources

• Storage servers

• Hosts

The example displays a saved configuration file created using the Service > Save Configuration option. Users have the option to save the file to a location of their choice. Files are saved in HTML format with a timestamp and the array serial number. Open the file and select one of the many tabs on the menu bar.

In this example, the “storage” tab has been selected and the various sub-menus are displayed. Configuration files are key for service providers when analyzing a system configuration.

There may be times when communications between Unisphere and the storage system get interrupted.
Restarting the management server can resolve this issue. The restart process will take approximately 3-5
minutes to complete. During that time, alerts will be generated indicating that the connection is down and, once it is restored, that the connection is back up. Although the “Reboot” option for the Storage Processor can be used as well, the “Restart Management Server” option is preferred since data access to the storage system is preserved.

You cannot restart the management software when both Storage Processors (SPs) are in Service Mode.

Selecting the “Reinitialize” option will reset the storage system to the original factory settings. Both
Storage Processors (SPs) for a dual SP system or the single SP for a single SP system must be installed,
operating normally, and in Service Mode. To put the SP(s) in Service Mode, execute the Enter Service
Mode task on each SP below. Note with physical deployments, if you removed an SP or an SP faulted,
you must replace it before placing it in Service Mode.

Important: Reinitializing will destroy all system configuration settings and stored data on the storage
system. It is recommended that you back up all the data and configuration settings to an external storage
system. Once the system is reinitialized, copy or restore all the data back to it. Unisphere may time out when the system starts reinitializing. Allow 90 to 120 minutes for the system to be reinitialized.

After the storage processor has been initialized, the default password will be “service”. Whenever any service operations need to be done, the service password must be entered. As a best practice, you may want to change the default service password once the system is available.

As part of a maintenance procedure or in the case where rebooting or reimaging the storage processor
could not resolve an issue, a power cycle of the storage system may be warranted. The system shut down
or power down procedure involves shutting down the Storage Processors (SPs). When all SPs are down,
all I/O services stop and hosts lose access to the system. Before performing this procedure, it is
recommended that you disconnect all network shares, LUNs, and VMware datastores from each host to
prevent data loss. When the system is fully powered up, you can reconnect the hosts to these storage
resources.

The shutdown process can take between 10 to 20 minutes to complete. During this time, the connection to
the system will be lost and you will not have access to Unisphere or the online help. It is important that you
print the power up instructions from the help menu to be sure you have all of the information you need to
power up the system. You will need to physically remove power from the SPs and then reconnect them to
power the system back up. The system has shut down, and power can be removed from the SPs, when the Status Fault LED is blinking amber and the power LED is solid green.

Important: Make sure to read all instructions before performing this service task. Users will have to enter
the service password to complete the power down operation.

Note the power up procedure must be performed in a particular order. Click Help to review and print the
power up procedure prior to shutting down the system.

To connect to the system and perform advanced system maintenance, you need to enable the Secure
Shell (SSH) protocol on the storage system. To enable SSH, highlight the task and click Execute. You will
be prompted for the service password. This service action allows you to run service tools, such as service
actions or service scripts, on the storage system. Once SSH is enabled, you or your service provider can
run the tools through a service portal. When the service tools have finished running, disable the SSH
protocol to ensure that the system is secure. To learn more about using service commands, refer to the
Unity Service Commands Technical Notes document available on support.emc.com.

Hardware upgrades or Data-in-Place (DIP) conversions allow you to upgrade from one Unity storage
processor to a larger capacity Unity storage processor while leaving all data and configurations in place.
Upgrade and replace hardware components.

Important: Hardware upgrade procedures may take up to 120 minutes. This procedure will involve full
shutdown of the storage system, halting of all I/O services, and physically replacing hardware
components.

The example shows the “Hardware Upgrade” wizard launched on a Unity 300F storage system. Available
options include any Unity storage system with a higher capacity. Remember this is an offline procedure.
I/O will be interrupted and data will become unavailable!

The “Enter Service Mode” task is a Storage Processor service task, meaning it applies to an individual SP,
and not to the storage system as a whole. Entering service mode may be required to reimage an SP,
reinitialize the storage system, or replace certain hardware components. An SP will also automatically
enter service mode when it is unable to resolve an issue and thus requires intervention.
When an SP enters service mode it stops servicing I/O to hosts and all I/O loads on the SP fail over to the
peer SP, if it is healthy. Note also that one of the SPs will be the “Primary” SP, as shown next to SPA in the
example.

Once you select the task and click Execute, you will be prompted for the service password. Wait at least
10 minutes for the SP to enter service mode and do not attempt any actions in Unisphere until it has
completed.

To verify that the SP is in service mode, check that the Mode field displays Service. Here, both SP Modes
are reported as Normal. Note that Unisphere may not refresh automatically. If prompted, reload
Unisphere. If not, refresh the browser manually. To physically confirm the SP is in service mode, ensure
the SP fault LED flashes alternating amber and blue.

The Reboot an SP task can resolve minor problems with the SP, its components, or the system software
on the SP. This task is also used to reboot an SP that is in service mode to return it to normal mode,
provided that the SP is healthy enough to do so. Note that when an SP reboots, it stops servicing I/O to
hosts. If the peer SP is healthy and in normal mode, it will service the rebooting SP’s I/O to hosts and its
write cache will remain enabled.

Once you highlight the task and click Execute, you will be prompted for the service password. After waiting
a few minutes and refreshing Unisphere, confirm the SP reboot has completed by checking the SP Mode
field. It should display Normal.

In order to monitor the SP boot process, you need to establish a Serial Over LAN (SOL) connection to the
SP service port. This is accomplished using the IPMI tool. The IPMI tool can be found on support.emc.com
by searching the Unity Support Tools section. First, install the tool to the C:\ directory of your service
laptop. Next, set your local network adapter to the IP address 128.221.1.250 and then issue the command
shown to connect to SP A. If you need to connect to both SP A and SP B, you can connect to the service
ports via a small Ethernet switch. Open another command prompt window and issue the command again
to SP B. It is the same command except that the IP address for the service port for SP B ends in 253.

Please refer to the EMC IPMI Tool Technical Notes document available on support.emc.com for the
complete details.

The Reimage a Storage Processor (SP) task is used to safely fix problems with the system software that
could not be resolved by rebooting the SP. Reimaging reinstalls the system's root operating system while
leaving the user's data intact. The system configuration settings and stored files will not be changed.
Reimaging an SP requires that it is first placed in service mode. That is why the Execute button for this
task is now greyed out. Once you have the SP in service mode and execute the reimage task, wait at least
20 minutes while the system reimages the SP and do not attempt any actions in Unisphere until it has
completed. After it has completed reimaging, it will boot into service mode. You can then use the reboot
service task to reboot it to Normal Mode.

When an SP is in service mode it stops servicing I/O to hosts. In physical deployments, all NAS servers on
the SP fail over to the other SP, if it is healthy. By default, when reimaging has completed, the NAS
servers fail back to the SP. If the Failback Policy is disabled, all NAS servers on the SP will not fail back
automatically and will remain on a single SP. Performance can degrade significantly when all NAS servers
reside on a single SP. You can fail back the NAS servers manually.

The “Reset and Hold” service task attempts to reset and hold the selected SP, so that users can replace
the faulty I/O Module(s) on that SP. The process can take several minutes to complete.

While in the “Reset and Hold” state, the SP stops I/O services. All storage resources and NAS servers on
the SP fail over to the other SP, if it is healthy. When the SP returns to the Normal Mode, by default the
storage resources and NAS servers fail back to it with minimal disruption to hosts and I/O services
resume.

An SP that is held in reset cannot be rebooted from Unisphere unless Unisphere can communicate with
the peer SP. If the peer SP is not running, the SP that is held in reset would need to be physically power
cycled in order to reboot it.

Now that we’ve looked at all of the service tasks, let’s take a look at the other two tabs in the System
Service window. The Technical Advisories tab displays up-to-date, real-time information and advisories
specific to your system from the Knowledgebase available on the support website. Here you can select an
advisory from the list and choose the link in the Knowledgebase Number column. A new Web browser
displays the technical advisory article from the support website. Note that you can customize the view as
well as sort, filter, and export data.

The Unity storage system monitors and reports on a variety of system events. These events are collected
and written to a log shown here in the Services > Logs menu option. Depending on the severity of the
alert, log entries are displayed with a colored icon, shown in the chart at the bottom. Log entries are useful
for providing a quick status of the system and help in determining which components may be impacting
the storage system.

Note that you can customize the view as well as sort and filter data by date and time, user, source SP,
Category and Message.

You can also configure remote logging. Select the “Manage Remote Logging” link to navigate directly to
the Specify Remote Logging Configuration section in the System Settings menu.

To configure remote logging, the remote host must be accessible from the storage system. By default, the
storage system transfers log information on port 514 using the UDP protocol. For more information on
setting up and running a remote syslog server, refer to the documentation for the operating system
running on the remote system.
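Before pointing the array at a remote syslog host, it can be useful to confirm that a UDP datagram actually reaches the intended port. The following is a minimal sketch in Python: port 5514 stands in for the privileged 514 used by the storage system, and the message framing shown (a simple RFC 3164-style PRI prefix) is illustrative, not the array's exact format.

```python
import socket

def send_syslog(host, port, message, facility=1, severity=6):
    """Send one RFC 3164-style syslog datagram over UDP."""
    pri = facility * 8 + severity  # e.g. user.info -> PRI value 14
    data = "<{}>{}".format(pri, message).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(data, (host, port))

def receive_one(port, timeout=5.0):
    """Bind a UDP port and wait for one datagram, standing in for the
    remote syslog server the storage system would send to."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("127.0.0.1", port))
        sock.settimeout(timeout)
        data, _addr = sock.recvfrom(4096)
        return data.decode("utf-8")
```

Running the receiver on the intended syslog host while sending a test message from another machine verifies that no firewall is dropping UDP traffic on the chosen port.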

The Support page provides links to resources for learning about and getting assistance with your storage
system. Options include watching how-to videos, accessing online training modules, downloading the
latest product software, searching and participating in the online community, and much more. Of
importance to the CE is “Replace Drives, Power Supplies, and Other Parts,” which provides valuable
information and links on ordering parts and on identifying and replacing a failed CRU.

For physical and virtual deployments, you can access support options through your product's support
website. To register your system, download licenses for your storage system, or obtain update software,
you must first establish a support account.

If you are using a Community Edition, go to the EMC Community Network website.

The EMC Community Network website includes product-specific communities that include relevant
discussions, links to documentation and videos, events, and more. The community not only provides you
more information on the products but also helps guide you through specific issues you may be
experiencing.

Alerts are usually events that require attention from the system administrator and one of the first indicators
that a system may need maintenance. For example, you might receive an alert telling you that a disk has
faulted or that the Unity system is running out of space.

There are several ways for viewing alerts on a Unity storage system. On the left Dashboard menu,
navigate to the EVENTS section and select “Alerts”. The system displays all alerts and a brief description
of the alert in the right window.

Another method is to click on the “bell” from the top menu. A window will appear displaying recent alerts
and provide links to “Search the Knowledge Base” or “View All Alerts”.

Users also have the option to “Customize” the Dashboard view and add a “System Alerts” view block (not
shown). With this option, when the Unisphere Dashboard is selected, the view block shows these alerts
categorized by critical, error, and warning (see next slide). Clicking one of the icons will open the Alerts
page showing the records filtered by the chosen severity level.

Alerts will display an icon to indicate the severity of the issue.


System alerts with their severity levels are recorded on the System Alerts page. The Dashboard of the
Unisphere interface shows an icon with the number of alerts for each recorded severity category. The link
on these icons will open the Alerts page filtered by the selected severity level.

Shown here is a table providing an explanation about the alert severity levels from least to most severe.
Logging levels are not configurable.
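The dashboard behavior described here, where clicking a severity icon opens the Alerts page filtered to that level, amounts to a simple count-and-filter over alert records. The sketch below illustrates this; the exact severity names and their ordering are assumptions based on the categories mentioned in this module (critical, error, warning, plus informational levels):

```python
# Assumed severity names, ordered least to most severe,
# mirroring the table on this slide.
SEVERITY_ORDER = ["Info", "Notice", "Warning", "Error", "Critical"]

def count_by_severity(alerts):
    """Per-level counts, as shown on the dashboard view block icons."""
    counts = {level: 0 for level in SEVERITY_ORDER}
    for alert in alerts:
        counts[alert["severity"]] += 1
    return counts

def filter_by_severity(alerts, level):
    """What clicking a severity icon does: show only that level's records."""
    return [alert for alert in alerts if alert["severity"] == level]
```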

To view detailed information on a system alert, select the alert from the list of records of the Alerts page.

Details about the selected alert record will be displayed in the right pane. The information will include the
time the event was logged, severity level, alert message, description of the event, acknowledgement flag,
the component that was affected by the event, and the current status of the component.

This lesson covers the Software and License upgrade options available from the “Settings” page in
Unisphere. The lesson will discuss the methods to locate, download and install licenses on a Unity storage
system.

The example displays the available options under the “Software and Licenses” settings option. Navigate to
and select the “Settings” icon in the upper right corner of Unisphere to launch this window.

Once selected, the Software and Licenses menu is displayed. Use the scroll bar to view all
licenses. A “green” icon indicates an installed license for the respective feature/function. A “red” icon
indicates the license for the feature/function is not installed or valid.

Users have the option of selecting the blue text to “Install License” or “Get License Online”.

In the example we see the issued license does not include Data at Rest Encryption. Unity storage systems
are orderable as either encrypted or non-encrypted. The encryption state is set the first time a license is
applied and you cannot apply another license at a later time to enable or disable. A destructive re-
initialization would be required to change the encryption state.

Select “Install License” and follow the wizard to locate and install the requested license.

To “Get License Online” users must have a valid support account to download the license.

Updates are made when a new version or patch is released, or when new information is discovered.
Depending on the implementation, you can obtain updates from online support or from your service
provider. It is a best practice, time permitting, to perform a system health check about a week before
installing an upgrade. This will ensure you have time to resolve underlying problems that may prevent a
successful update. Select “Perform Health Check” to launch the script.

Selecting “Start Upgrade” prompts the user to select a file to upload to the server. Users can use the
”Browse” option to search for the location of the file. When you upload a new upgrade file onto your
system, it replaces the previous version since there can only be one upgrade candidate on the system at a
time. In the example, the current version of code is displayed along with the release date of the version.

Selecting “Download New Software” brings the user to the support page where the latest released version
of the Unity OE upgrade file is located.

You will be prevented from using Unisphere or the CLI to make configuration changes to the system while
the upgrade is in progress. Also note, that Unisphere is temporarily disconnected during the upgrade when
the primary storage processor reboots, and may take a few minutes to be automatically reconnected.

Note: Automatic Reboot of SPs: The default option during a software upgrade is to automatically reboot
both storage processors, one-at-a-time, as soon as the software upgrade image is staged and the system
is prepared for upgrade. If you like tighter control over when the reboots happen, you can clear this option
so that upgrade can be started and staged, but neither storage processor will reboot until you are ready.
Doing so reduces the duration of the window (by approximately 10-20%) when the storage processors
could be rebooting, which makes it easier to plan for a time of reduced activity during the upgrade. If that
window is not a factor during your upgrade, then leave the default option of rebooting the storage
processors automatically to avoid delays with the upgrade completing.

To summarize, selecting this option will automatically reboot your storage processors during the upgrade
and finalize the new software. Unselecting this option will pause the upgrade after all non-disruptive tasks
have completed. User input is required to manually reboot the storage processors and finish the upgrade.

At times, drive firmware may need to be updated for compatibility with the Unity OE version. From the
“Drive Firmware” option, verify the version under the “Firmware Version” column.

If a new version is available, use the “Obtain Drive Firmware Online” option shown on the bottom.

This option will link to the support page and display any new drive packages.

In the example, we see a new “Unity_Drive_Firmware_Package-V2” is available for the existing drives in
the array. Click on the package and download it to a directory of your choice, then use the “Install
Drive Firmware” option to load it.

Locating and downloading Language Packs follows the same process as drive firmware. If a new version is
available, use the “Obtain Language Pack Online” option shown on the bottom.

This option will link to the support page and display any available Language packages.

In the example, we see several “Language Packs” available. Click on the package you want to download
and choose a directory of your choice, then use the “Install Language Pack” option to load it.

The “System Limits” option provides users with a table of limits for the currently running Unity OE version.
Be aware these limits may change between versions. The SolVe pocket reference guide is also a good
source for the limits. Use the scroll bar to view all limits.

This lesson covers how to locate and download the maintenance documentation for properly identifying,
removing and replacing a failed CRU in a Unity storage system.

When component replacement needs to be done, locate the proper documentation by navigating to
support.emc.com and searching on Unity. Select “Documentation” and then select “Manuals and
Guides” > “Maintain”.

Use the SolVe desktop for locating and downloading documents for EMC products. In the example, Unity
has been selected and displays a number of options for installation, maintenance and many other guides
such as FRU and CRU replacements procedures and hardware information.

By selecting the “Top 10 Service Topics” > Unity from the top menu, users can find other useful
information on the Unity product.

When performing any type of maintenance procedure, it’s always a good idea to download and read the
procedure to be performed before going on-site.

SolVe can be used to generate CRU and FRU procedures for EMC products. In this example, SolVe has
been launched and Unity is selected as the product. To generate a procedure for CRUs, select “CRU
Procedures” from the menu, then select the procedure you are going to perform, DPE procedures in this case.

Select the type of procedure, Hardware Replacement Procedures are shown here.

Finally, select the hardware from the list to generate the document. Read all the instructions and perform
the procedures in the steps shown in the document. Note that SolVe is updated periodically, so it’s a good
idea to load the latest version when prompted to do so. Always try to maintain the most current version.

The example displays the Unity InfoHub links. Here you can find some of the key technical
documentation, presented in PDF and HTML formats, videos, CRU procedures, and much more, all
related to Unity and UnityVSA systems.

These demos are available as part of the Learning Center resources.

This lesson covers the Data-in-Place conversions for Unity storage systems. Students will be able to
identify which Unity systems are candidates for conversions and explain the steps needed to perform the
DIP tasks.

Data-In-Place conversions are now supported with the v4.1 OE. Data-In-Place (DIP) conversion is a
procedure used to upgrade storage processors while leaving all data and configurations intact. For
example, a Unity 300 can be upgraded to any of the other 3 faster models (400, 500, 600). This new
feature addresses the case where the customer wants higher performance or has hit the system limits for
the current model.

Conversions allow for a Unity array to be upgraded to any of the higher models. DIP conversions are
supported for Unity physical systems only and not UnityVSAs.

Conversions include both All-Flash and Hybrid systems: All-Flash > All-Flash and Hybrid > Hybrid.

A DIP conversion is an offline CUSTOMER procedure that steps users through a wizard.

Also, having D@RE enabled does not affect the ability to perform a DIP conversion.

Note: The Data in Place conversion is an offline procedure. Before starting the conversion, you should
disconnect all network shares, LUNs, and VMware datastores from each host to prevent data loss. When
the system is fully powered up, you can reconnect the hosts to these storage resources.

The slide shows the available upgrade paths for Unity storage systems. As the slide shows, any Unity
storage array can be upgraded to a higher performing array. For example, we can see a Unity 400 can be
upgraded to either a Unity 500 or Unity 600 storage system.

DIP upgrades are applicable across the entire Unity product line. Both hybrid and All-Flash.

Note: Downgrading a Unity array is not supported.
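The upgrade matrix described above reduces to a simple rule: any model can move to a strictly higher model within its own line (All-Flash to All-Flash, Hybrid to Hybrid), and never downward. A sketch, assuming model number order matches performance order as the upgrade-path slide indicates:

```python
# Hybrid model numbers; the All-Flash line (300F-600F) mirrors this
# and converts only within its own line.
MODELS = [300, 400, 500, 600]

def upgrade_targets(model):
    """Valid DIP targets: any higher model in the same line.
    Downgrades are not supported."""
    if model not in MODELS:
        raise ValueError("unknown Unity model: {}".format(model))
    return [m for m in MODELS if m > model]
```

For example, `upgrade_targets(400)` yields the Unity 500 and 600 options mentioned on this slide, while a Unity 600, already the top model, has no targets.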

In order to perform a DIP conversion, there are a few things to consider. Since the operation must be
performed offline, both SPs must be powered off before the swap can be completed.

Only SPs are swapped; chassis and drives stay intact.

Any CNA SFPs or I/O Modules in the SP need to be moved over during the SP swap. There is no need to
replace internal components such as the DIMMs, M.2 SSD device, or fans.

Hybrid model to All-Flash model, or vice versa, is not supported. Also, as mentioned on the previous slide,
DIP conversions are only performed on Unity systems from lower to higher models. DIP is not supported
on UnityVSA models (no hardware).

Data-In-Place conversions can be performed from Unisphere by navigating to the “Dashboard > Service”
menu and selecting “Service Tasks > Hardware Upgrade”. Be aware the hardware upgrade procedure may
take up to 90 minutes. This procedure will involve full shutdown of the storage system, halting of all I/O
services, and physically replacing hardware components.

The Upgrade Wizard displays the current hardware model and prompts the user to select a Unity model
for the upgrade. Note that only higher performing models will be displayed as available options.

Once a target model is selected, it is recommended that you perform a system health check prior to
starting the upgrade. This ensures the system is in a healthy and upgradable state. If there are issues that
need attention, “Warning” and/or “Error” messages may be displayed. Note that Warnings typically do
not prevent the system from performing the upgrade, whereas Errors need to be resolved before
continuing. Read the Summary page and then continue the upgrade.

Start the upgrade process. The process now goes through a series of steps: it will perform a health check,
prepare the system, and halt the system so the CE can swap the SPs and power cables. The time needed
depends on the number of I/O Modules moved over, the number of SFPs, whether the new blades are out
of the shipping package, and so on.

Once completed, the upgrade will continue by updating the firmware, reimaging the M.2 device, starting
the system services (stack startup time depends on the number of configured storage resources and other
configurations), and performing a cleanup. The result should now display the target system personality.
The total time to complete the process is anywhere from 48 to 123 minutes. Note the Unisphere GUI
displays an upgrade time of 90 minutes.

Before performing a Data-In-Place conversion, the CE should have the conversion kit on site.

The kit includes the following:

• Two (2) brand new SPs

• Six (6) cable clamps

• New part number sticker sheet

• One page document pointing the user to Unity InfoHub

• Data-In-Place Conversion Guide will be available online

When the Upgrade wizard halts the array, the CE will begin the physical hardware replacement steps. The
steps are performed in the following order:

• To ensure that the system is halted, check that the fault and power lights on both SPs are off, and that
  the amber fault LEDs on both power supplies are lit. The solid green AC/DC power indicator LED will
  still be lit on the power supplies.

• Put cable clamps on I/O module, CNA, and onboard port cables for both SPs

• Remove the power cords for SPA and SPB and wait for the DAE to power off.

• Remove all other cables for SPA

• Remove old SPA

• Insert new SPA

• Insert CNA SFPs, I/O Modules from old SPA, and insert all cables except for power for new SPA

• Repeat steps 4-7 for SPB

• Insert power cords in both SPA and SPB

• Put the new part number sticker over the PSNT tag for the new model. Note the serial number does not change.

In this lab you will collect service information and configuration information from the UnityVSA system.
You will also enter Service Mode, look at alerts and logs on the system, and look at CRU replacements.

This module focused on the key topics and tasks for performing basic maintenance procedures on a Unity
storage system. We discussed the system and storage processor service tasks required for maintenance.
Some of these tasks, such as gathering diagnostic information, are performed prior to doing any
maintenance on the system.

Students were shown how to gather documents in preparation for performing any maintenance using the
support page or the SolVe desktop application.

In addition, the student was shown how to recognize faults in the storage system by understanding the
various alert levels and logs. Once a failed CRU is diagnosed, the student should be able to perform the
removal and replacement of that component. Lastly, the module covered the available Data-in-Place
conversion paths for a Unity storage system, should a customer want to upgrade their current Unity to a
higher performing model.

This course covered the knowledge necessary to understand the features, functionality and key use cases
of a Unity storage system. The topics included an overview of the Unity platform, hardware components,
UnityVSA, installation and configuration, storage provisioning, data protection and mobility, maintenance
activities and Best Practices for getting the most performance out of your Unity storage system.

This concludes the training.

