Unity Deep Dive - Lecture
Copyright © 2017 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC, and other trademarks are trademarks
of Dell Inc. or its subsidiaries. Other trademarks may be the property of their respective owners. Published in the
USA.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” DELL EMC MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO
THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR
PURPOSE.
Use, copying, and distribution of any DELL EMC software described in this publication requires an applicable software license. The trademarks, logos, and service marks
(collectively "Trademarks") appearing in this publication are the property of DELL EMC Corporation and other parties. Nothing contained in this publication should be construed
as granting any license or right to use any Trademark without the prior written permission of the party that owns the Trademark.
AccessAnywhere Access Logix, AdvantEdge, AlphaStor, AppSync ApplicationXtender, ArchiveXtender, Atmos, Authentica, Authentic Problems, Automated Resource Manager,
AutoStart, AutoSwap, AVALONidm, Avamar, Aveksa, Bus-Tech, Captiva, Catalog Solution, C-Clip, Celerra, Celerra Replicator, Centera, CenterStage, CentraStar, EMC
CertTracker. CIO Connect, ClaimPack, ClaimsEditor, Claralert ,CLARiiON, ClientPak, CloudArray, Codebook Correlation Technology, Common Information Model, Compuset,
Compute Anywhere, Configuration Intelligence, Configuresoft, Connectrix, Constellation Computing, CoprHD, EMC ControlCenter, CopyCross, CopyPoint, CX, DataBridge ,
Data Protection Suite. Data Protection Advisor, DBClassify, DD Boost, Dantz, DatabaseXtender, Data Domain, Direct Matrix Architecture, DiskXtender, DiskXtender 2000, DLS
ECO, Document Sciences, Documentum, DR Anywhere, DSSD, ECS, elnput, E-Lab, Elastic Cloud Storage, EmailXaminer, EmailXtender , EMC Centera, EMC ControlCenter,
EMC LifeLine, EMCTV, Enginuity, EPFM. eRoom, Event Explorer, FAST, FarPoint, FirstPass, FLARE, FormWare, Geosynchrony, Global File Virtualization, Graphic
Visualization, Greenplum, HighRoad, HomeBase, Illuminator , InfoArchive, InfoMover, Infoscape, Infra, InputAccel, InputAccel Express, Invista, Ionix, Isilon, ISIS,Kazeon, EMC
LifeLine, Mainframe Appliance for Storage, Mainframe Data Library, Max Retriever, MCx, MediaStor , Metro, MetroPoint, MirrorView, Mozy, Multi-Band
Deduplication,Navisphere, Netstorage, NetWitness, NetWorker, EMC OnCourse, OnRack, OpenScale, Petrocloud, PixTools, Powerlink, PowerPath, PowerSnap, ProSphere,
ProtectEverywhere, ProtectPoint, EMC Proven, EMC Proven Professional, QuickScan, RAPIDPath, EMC RecoverPoint, Rainfinity, RepliCare, RepliStor, ResourcePak,
Retrospect, RSA, the RSA logo, SafeLine, SAN Advisor, SAN Copy, SAN Manager, ScaleIO Smarts, Silver Trail, EMC Snap, SnapImage, SnapSure, SnapView, SourceOne,
SRDF, EMC Storage Administrator, StorageScope, SupportMate, SymmAPI, SymmEnabler, Symmetrix, Symmetrix DMX, Symmetrix VMAX, TimeFinder, TwinStrata, UltraFlex,
UltraPoint, UltraScale, Unisphere, Universal Data Consistency, Vblock, VCE. Velocity, Viewlets, ViPR, Virtual Matrix, Virtual Matrix Architecture, Virtual Provisioning, Virtualize
Everything, Compromise Nothing, Virtuent, VMAX, VMAXe, VNX, VNXe, Voyence, VPLEX, VSAM-Assist, VSAM I/O PLUS, VSET, VSPEX, Watch4net, WebXtender, xPression,
xPresso, Xtrem, XtremCache, XtremSF, XtremSW, XtremIO, YottaYotta, Zero-Friction Enterprise Storage.
To provide a “hands on” lab exercise focus, the course lecture sections have been shortened. The
downloadable Student Resource Guide PDF file provides a complete set of information to support this
exam and is designed to be your study guide for it. The Lecture Slides PDF file contains only the lecture slides
and is not intended to fully support the exam. Please be sure to reference the Student Resource Guide
when studying for the exam.
Unity is optimized for core IT applications. These include transactional workloads such as Oracle, SAP,
SQL, Exchange or SharePoint, server virtualization and end user computing such as VDI, and all other
applications that need traditional file, block, or unified storage. All models are available in an All Flash (AF)
option.
Unity is also a good fit for partner-led configurations optimized for virtual applications with VMware and
Hyper-V integration. The Unity platform with multi-core optimized architecture unleashes the power of
Flash, taking full advantage of the latest Intel multi-core technology.
The Unity family of storage arrays comes in two variants: an All Flash version as shown on the
left and a Hybrid version on the right. To identify the different models, the front bezels have a slightly
different appearance, with the All Flash bezels being white and the Hybrid bezels blue and white.
To differentiate the two models, the naming conventions are slightly different as well.
• An All Flash model is designated by the addition of an “F” after the name. For example, Unity
400F.
• Hybrid models simply use the model number, such as Unity 600. All Unity models directly
integrate with the current VNX portfolio of products.
LUNs and Consistency Groups provide generic block-level storage to hosts and applications that use the
Fibre Channel (FC) or iSCSI protocol to access storage in the form of virtual disks. A LUN (Logical Unit) is a
single element of storage, while a Consistency Group is a container for one or more LUNs.
File Systems and shares provide network access to NAS clients in Windows and Linux/UNIX
environments. Windows environments use the SMB/CIFS protocol for file sharing, Microsoft Active
Directory for authentication, and Windows directory access for folder permissions. Linux/UNIX
environments use the NFS protocol for file sharing and POSIX access control lists for folder permissions.
VMware datastores provide storage for VMware virtual machines through datastores that are accessible
through the FC or iSCSI protocols (VMFS) or the NFS protocol.
Another type of supported VMware datastore is the VVol datastore, available as VVol (Block) and VVol
(File). These storage containers store virtual volumes (VVols), which are VMware objects that correspond
to a Virtual Machine (VM) disk and its snapshots and clones. VVol (File) datastores use NAS protocol
endpoints and VVol (Block) datastores use SCSI protocol endpoints for I/O communication from the host
to the storage system. The protocol endpoints provide access points for ESXi host communication with
the storage system.
The Software Package provides management software including Unisphere Element Manager; Unisphere
Central, a consolidated dashboard and alerting software; thin provisioning; Proactive Assist for configuring
remote support, online chat, and opening service requests; Quality of Service for block storage; and EMC
Storage Analytics Adapter for VMware vRealize. Unified protocols include file, block and VVols. Local
data protection is provided by local point-in-time copies, anti-virus software, and an optional controller-
based encryption which is provided by a separate license.
It also provides remote data protection with Native Asynchronous Block and File Replication, and Native
Synchronous Block Replication, and performance optimization with FAST Cache and FAST VP.
The UnityVSA is a Unified Array, providing Block (iSCSI), File (NFS and SMB/CIFS), and VVols in
one integrated platform. Easy configuration and management of the storage array is possible using the
same HTML5 Unisphere interface as Unity purpose-built storage arrays. A consistent feature set and data
services such as Unified Snapshots and Replication are available with the UnityVSA.
Benefits of this approach include a low acquisition cost, hardware consolidation, multi-tenant storage
instances, remote/branch office storage environments, and staging and test environments that are easier
to build, maintain, and destroy. UnityVSA can coexist with and provide storage to applications running on the
same server hardware, enabling customers to implement an affordable software-defined solution. Multiple
VSA instances can be deployed on a single server.
It is available as a free 4TB capacity Community Edition and a Professional Edition (10TB, 25TB and
50TB) subscription offering with EMC support. The Community Edition is intended for test/dev
environments and the Professional Edition for production environments.
For users who initially purchase a 10TB or 25TB subscription and require additional capacity, upgrades to
larger capacity tiers are supported.
Capacity upgrades and license renewals can be installed non-disruptively. When a capacity upgrade is
installed, the limits on the system also scale accordingly.
Traditionally, the best practices for optimizing storage performance involved manual, resource intensive
processes. Unity allows SQL administrators to leverage an easy-to-use and potentially hands-off
mechanism for optimizing the performance of the most demanding applications. Automating the
movement of data between storage tiers saves both time and resources. Unity eliminates the need to
spend hours manually monitoring and analyzing data to determine a storage strategy, then maintaining,
relocating and migrating LUNs (Unity logical volumes) to the appropriate storage tiers.
The common business requirement in SAP environments is reducing TCO while improving performance
and service level delivery. Frequently, the responsiveness of sensitive SAP applications deteriorates
over time due to increased data volumes, unbalanced data stores, and changing business requirements.
By using Unity with block data, SAP deployments can gain a significant performance boost without the
need to redesign the applications, adjust the data layouts, or reload significant amounts of data. With
automated sub-LUN level tiering and extended cache, administrators can properly balance data
distribution across the tiers, allowing capacity and performance optimization.
Virtualization management integration allows the VMware administrator or the Microsoft Hyper-V
administrator to extend their familiar management console for Unity related activities.
VMware vStorage APIs for Array Integration (VAAI) for both SAN and NAS connections, allows Unity to be
fully optimized for virtualized environments. EMC Virtual Storage Integrator (VSI) is targeted towards the
VMware administrator. VSI supports Unity provisioning within vCenter, full visibility to physical storage,
and increases management efficiency.
In the Microsoft Windows Server 2012 and Hyper-V 3.0 space, Offloaded Data Transfer (ODX) allows
Unity to be fully optimized for Windows virtual environments. This technology offloads storage-related
functions from the server to the storage system.
EMC Storage Integrator (ESI) for Windows provides the ability to provision block and file storage for
Microsoft Windows or Microsoft SharePoint sites.
Topics include NAS servers, the UFS64 file system, FAST Cache, Data at Rest encryption, antivirus
protection, 3-way NDMP backup, the native local and remote protection solutions, the native File Import
and SANCopy features, and Unity support for file archiving to the cloud with the integration of CTA and
Virtustream.
Unity has a single-enclosure, two storage processor architecture with no concept of designated hardware
for File services. File data is served through the use of virtual file servers known as NAS servers.
A NAS server is required prior to creating file systems. NAS servers are used for NAS protocols only. And
with Unity, the storage pools are shared by all resource types, meaning that file systems, LUNs, and
VMware Virtual Volumes (VVols) can be provisioned out of the same unified pools without the need for a
second-level “file pool.”
Each NAS server is a separate file server. Users on one NAS server cannot access data on another NAS
server. Each NAS server has a separate configuration with independent network interfaces, sharing
protocols, directory services, NDMP backup and Security.
UFS64 is a 64-bit file system architecture used in Unity systems and has a 64TB maximum file system
size as of Unity OE v4.1. All NAS servers and file systems in Unity will use only UFS64. Better
performance is also provided through faster failovers, file shrink and expand, space-efficient snapshots,
and simpler quotas.
The FAST Cache is a large capacity secondary cache that uses SAS Flash 2 drives to improve system
performance by extending the storage system's existing caching capacity. The FAST Cache can scale up
to a larger capacity than the maximum DRAM Cache capacity.
FAST Cache reduces the load on back-end hard drives by identifying when a chunk of data on a LUN is
accessed frequently, and copying it temporarily to FAST Cache. The storage system then services any
subsequent requests for this data faster from the Flash disks that make up the FAST Cache.
Subsets of the storage capacity are copied to the FAST Cache at a 64 KB chunk granularity.
Compression is supported on physical hardware only. For Hybrid arrays, the functionality is only
supported on All Flash pools with no additional licenses required.
Unity software also includes the Fully Automated Storage Tiering for Virtual Pools or FAST VP as a
storage efficiency feature.
FAST VP enables the system to retain the most frequently accessed or important data on fast, high-
performance disks and move the less frequently accessed and less important data to lower-performance,
cost-effective disks.
FAST VP tracks data in a Pool at a granularity of 256 MB – a slice – and ranks slices according to their
level of activity and how recently that activity took place.
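As a conceptual illustration of the slice-ranking idea described above (not Unity's actual relocation algorithm), the short Python sketch below scores hypothetical 256 MB slices by activity and recency and maps the hottest ones to the highest tier. All slice statistics and tier capacities are made up for the example.

```python
# Conceptual sketch only -- not the Unity FAST VP implementation.
# Slices are ranked by a simple "temperature" score and assigned to tiers.
SLICE_SIZE_MB = 256

slices = [
    # (slice_id, io_count, hours_since_last_access) -- hypothetical values
    ("slice-01", 9500, 1),
    ("slice-02", 1200, 30),
    ("slice-03", 4300, 4),
    ("slice-04", 150, 200),
]

def temperature(io_count, hours_since_access):
    """High activity that happened recently ranks highest."""
    return io_count / (1 + hours_since_access)

ranked = sorted(slices, key=lambda s: temperature(s[1], s[2]), reverse=True)

# Assume room for one slice on Flash and one on SAS; the rest land on NL-SAS.
tiers = ["Extreme Performance (Flash)", "Performance (SAS)"]
for i, (slice_id, *_rest) in enumerate(ranked):
    tier = tiers[i] if i < len(tiers) else "Capacity (NL-SAS)"
    print(f"{slice_id} -> {tier}")
```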
Unity supports file system quotas, which enable a storage administrator to track and/or limit
usage of a file system.
Quota limits can be designated for users, or a directory tree. Limits are stored in quota records for each
user and quota tree. Limits are also stored for users within a quota tree.
Unity Quality of Service (QoS) is a feature that limits I/O to Block storage resources: LUNs, snapshots,
and VMFS Datastores. QoS or Host I/O limits can be set on physical or virtual deployments of Unity
systems.
Limits can be set by throughput in IOs per second or Bandwidth defined by Kilobytes or Megabytes per
second, or a combination of both limits. If both thresholds are set, the system limits traffic according to the
threshold that is reached first.
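The “whichever threshold is reached first” behavior can be illustrated with a small sketch. The Python example below is conceptual only (it is not the Unity QoS implementation), and the measured values and limits are hypothetical.

```python
# Conceptual sketch of dual-threshold Host I/O limits -- not the Unity QoS code.
def over_limit(measured_iops, measured_kbps, iops_limit=None, kbps_limit=None):
    """Return True when host I/O should be throttled; both limits are optional."""
    if iops_limit is not None and measured_iops >= iops_limit:
        return True          # throughput threshold reached first
    if kbps_limit is not None and measured_kbps >= kbps_limit:
        return True          # bandwidth threshold reached first
    return False

# Both limits set: traffic is limited as soon as either threshold is reached.
print(over_limit(measured_iops=5200, measured_kbps=80_000,
                 iops_limit=5000, kbps_limit=100_000))   # True (IOPS limit hit first)
```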
D@RE uses hardware embedded in the SAS IO controller chip in all SAS IO modules and embedded in
the Storage Processor instead of hardware embedded in the disk drive as with self-encrypting drives
(SEDs). D@RE supports all drive types. The IO modules encrypt data as it’s sent to the disks.
Since the encryption/decryption functions occur in the SAS controller, it has minimal impact on data
services such as replication, snapshots, etc.
EMC Common AntiVirus Agent (CAVA) provides an antivirus solution to clients using a Unity system. It
uses the industry-standard CIFS protocol in a Microsoft Windows Server domain and supports Windows
clients. The antivirus agent uses third-party antivirus software to identify and eliminate known viruses
before they infect files on the storage system.
The Network Data Management Protocol or NDMP support, enables backup of Unity file data to a tape
library or virtual tape appliance such as Data Domain or Avamar.
Unity supports remote NDMP, also known as 3-way NDMP. Direct attach (2-way) is not supported.
Maximum concurrent streams are: two for UnityVSA, eight for Unity 300-500, and twenty for Unity 600.
The introduction of UFS64 will require a new tape format. The format is named Format N. The previous
generation format for UFS32 is named Format N-1.
The backup module will format the data on tape in different ways based on the type of file system on
which the backup is performed. When backing up data on a UFS64 file system, the data will be written to
the tape in Format N. When backing up data on a UFS32 file system, the data will be written to the tape in
Format N-1.
Unified Snapshots provide point-in-time copies of data. Snapshots are provided for both Block and File
resources. The snap images can be read-only or read/write and can be used for local data protection to
restore the production data to a known point-in-time.
Auto-delete and expiration can be configured so that snapshots are automatically deleted at a specified
time or based on user defined storage consumption thresholds.
Replication is a licensed feature for Unity that enables replication between Unity systems for storage
resources. Replication connections can be asynchronous, synchronous, or both.
Replication is a method that enables data centers to avoid disruptions in operations. In a disaster recovery
scenario, if the source site becomes unavailable, the replicated data will still be available for access from
the remote site.
Replication uses a Recovery Point Objective (RPO), an amount of data measured in units of time, to
schedule automatic data synchronization between the source and remote systems. The remote data will
be consistent to within the configured RPO value.
Replication is also beneficial for keeping data available during planned downtime scenarios. If a
production site has to be brought down for maintenance or testing, the replica data can be made available
for access from the remote site.
Other Data Protection solutions supported by Unity are RecoverPoint, RecoverPoint for VMs, and
AppSync.
RecoverPoint provides Block replication functionality across all RecoverPoint supported platforms and can
be used for VNX1/VNX2 migration or replication to Unity. RecoverPoint for Virtual Machines provides VM-
granular protection of Virtual Machines and associated data.
AppSync is policy-driven, self-service software for managing copies of various applications and databases
running on various Dell EMC arrays. Unity uses AppSync to create consistent snapshots.
File Import is a Unity native file migration feature that supports the migration of NFS configured VDMs and
their File Systems from legacy VNX systems. The use case of the feature is for replacing VNX1 and VNX2
systems with Unity. The feature migrates a VNX NFSv3 VDM to a Unity NAS Server.
File Import is supported on all physical Unity models and UnityVSA. The operation is transparent to host
I/O with little or no disruption to client access of data.
SANCopy import is a Unity native block migration feature. It migrates block storage resources (LUNs and
Consistency Groups) from a VNX1/VNX2 system to a Unity system. The use case of the feature is also for
replacing the VNX legacy systems with Unity.
SANCopy must be enabled on the system. The SANCopy enabler is contained in the VNX Installation
Toolbox on the EMC Support site if needed.
The import of block data is configured and controlled from the Unity system using the SANCopy engine
running on the VNX. Then the data is migrated to Unity using a SANCopy push from the VNX.
The Native SANCopy Import feature is managed from Unisphere, UEMCLI commands and REST API
calls.
Unity OE supports the Distributed Hierarchical Storage Management (DHSM) functionality using the
Cloud Tiering Appliance (CTA) release 11 as a policy manager and Virtustream cloud storage as a
destination.
Unity storage arrays are used as source file servers, allowing the tiering of file data to cloud storage
based on policies. Microsoft Azure and Amazon S3 are also supported as cloud destinations.
The only supported tiering destinations are Virtustream, Microsoft Azure, or Amazon S3.
Topics include the administrative and centralized management interfaces, the integration with VMware
tools, VASA/VVols support, and integration with Windows environments with ESI and SCVMM.
EMC Unisphere for Unity provides a flexible, integrated experience for managing Unity storage systems.
Unisphere for Unity supports a wide range of browsers including Google Chrome, Internet Explorer,
Mozilla Firefox, and Apple Safari. It is HTML5 based so it does not require browser plugins.
The Unisphere wizards help the user to provision and manage the storage while automatically
implementing best practices for the configuration.
Unisphere contains a complete ecosystem, the highlight of which is Proactive Assist with call home, and
Cloud-based management. Proactive Assist is a self-service portal with a robust on-line set of community
activities (live chat, videos, documentation, and more), direct parts ordering, system views, and dial home
assistance.
In Unisphere, the user can also analyze system performance by viewing and interacting with line charts
that display historical and real-time performance metrics.
UEMCLI is a management tool that provides a way for users to manage a system from a command prompt
on a Microsoft Windows or UNIX/Linux platform. UEMCLI is intended for advanced users who want to use
commands in scripts for automating routine tasks, such as provisioning storage or scheduling snapshots
to protect stored data.
Unity also allows programs to easily integrate with the storage system using REST API. Representational
State Transfer (REST) is a popular web Application Programming Interface (API) design model that uses
simple HTTP calls to create, read, update, and delete information on a server.
The REST API allows interaction with Unisphere management functionality, including system settings,
host and remote system connections, network settings, storage management, and data protection.
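As a minimal sketch of the create/read/update/delete model described above, the Python example below reads basic system information over the REST API using the third-party requests library. The management IP and credentials are hypothetical, and the endpoint and field names should be confirmed against the Unisphere Management REST API Programmer's Guide.

```python
# Minimal sketch of a "read" call against the Unisphere REST API.
# Endpoint and field names are believed typical but should be verified.
import requests

UNITY_IP = "10.126.91.11"                 # hypothetical management IP
USER, PASSWORD = "admin", "MyPassword!"   # hypothetical credentials

session = requests.Session()
session.auth = (USER, PASSWORD)
session.headers.update({"X-EMC-REST-CLIENT": "true"})  # header expected by Unisphere REST
session.verify = False                    # lab only: skip certificate validation

# GET basic system information (a read operation).
resp = session.get(
    f"https://{UNITY_IP}/api/types/basicSystemInfo/instances",
    params={"fields": "name,model,softwareVersion"},
)
resp.raise_for_status()
for entry in resp.json().get("entries", []):
    print(entry["content"])
```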
The Unisphere Central management tool can be used for federated management of multiple Unity
systems, purpose built or VSA.
Unisphere Central is a network application that remotely monitors the status, activity, and resources of
multiple supported EMC storage systems from a central location. The application allows administrators to
take a look at their storage environment from a single interface and rapidly access the systems that need
attention or maintenance.
CloudIQ is a proactive management system that contains all of the capabilities needed to resolve
problems quickly. Once a user is signed up for the CloudIQ ecosystem, they can:
• Live chat with an EMC support rep and ask questions of other CloudIQ community members.
CloudIQ requires that Unity is configured with ESRS and a valid support account in order to work.
Unity supports the creation of storage containers for VMware Virtual Volumes (VVols). VVols are storage objects provisioned
automatically by the system to store virtual machine data. VVols are stored in storage containers which are created by the storage
administrator. The storage containers have a 1:1 mapping with VVol datastores.
VVols are based on the vStorage API for Storage Awareness (VASA) 2.0 protocol and require vSphere 6.0 in order to run.
Unity has EMC tools to enhance its integration with VMware, plus it works closely with existing VMware features.
Storage Analytics for Unity provides automated operations management using patented analytics and an integrated approach to
performance, capacity and configuration management. SRA is a storage replication adapter that extends the disaster-restart
management functionality of VMware vCenter Site Recovery Manager (SRM) to the Unity storage environment.
Key VMware features which Unity seamlessly integrates are VMware vSphere Storage APIs Array Integration for SAN, VMware
vSphere Storage APIs Array Integration for NAS, Virtual Storage Integrator, and VMware vCenter Site Recovery Manager.
The Unity array is designed to integrate with the Microsoft Windows Server and its System Center Virtual Machine Manager
(SCVMM), and it provides APIs to support the storage health monitoring feature. Microsoft Windows Server 2016 and SCVMM use
the SMI-S API to manage external storage.
Health monitoring requires storage vendors to deliver lifecycle indication of alerts on specific storage objects, including: Array,
LUN/Disk, LUN/Pool Capacity, File System, File Share, File System Capacity, Fan/Power supply, LUN/LUN group replication as
defined in the SMI-S* Indication and Health Profiles.
Our platforms also offer great free tools for Microsoft centric management, such as the EMC Storage Integrator for Windows or ESI.
In a typical environment, a storage admin creates a LUN, a Database admin creates a database and then a SharePoint admin
creates the Web application on the database.
If all the admins execute their respective tasks in a sequential order it would take about 90 minutes. In the real world, this takes a
lot longer as you are waiting for one or more admins to get to this work order.
With ESI, however, all you need is a storage pool, and you can accomplish the same task in about 20 minutes. ESI also includes
System Center integrations such as System Center Operations Manager (SCOM), and SCVMM.
* SMI-S (Storage Management Initiative Specification) is a standard developed by the Storage Network Industry Association (SNIA)
that is intended to facilitate the management of storage devices from multiple vendors in storage area networks (SANs).
For adding storage capacity, Unity uses another hardware unit called a Disk Array Enclosure [DAE].
Shown here are the two different DAEs. One DAE has a disk drive layout format that holds up to twenty
five 2.5 inch disk drives. The other DAE format holds up to fifteen 3.5 inch disk drives. The DAE disks are
accessed from the front and the power and connectivity cabling are accessed from the rear.
DPEs and DAEs have redundant power and cooling for operational resilience if a hardware component
fails. The DPEs and DAEs and the components they house have power and status LEDs to quickly
identify any hardware component failures.
Each Unity Storage Processor includes embedded or onboard devices and cabling ports that facilitate
connectivity to Unity. These embedded devices provide Ethernet connectivity for managing and servicing
the system. Additional Ethernet ports are available to provide data access to file clients and block hosts.
The SP also has embedded Converged Network Adaptors [CNA] that can be configured for either Fibre
Channel or Ethernet connectivity for data access. Embedded Serial Attached SCSI [SAS] ports are also available
to provide connectivity to additional DAEs. Each SP also contains two expansion slots for installing
expansion modules to provide additional connectivity options.
The components inside each SP are viewable when the SP cover is removed. The SP CPU, DIMMs,
and a non-volatile M.2 SSD are shown here. Redundant component cooling is provided by five dual fan
packs. A Battery Backup Unit is included to provide DPE components power if the main house power fails.
A single Battery Backup Unit is capable of powering both Storage Processors’ CPUs, DIMMs, M.2 SSD,
and the first four DPE disks. The system will protect any in-flight data during house power events by
de-staging it to the non-volatile M.2 SSD and then performing an orderly shutdown of the system.
The 4-port 12Gb SAS I/O module is available for installation in the Unity 500 and 600 hybrid and All Flash
models to add more SAS buses to the system.
The 4-port 16Gb/s Fibre Channel module is available to support additional Fibre Channel connected hosts
for block storage resources on Unity.
There are four different types of Ethernet I/O modules available to support gigabit and 10 gigabit Ethernet
speeds on copper or optical media. The Ethernet modules support simultaneous iSCSI block access to
data and NAS file access to data.
The A and B side Link Control Cards provide SAS ports to connect the disks within the DAE to the A and
B side SAS buses. Should a SAS bus or LCC fail, the DAE disk can be accessed over its peer SAS bus.
The LCCs also have SAS ports to extend SAS connectivity to additional DAEs.
There are three types of rotating drives available for hybrid Unity models: 10K SAS drives, 15K SAS
drives, and Near-Line SAS drives. These spinning drives are available in a number of different capacities.
There are three different types of solid state drives available for Unity hybrid and All Flash models: SAS
Flash 2, SAS Flash 3, and SAS Flash 4 drives. They are available in a number of different capacities as
shown.
The NL-SAS drives are available only in the 3.5 inch form factor. All other drives are available in the 2.5
inch form factor. A 3.5 inch carrier is available for fitting a 2.5 inch form factor drive into a DAE or DPE that
supports 3.5 inch drives.
The module will also discuss how to deploy and initialize a Unity Virtual Storage Appliance (UnityVSA).
Follow the guide to install any optional disk array enclosures (DAEs) and cable them to the DPE. Next,
attach the Storage Processor (SP) management ports to the end user’s network and power the system up.
Let’s take a look at the cabling and power up sequence.
If the end user has an enabled DHCP environment on their network, the Unity storage system will
automatically acquire an IP address when you power it up and no further action is required. You can open
a web browser and point it to the DHCP acquired address to open Unisphere. If the end user requires a
static IP address for the Unity storage system, then you will need to download and install the Unity
Connection Utility or use the InitCLI tool. We will discuss the InitCLI tool a little later.
The Unity Connection Utility software is a program used to discover and configure Unity systems with IP
networking and hostname information. The Connection Utility is available for download at
support.emc.com or the Unity All-Flash & Hybrid Info Hub site. It must be installed on a Windows host.
The installer package includes the 32-bit JAVA Platform Standard Edition 1.8 Update 60. Note that you
may need to stop host firewall services to be able to discover storage systems using the Connection
Utility.
Before running the Connection Utility, the Unity Family Configuration Worksheet should be downloaded
and completed. The Connection Utility requires the Product Serial Number Tag (PSNT), a user defined IP
address, subnet mask, and gateway. These can all be found on a properly filled out Configuration
Worksheet. The PSNT can be found in the packing materials that came with the system or on the tag on
the front of the DPE.
Unisphere will launch the Initial Configuration wizard to walk the user through the initial setup of Unisphere
and prepare the system for use. After the initial setup is concluded, the wizard can be launched at any
time from the settings window.
The wizard sequence is displayed here. You will run through the wizard during the labs. As mentioned
earlier, the Unity Family Configuration Worksheet should be filled out prior to using the Initial Configuration
wizard. The user will also need to create an EMC Online Support account for support.emc.com if he or
she hasn’t already done so. This information should be recorded on the Configuration Worksheet.
The InitCLI tool is available on support.emc.com. The tool must be run from a Windows host on the same
subnet as the Unity storage system. There are two parameters, initcli discover and initcli configure.
The discover parameter searches the network for available systems to be configured and will list the
information about the system including its serial number (ID or PSNT) as shown in the first example with
the red box. The output qualifier specifies the output format as NVP (Name Value Pair) or CSV (Comma
Separated Value). In this case the CSV format was chosen.
The discovered Unity system can then be configured by using the configure parameter and specifying the
serial number of the system or PSNT, and the IP address, subnet mask, gateway, and a friendly name. In
this case, the PSNT is also used for the friendly name.
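For administrators who want to script this flow, the sketch below wraps a discover-then-configure sequence in Python. The exact InitCLI command-line switches shown are placeholders and must be checked against the tool's own help output; only the overall flow (discover, parse the CSV output, configure by PSNT) follows the description above.

```python
# Sketch of scripting InitCLI from a Windows host. The command-line syntax in
# DISCOVER_CMD / configure_system() is a placeholder -- verify the real
# parameters with the tool's help; only the discover -> configure flow is real.
import csv
import io
import subprocess

DISCOVER_CMD = ["initcli", "discover", "-output", "csv"]   # placeholder syntax

def discover_systems():
    """Run the discover step and return one dict per unconfigured system."""
    output = subprocess.run(DISCOVER_CMD, capture_output=True, text=True,
                            check=True).stdout
    return list(csv.DictReader(io.StringIO(output)))

def configure_system(psnt, ip, netmask, gateway):
    """Assign network settings to a discovered system; PSNT doubles as the friendly name."""
    cmd = ["initcli", "configure",                 # placeholder syntax
           "-serialNumber", psnt, "-ip", ip,
           "-netmask", netmask, "-gateway", gateway,
           "-friendlyName", psnt]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    for system in discover_systems():
        print(system)
```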
Reliable block storage is provided to the UnityVSA via a RAID card associated with the hypervisor and
host hardware (direct attached storage) or via an external RAID protected storage array (EMC arrays or
third-party storage arrays).
RAID protection is provided at the physical level – UnityVSA adds no RAID protection on top of the virtual
disks. Storage is provisioned to the UnityVSA via Fibre Channel/iSCSI (block) or NFS (file).
VMware vSphere datastores are built from file systems (NFS) or LUNs (VMFS) provisioned by the
backend.
vDisks for the UnityVSA are created from the provisioned ESXi datastores.
UnityVSA storage pools can then be provisioned from the vDisks. Unity storage resources (block, file, and
VMware datastores) can be provisioned to hosts using the storage pools. UnityVSA provisions block
storage to hosts using only the iSCSI protocol.
It is a good idea to check that the ESXi server used for OVA deployment has network access to the client
system running Unisphere and to the client systems that will perform I/O operations to the UnityVSA.
From the vSphere Web Client Deploy OVF Template wizard, locate the OVA file on the local machine
and follow the next steps to deploy the virtual appliance.
Please note that the UnityVSA MUST be powered on prior to making any edits to the UnityVSA virtual
machine settings.
Changes to the physical configuration of a UnityVSA VM such as adding or removing network interfaces or
modifying the cache size are not supported. The only accepted modification of the physical configuration
of a UnityVSA VM is the addition of virtual disks to store user data.
The initialization is now complete and you can move on to creating vDisks for the UnityVSA.
A static IP address can be assigned to the management interface of the UnityVSA when the OVA package
is deployed using the OVF Template wizard in vSphere. Also, from the vSphere client interface it is possible
to run the svc_initial_config command from the Console tab to initialize the management port.
Alternatively, the Unity Connection Utility or the InitCLI applications can be run from a Windows host to
discover and configure the network settings for the management interface.
After the initialization completes, at least one virtual disk must be added for user data prior to performing
the initial configuration in Unisphere. More vDisks can be added when additional storage for user data is
needed. Up to a maximum of 16 user vDisks are supported.
From Hosts and Clusters, expand the ESXi server where the UnityVSA was deployed, right click the
UnityVSA and select Edit Settings from the pull down menu.
From the bottom of the Virtual Hardware tab, use the New Device menu to select New Hard Disk. Enter
the appropriate settings on the screen.
If more than 12 user vDisks are added, another SCSI controller will be automatically added to
accommodate the additional disks.
Ensure the SCSI controller type is "paravirtual". Other types are not supported and can cause boot
problems and unrecognizable disks. vSphere can be used to view and change the controller type if
needed.
To add more disks select New Hard Disk from the New Device drop-down list and click Add.
To finalize the Edit Settings and reconfigure the virtual machine, click the OK button.
Unisphere can be launched by simply entering the IP address of the Unity management port in the URL
address field of a supported web browser.
Unity provides two administrative user authentication scopes. Users can log in with credentials maintained
through either local user accounts or domain-mapped user accounts. There is no Global authentication
scope as the concept of storage domain does not exist for the Unity systems.
Local user accounts can be created and managed through the User Management Settings page in the
Unisphere GUI. These user accounts are associated with distinct roles and provide username and
password authentication only for the system on which they were created. They do not allow management
of multiple systems unless identical credentials are created on each system.
The Lightweight Directory Access Protocol (LDAP) is an application protocol for querying directory
services running on TCP/IP networks. LDAP provides central management for network authorization
operations by helping to centralize user and group management across the network. With this method, the
accounts used to access the system are domain-mapped user accounts (members of an LDAP domain
group). It uses the username and password specified on an LDAP domain server. Integrating the system
into an existing LDAP environment provides a way to control user and user group access to the system
through Unisphere CLI or Unisphere.
The user authentication and system management operations are performed over the network using
industry-standard protocols such as Secure Socket Layer (SSL) and Secure Shell (SSH). GUI
administration can be performed using a supported web browser, while CLI administration is done with
the Unisphere CLI client software.
For deployments where the Unity will be managed by more than one administrator, multiple unique
administrative accounts are allowed. Different administrative roles can be defined for those accounts to
distribute administrative tasks between users.
• Management accounts: Administrator privileges for resetting default passwords, configuring system
settings, creating user accounts, and allocating storage.
• Service account: Performs specialized service functions such as collecting system service information,
restarting the management software, and resetting the system to factory defaults.
During the initial configuration process, you are required to change the passwords for the default admin
and service accounts.
The Unisphere interface has three main areas that are used for navigation and visualization of the content:
the navigation pane, a main page, and the sub-menus.
The navigation pane on the left has the Unisphere options for provisioning storage, providing host access,
protecting the data, and monitoring the system operation.
The main page is where the pertinent information about options from the navigation pane and a particular
submenu is displayed. The page also shows the available actions that can be performed for the selected
object. The selectable items will vary depending on the selection. It could be information retrieved from the
system, or configuration options for storage provisioning, host access, and data protection. In this example
the page shows the System View content.
A sub-menu with different tabs (links) on the top of the main page provides additional options for the
selected item from the navigation pane.
There is also a top menu in the right corner with links for the system alarms, job notifications, help menu,
and the configuration of Unisphere preferences and global settings.
View blocks can be added to a dashboard. These view blocks can be used to view a summary of system
storage usage, monitor system alerts, view health of storage and system resources, and provide graphs of
system performance at a high-level.
To add view blocks to the selected Dashboard, the user must open the sub-menu on the top, choose
customize, then select the desired block and click the Add View Block button.
The Unisphere CLI client can be downloaded from the support web site and installed on a Microsoft
Windows or UNIX/Linux computer.
Unisphere CLI supports provisioning and management of network block and file-based storage. The
application is intended for advanced users who want to use commands in scripts for automating routine
tasks. The routine tasks include:
• Managing users
• Provisioning storage
• Protecting data
• Storage types
In the example the uemcli command accesses the Unity system using the management port with IP
address 10.126.91.11 and logs into the system as the local user admin. The system certificate is displayed
and the user has the choice to accept or reject it. If the certificate is accepted,
the command retrieves the array general settings and outputs its details on the screen.
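A scripted version of this example might look like the Python sketch below, which simply invokes uemcli with the destination, user, and password switches and the general settings object path. Treat the object path and switches as assumptions to be confirmed in the Unisphere Command Line Interface User Guide; the password is hypothetical.

```python
# Sketch that reproduces the example above by invoking uemcli from Python.
# The -d/-u/-p switches and the /sys/general show path are assumed typical
# uemcli usage; confirm them in the Unisphere CLI User Guide.
import subprocess

cmd = [
    "uemcli",
    "-d", "10.126.91.11",     # destination: the Unity management IP
    "-u", "Local/admin",      # local user account
    "-p", "MyPassword!",      # hypothetical password
    "/sys/general", "show",   # retrieve the array general settings
]
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout)
```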
For more information refer to the latest Unisphere Management REST API Programmer’s Guide
available on the EMC Online Support website (http://support.emc.com).
The Unisphere Central server is deployed as a virtual machine (VM) built from an OVF template in a
VMware environment. The Unisphere Central OVF template can be downloaded from EMC Online
Support. When deploying the OVF template an IP address for the Unisphere Central can be assigned
within vCenter or in the console of the VM on an ESX/ESXi host.
The Unisphere Central server obtains aggregated status, alerts, host details, performance and capacity
metrics, and storage usage information from the Unity, VNX, VNXe and CX4 systems in the environment.
Unity collects several metrics at various predetermined time intervals and sends the information to ESRS.
(ESRS must be configured and functional in order for CloudIQ to work). ESRS then sends the data to the
Cloud where administrators can access the data through any number of supported browsers.
System metrics include: Alerts, Performance, Capacity, Configuration, and Data Collects.
ESRS, or EMC Secure Remote Services, is covered in more detail in the Support Configuration lesson.
The Dashboard Health Score widget provides a view of the overall health score of a single system or all
monitored systems.
The Alerts trending widget aggregates alerts across all CloudIQ monitored systems. Users can drill down
and select items in a specific time frame or select a particular severity level. Details can be found on
performance, capacity and configuration, and include alerts and pool aggregate views.
The Pools widget (Running out of space) shows a summary of how many pools will reach capacity in a
given period of time (week, month, and quarter). This option, along with the Pools (Most available space)
option, allows storage administrators to proactively move files to a pool with more space to prevent a pool
from filling up to capacity.
The Alerts Trend window allows administrators to view alerts in a given period of time by selecting the
time frame that the display shows. In the screen, the trend is displayed for 1 day; select 1 month to
display the trends over that period of time.
The two Pools windows will show the capacity of each pool and if a particular pool is running out of space.
We can clearly see in the example that pool2 has the most capacity of all monitored pools.
Your score can help you spot where your most severe health issues are based on five core factors:
• Configuration
• System Health
• Performance
• Capacity
• Data Protection
The area with the highest risk to your system's health will hurt your score until actions are taken towards
remediation.
Storage administrators can also view an aggregated list of pools, LUNs, file systems, and hosts for all
monitored systems.
For each category, CloudIQ runs a check against a known set of rules and makes a determination if a
particular resource has issues. The health score engine creates a number of impact points and those
impact points contribute to the overall health score of the system.
Categories:
• Configuration: Hosts - non-HA, Drive Issues: Faults, weighted by use (Hot Spare, RAID 6, RAID 5)
• Data Protection: RPOs not being met, last snap not taken
Systems (not categories) are given a score with 100 being the top score and 0 being the lowest. CloudIQ
services running in the background collect data on each of the five categories and determine its impact
point on the system based on a set of rules.
As shown in the slide, system VV-D9094-spa has a score of 60 (occupies 60% of the circle). Its color is
red indicating a system needs attention. The resource itself, the Performance icon, also shows a score of -
40 and is also red. Total scores are based on the number of impact points across the five categories. The
calculation is then 100 - 40 = 60.
For system BC-D1165-spa, the health score is at 70% yellow. Note there are two categories that have
impact points, the Configuration icon at -20 and the Performance icon at -30. The calculation for the health
score only takes into consideration the -30 (Performance) since it has the highest number of impact points
(meaning the customer should look at and fix these first). The total is not based on the cumulative sum of
points (i.e., -50).
All other icons show a status of green which indicates no issues. Total scores are based on the number of
impact points across the five categories.
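The arithmetic described above can be summarized in a short sketch: the score is 100 plus the single worst (most negative) category impact, not the cumulative sum. The Python example below reproduces the two slide examples; the category names follow the five core factors.

```python
# Worked sketch of the health-score calculation described above.
def health_score(impacts):
    """impacts: dict of category -> impact points (0 or negative)."""
    worst = min(impacts.values(), default=0)   # single most negative impact
    return 100 + worst                         # e.g. 100 + (-40) = 60

vv_d9094 = {"Configuration": 0, "System Health": 0,
            "Performance": -40, "Capacity": 0, "Data Protection": 0}
bc_d1165 = {"Configuration": -20, "System Health": 0,
            "Performance": -30, "Capacity": 0, "Data Protection": 0}

print(health_score(vv_d9094))   # 60
print(health_score(bc_d1165))   # 70 (only the -30 counts, not the sum of -50)
```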
Users can also select one of the sub menus, such as Pools, Storage, Drives, or Hosts.
Selecting the object displays additional details. In the example, Storage has been selected and details
about the types of storage (file system, LUNs) are visible.
In the example there are three pools shown in blue text; selecting one of the pools will allow users to view
the specifics for the pools.
The Properties window displays detailed information about an individual pool. Users can determine the
status of the pool and the system the pool resides on, as well as the state of FAST Cache or FAST VP.
To view additional details on pool objects, select either the Storage or Drives option. The Storage option is
selected in the example. We can see the name, type of object and additional details on the total size,
amount of space used and allocated to the object and in this case, since a file system is shown, the NAS
server on which the file resides.
Capacity displays the amount of capacity used and free in the pool, as well as the time to full, the amount
of capacity used by the storage objects on the system, and historical capacity trends. The section below
provides additional details.
Performance will allow users to view the top performance storage objects and displays graphs on IOPS,
bandwidth, and backend IOPS.
In the Settings configuration window it is possible to:
• Monitor installed licenses
• Manage users that can access the system
• Configure the network environment
• Allow monitoring of the system by Unisphere Central and the logging of system events to a remote log server
• Start and pause FAST suite feature operations
• Register support credentials and enable ESRS
• Create IP routes
• Enable CHAP authentication for iSCSI operations
• Configure email and SNMP alerts
The first screen of the Settings window allows the management of licenses for the available features. A
product license can be selected from the list to have its description displayed. To unlock features, license
keys must be installed.
To create a user account you must open the Settings window, and then select Users and
Groups > User Management.
Then select the type of user or group to add: a local user or an LDAP user or group. When users log in to
Unisphere with an LDAP account, they must specify their username in the following format:
domain/username.
Define the role for the new user (explained on the next slide).
Then verify the information on the summary page and click Finish to commit the changes.
Type the base distinguished name (DN) of the root of the LDAP directory tree. A DN is a sequence of
relative distinguished names connected by commas, which describes a fully qualified path to an entry.
Type the password the system will use for authentication with the LDAP server.
If LDAP over SSL (LDAPS) protocol is used, then the default communication port is set to 636 and the
system will request that an LDAP trust certificate is uploaded to the storage system.
For secure communication between the two entities (storage system and LDAP server), one entity must
accept the certificate from the other. When the certificate of the LDAP server is accepted or validated for
the storage system, the certificate is placed in the storage system’s certificate store and is marked as
trusted. So the subsequent connections to the LDAP server are trusted by the storage system.
The user must follow the LDAP Server operating environment instructions to learn how to retrieve the
LDAP SSL certificate that will be imported.
Click the Advanced link to configure user and group settings parameters, then click OK to save the
changes. Then click Apply to commit the changes. Login operations are authenticated by the
LDAP domain server.
Unity supports two methods for configuring system time: Manual settings or NTP synchronization.
The NTP synchronization method allows the Unity system to connect to a Network Time Protocol (NTP)
server in order to synchronize the system clock with other applications and services on the network. Some
applications will not operate correctly if the Unity system clock is not synchronized to the same source as
the host system clock.
Time synchronization is key for Microsoft Windows environments, when the Unity system must be time
synchronized to the same NTP servers as the systems accessing the Unity storage. In Microsoft Windows
environments, this is typically one or more of the Domain Controllers. This configuration is necessary to
connect a NAS server to the Active Directory, to allow SMB and multi-protocol access.
If the network does not have an NTP server, the administrator can set the Unity system time in the
following ways:
• Manually set system time: Enter a specific time for the Unity system in the UTC format. Coordinated
Universal Time (UTC) is the primary time standard by which the world regulates clocks and time.
• Set system time to client time: Synchronize the Unity system time with the host where you are running
Unisphere (in UTC format).
In the System Time and NTP option under the Management section, select Enable NTP
synchronization. Then click Add to enter the IP address of an NTP server. Click the Add button in the
dialog box to close the window and add the NTP server.
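Before adding the server, it can be useful to confirm that the NTP server is reachable from the management network. The sketch below does this from any admin host with Python and the third-party ntplib package; the server address is hypothetical and this check is not part of Unisphere itself.

```python
# Reachability check for an NTP server, run from an admin workstation.
# Assumes the third-party "ntplib" package (pip install ntplib).
import ntplib
from datetime import datetime, timezone

NTP_SERVER = "192.168.1.20"   # hypothetical NTP server to be added in Unisphere

client = ntplib.NTPClient()
response = client.request(NTP_SERVER, version=3, timeout=5)

# Show the server time and the offset between this host's clock and the server.
print("NTP time :", datetime.fromtimestamp(response.tx_time, tz=timezone.utc))
print("Offset   : %.3f seconds" % response.offset)
```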
Select Obtain DNS server address automatically to have Unity automatically retrieve one or more IP
addresses for the DNS server if running Unity on a dynamic network that includes DHCP and DNS
servers.
If not running Unity on a dynamic network, or choosing to provide a static IP address for the DNS server
manually, select Configure DNS server address manually, then click on the Add button and enter the IP
address of a DNS server, and click Add.
CloudIQ also provides proactive serviceability that informs the user about issues before they occur and
provides the user with simple, guided remediation.
EMC Secure Remote Services (ESRS) must be enabled on the storage system to send data to CloudIQ.
• Open the Unisphere Settings window and then navigate to the Management section and select
Centralized Management.
• On the CloudIQ tab, Check the Send data to CloudIQ checkbox to enable the system to provide
status and alert information to CloudIQ.
Once CloudIQ is enabled, it is possible to disable ESRS without changing the CloudIQ setting.
Without ESRS, data is not collected and sent to CloudIQ, but if ESRS is re-enabled, the system
remembers the CloudIQ setting and immediately resumes sending data to CloudIQ.
Expand the Management section of the Settings window, then choose the Unisphere IPs option.
In the Specify Network Configuration page enter or modify the information for the Unity network
configuration. Each IP version has radio buttons to disable the configuration, and select the dynamic or
static configuration.
If you are running the Unity system on a dynamic network, the management IP address can be assigned
automatically, by selecting the proper radio button.
For a static IP address select the radio button to enable it and enter the IP address, the subnet mask and
gateway in the proper fields.
Note: If enabling ESRS support for the Unity system, EMC recommends that a static IP
address is assigned to Unity to have it managed by ESRS.
Before configuring remote logging, a remote host running syslog must be configured to receive logging
messages from the storage system. In many scenarios, a root or administrator account on the receiving
computer can configure the remote syslog server to receive log information from the Unity system by
editing the syslog-ng.conf file on the remote computer. For more information on setting up and running a
remote syslog server, refer to the documentation for the operating system running on the remote
computer.
To configure Remote Logging you must check the Enable logging to a remote host checkbox. Then you
must specify the network address of the host that will receive the log data. The remote host must be
accessible from the Unity system, and security for the log information must be provided through the
network access controls or the system security at the remote host.
Select the component that generates the log messages you want to record.
• Kernel Messages - Messages generated by the operating system kernel. These messages are
specified with the facility code 0 (keyword kern)
• User-Level Messages - This is the default option. Messages generated by random user processes.
These messages are specified with the facility code 1 (keyword user)
• Messages Generated Internally by syslogd - Messages generated internally by the system logging
utility (syslogd). These messages are specified with the facility code 5 (keyword syslog)
Then select the protocol used to transfer log information: UDP or TCP. By default, the system transfers log
information using the UDP protocol on port 514. Click Apply to commit the changes.
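One simple way to confirm the remote syslog host is listening before pointing Unity at it is to send a test message from an admin workstation. The Python sketch below uses the standard library SysLogHandler with the same defaults described above (UDP port 514, user-level facility); the host address is hypothetical.

```python
# Send a test message to the remote syslog host from an admin workstation
# using Python's standard library. The host address is hypothetical.
import logging
from logging.handlers import SysLogHandler

SYSLOG_HOST = "192.168.1.50"   # hypothetical remote syslog server

handler = SysLogHandler(address=(SYSLOG_HOST, 514),          # UDP/514 by default
                        facility=SysLogHandler.LOG_USER)     # facility code 1 (user)
logger = logging.getLogger("unity-syslog-test")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("Test message: remote logging target for Unity is reachable.")
```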
Proxy server configuration allows the exchange of service information for Unity systems that cannot
connect to the internet directly. Once configured, the storage administrator will perform the following
service tasks using the proxy server connection:
- Configure ESRS
- Receive notifications about support contract expiration, technical advisories for known issues, software
and firmware upgrade availability, and Language pack update availability.
To configure Proxy Server the user must open the Settings page and expand the Support Configuration
section. Then the Proxy Server option must be selected and the Proxy server information entered on the
valid fields.
The secure (SOCKS) protocol should be selected for IT environments where HTTP is not allowed. This
option uses port 1080 by default and does not support the delivery of notifications for technical advisories,
software and firmware upgrades.
The non-secure (HTTP) protocol supports all service tasks including upgrade notifications. This option
uses port 3128 by default.
The user must enter the IP address of the Proxy Server and the credentials (user name and password) if
required. The SOCKS protocol requires user authentication.
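Before entering the proxy details in Unisphere, outbound connectivity through the proxy can be checked from an admin workstation. The Python sketch below uses the requests library with an HTTP proxy on the default port 3128 (a SOCKS variant is shown commented out); the proxy address and credentials are hypothetical, and this check runs outside the storage system.

```python
# Check outbound connectivity through the proxy from an admin workstation.
# Proxy address, credentials, and test URL are hypothetical. A SOCKS proxy
# requires the optional "requests[socks]" extra.
import requests

PROXY = "http://proxyuser:[email protected]:3128"      # HTTP proxy, default port 3128
# PROXY = "socks5://proxyuser:[email protected]:1080"  # SOCKS alternative, default port 1080

proxies = {"http": PROXY, "https": PROXY}

resp = requests.get("https://support.emc.com", proxies=proxies, timeout=10)
print("Reached support site through proxy, HTTP status:", resp.status_code)
```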
In addition, support credentials are required to configure EMC Secure Remote Services (ESRS), which
provides EMC Support direct access to the storage system (via HTTPS or SSH) to perform
troubleshooting on the Unity system and resolve issues more quickly.
Enter the username and password on the proper fields and click Apply to commit the changes. The
credentials entered here must be associated with a support account.
ESRS options available with the Unity release include an embedded version that runs on the Unity
storage system or the ESRS Virtual Edition (VE) which is a Gateway version installed as an off-array
Virtual Machine (VM). It can be managed with Unisphere, UEMCLI, and RestAPI.
Software Licensing Central is an EMC-wide service that allows EMC products to send electronic licensing
and usage information to EMC via ESRS VE. This information is visible to both EMC and end users. Unity
systems will automatically send the information on licensed features once per week. This feature is
enabled automatically when remote support is enabled.
The Configure button on the EMC Secure Remote Services page will only be enabled, and launch the
Configure ESRS wizard, if valid EMC Support Credentials and Contact Information were entered in
the respective pages of the Support Configuration section.
Integrated ESRS runs on the Unity storage system and allows only this system to communicate with EMC.
The ESRS software is embedded into the Unity operating environment (OE) of the Unity physical system
as a managed service. The Unity OE is responsible for persisting the configuration and the certificates
needed for ESRS to work.
Centralized ESRS runs on a gateway Virtual Machine (VM). The ESRS Virtual Edition (VE) software is
installed as an off-array VM, which allows multiple storage systems to communicate with EMC. UnityVSA
systems can only be managed by a Centralized ESRS.
Here is the syntax of the uemcli command for changing the integrated ESRS configuration on a physical
system. The command in the example enables ESRS in the Unity system with site ID 234.
Please refer to the EMC Unity Family Unisphere Command Line Interface User Guide for the complete
details on the available ESRS commands.
Alerts are registered on the System Alerts page in Unisphere. The System Alerts page can be accessed
through the link on the top menu bar, from the option under the Events section of the navigation pane,
and from the notification icons of the view block added to the dashboard.
The view block on the dashboard shows these alerts categorized as critical, error, and warning; the
severity levels are explained on the next slide. Clicking one of the icons opens the Alerts page showing the
records filtered by the chosen severity level.
Shown here is a table providing an explanation about the alert severity levels from least to most severe.
Logging levels are not configurable.
- Select the alert from the list of records on the Alerts page.
Details about the selected alert record will be displayed in the right pane. The information includes the time
the event was logged, the severity level, the alert message, a description of the event, the acknowledgement
flag, the component that was affected by the event, and the current status of that component.
In Unisphere, open the Settings configuration window and select the Alerts section. On the Specify Email
Alerts and SMTP Configuration section, click the Add button. The Add a New Email window will open;
enter the email address that will receive the notification messages and click OK to save it.
Select from the drop-down list the severity level of the alert notifications to be sent.
Then type the IP address of the Simple Mail Transport Protocol (SMTP) server required to send emails.
Click the Send Test Email to verify that the SMTP server and destination email addresses are valid.
On the Manage SNMP Alerts page click on the Add button to add the SNMP trap destination target. The
SNMP target window will open - enter the network address (host name or IP address).
Type the user name to authenticate, and select the authentication protocol used by traps. You can only
specify the privacy protocol used to encode trap messages when you edit an existing destination. Click OK
to save the SNMP target – the new entry will be displayed in the list.
Select from the drop-down list the severity level of the alert notifications to be sent.
Click Send Test SNMP Trap to verify that the SNMP configuration is valid.
LUNs and Consistency Groups provide generic block-level storage to hosts and applications that use the
Fibre Channel (FC) or iSCSI protocol to access storage in the form of virtual disks. A LUN (Logical Unit) is a
single element of storage, while a Consistency Group is a container with one or more LUNs.
File Systems and shares provide network access to NAS clients in Windows and Linux/UNIX
environments. Windows environments use the SMB/CIFS protocol for file sharing, Microsoft Active
Directory for authentication, and Windows directory access for folder permissions. Linux/UNIX
environments use the NFS protocol for file sharing and POSIX access control lists for folder permissions.
VMware datastores provide storage for VMware virtual machines and are accessible through the FC or
iSCSI protocols (VMFS) and the NFS protocol.
Another category of supported VMware datastores is the VVol (Block) and VVol (File) datastores. These
storage containers store virtual volumes (VVols), which are VMware objects that correspond to a
Virtual Machine (VM) disk, its snapshots, and its clones. VVol (File) datastores use NAS protocol endpoints
and VVol (Block) datastores use SCSI protocol endpoints for I/O communication from the host to the storage
system. The protocol endpoints provide access points for ESXi host communication to the storage
system.
Storage pools provide optimized storage for a particular set of applications or conditions. The storage pool
configuration defines the types and capacities of the disks in the pool. A user can define the RAID
configuration (RAID types and stripe widths) when selecting a tier to build a storage pool.
Pools can be heterogeneous (made up of more than one type of drive) or homogeneous (composed of
only one type of drive).
In a homogeneous pool, only one disk type (flash, SAS, or NL-SAS) is selected during pool creation.
If the FAST VP license is installed, and there are multiple disk types on the system, you can define
multiple tiers for a storage pool. There can be a maximum of three disk types in a heterogeneous pool.
Each tier can be associated with a different RAID type. Flash, SAS, and NL-SAS disks provide the Extreme
Performance, Performance, and Capacity tiers on Unity systems. Note, however, that SAS Flash 3
drives cannot be part of a heterogeneous pool.
In the wizard window, enter the pool name, and optionally the pool description. Then click Next.
The wizard will display the available storage tiers. The user can select the tier and change the RAID
configuration for the selected tier, and choose whether the pool will use FAST Cache.
In the next step, the user can then select the number of drives from the selected tier to add to the pool.
Also, the user can create and associate a Capability Profile with the pool being created. A capability profile is
a set of storage capabilities for a VVol datastore. This feature will be described when discussing the
management of VMware storage.
The pool configuration can be reviewed from the Summary page. The user can go back and change any
of the selections made for the pool, or click Finish to start the creation job.
The results page will show the status of the job. A green check mark with a 100% status indicates a
successful completion of the task.
On the General tab it is possible to change the pool name and description.
The Disks tab displays the characteristics of the disks in the pool.
The Capacity option on the Usage tab shows information about storage pool allocation and use including
(as shown here):
• Total amount of space allocated to existing storage resources and metadata. This value does not
include the space used for snapshots.
• Free Space, which is the amount of pool space that is available for provisioning storage resources and
snapshots.
• Subscribed capacity, which is the percentage of the pool's total space requested by its associated
storage resources. When this value is over 100%, the pool is oversubscribed. In this case, the storage
pool can be expanded by adding drives to an existing tier or adding an additional tier.
• Alert threshold, which is the percentage of storage allocation at which Unisphere generates
notifications about the amount of space remaining in the pool. Drag the slider to set the value between
50% and 84%.
• Pool used capacity history
On the FAST VP tab (which appears if FAST VP is licensed), it is possible to view data relocation and tier
information for the pool.
On the Snapshot Settings tab, the user can review and optionally change the properties for snapshot
automatic deletion.
In the Select Storage Tiers step, select the storage tiers for the disks you want to add. If your system is
not licensed for FAST VP, you can only add disks to the existing tier.
If you are adding a new tier to the storage pool, you can select a different RAID configuration for the disks
in the tier. Click the Change link next to the tier name, select the new RAID type (if applicable), and
click OK. When the RAID configuration is complete, click Next.
In the Select Amount of Storage page, select the number and type of disks to add, and click Next.
Verify the information shown on the Summary page, and click Finish to expand the pool.
The iSCSI support allows Block storage access (LUNs, Consistency Groups, and VMware VMFS
datastores) using Initiator paths to each SP. Multiple iSCSI interfaces can be created on one Ethernet
port, and CHAP authentication can be optionally enabled for any host.
Unity systems support a 16 Gb/s Fibre Channel I/O module for block access. Fibre Channel (FC)
support provides the ability to share block storage resources over an FC storage area network (SAN).
Unity automatically creates FC interfaces when the I/O module is available to the Storage Processor (SP).
To add or manage iSCSI interfaces at a later time, under Storage, select Block and iSCSI Interfaces.
The iSCSI interfaces page shows the list of interfaces, the SP and Ethernet ports where they were
created, the network settings and their IQN (iSCSI Qualified Name). Observe the format of the IQNs on
the last column. The format is explained in the slide.
From this page it is possible to create, view and modify, and delete an iSCSI interface.
Select the Ethernet port where the interface will be created, then enter the network address information:
• IP Address: You can specify an IPv4 or IPv6-based address.
• Subnet Mask or Prefix Length: IP address mask or prefix length that identifies the subnet where the
iSCSI target resides.
• Gateway: Enter the Gateway IP address associated with the iSCSI network interface.
[CLICK] The IQN Alias is the alias name associated with the IQN. The IQN and the IQN alias are
associated with the port and not the iSCSI interface.
Both IQN and IQN alias are generated automatically.
The VLAN ID should be set only if the network switch port was configured to support VLAN tagging of
multiple VLAN IDs. Click the Edit button to enter a value between 1 and 4094 to be associated
with the iSCSI interface.
Then click on the OK button to commit the changes and create the new interfaces.
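The same interface could also be created from UEMCLI. This is a sketch under the assumption that the /net/if object and the switch names shown apply to your OE release; the port ID and the addresses are placeholders:
uemcli -d 10.1.1.10 -u Local/admin -p MyPassword! /net/if create -type iscsi -port spa_eth2 -addr 192.168.3.100 -netmask 255.255.255.0 -gateway 192.168.3.1 -vlanId 100
Omit -vlanId if the switch port is not configured for VLAN tagging.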
To see the details about a LUN select the LUN [CLICK] from the list and the details about the LUN will be
displayed on the right-pane.
The other tabs of the LUN properties window allow the user to configure and manage local and remote
protection, and advanced storage features. Advanced storage features will be discussed on another
module.
To see the details about a Consistency Group select it from the list and its details will be displayed on the
right-pane.
The LUNs tab shows the LUNs that are part of the Consistency Group. The properties of each LUN can
be viewed and a LUN can be removed from the group or moved to another pool.
The other tabs of the Consistency Group properties window allow the user to configure and manage local
and remote protection, and advanced storage features. Advanced storage features will be discussed on
another module.
From Unity File, SMB and/or NFS shares are created, and provided to Windows, Linux, and UNIX clients
as a file-based storage resource. Shares within the file system draw from the total storage that is allocated
to the file system.
File storage support includes NDMP backup, virus protection, event log publishing, and file archiving to
cloud storage using CTA as the policy engine.
The Unity components that work together to provision file-level storage include a NAS server, file system,
and shares.
The Unity NAS server is a virtual file server that provides the file resources on the IP network, and to
which NAS clients connect. The NAS server is configured with IP interfaces and other settings used to
export shared directories on various file systems.
The Unity file system is a manageable "container" for file-based storage that is associated with a
specific quantity of storage, a particular file access protocol (SMB/CIFS or NFS), and one or more shares
through which network clients can access shared files or folders.
A Unity share is an exportable access point to file system storage that network clients can use for file-
based storage via the SMB/CIFS or NFS protocols.
NAS servers retrieve data from available disks over the SAS backend, and make it available over the
network via the SMB or NFS protocols.
[CLICK] Before you can provision file system storage over SMB or NFS, a VMware NFS datastore, or a
File VVol datastore, a NAS server that is appropriate for managing the storage type must be configured
and running on the system.
NAS servers can provide multi-protocol access for both UNIX/Linux and Windows clients at the same
time.
To see the details about a NAS server select it from the list and its details will be displayed on the right-
pane.
Enter a name for the NAS server, and select the storage pool that will be used to supply file storage, then
choose the Storage Processor (SP) where you want the server to run. It is also possible to select a Tenant
to associate with the NAS Server. The multi-tenancy feature support will be covered in the Advanced
Storage Features module.
In the next step, select the SP Ethernet port you want to use and specify the IP address, Subnet Mask,
and Gateway. If applicable select a VLAN ID to associate with the NAS server. VLAN ID should be
configured only if the switch port supports VLAN tagging. If you associate a tenant with the NAS server,
you must choose a VLAN ID.
In the Configure Sharing Protocols page choose whether the NAS server supports Windows shares
(SMB, CIFS), Linux/UNIX shares (NFS), or multi-protocol (SMB and NFS shares on the same file system).
If you configure the NAS server to support Windows shares, specify an SMB host name, a Windows
domain, and the user name and password of a Windows domain account with privileges to register the
SMB computer name in the domain.
If you configure the NAS server to support UNIX/Linux shares, the NFSv4 protocol can be enabled if
desired, as well as support for File VVols. Select Next to advance to the next configuration page.
The Unix Directory Service page is only available if UNIX/Linux shares or multi-protocol support are
configured in the NAS Server. A Unix Directory Service (NIS or LDAP) must be used.
The NAS server DNS can be enabled on the next page. For Windows shares enable the DNS service, add
at least one DNS server for the domain and enter its suffix. Click Next to advance to the next step.
Review the NAS server configuration from the Summary page and click Finish to start the creation job.
From the General tab of the properties window you can view the associated pool, SP, and supported
protocols. It is also possible to change the name of the NAS Server.
From the Network tab a user can view the properties of an associated network interface, add and delete
interfaces, change the preferred interface and view and define network routes. From this page it is also
possible to enable the Packet Reflect feature which will be discussed in the Advanced Storage Features
module.
The Naming Services tab allows the user to define the naming services to be used: DNS, LDAP, and/or
NIS.
The Sharing Protocols tab allows the user to manage settings for file system storage access for Windows
shares (SMB/CIFS), using the Active Directory or standalone option, and for Linux/UNIX shares (NFS). The
user can also provide multi-protocol access to the file system if a Unix Directory Service is enabled in the
Naming Services tab.
The other tabs of the NAS Server properties window allow the user to enable NDMP backup, DHSM
support, Event Publishing, antivirus protection, Kerberos authentication, and remote protection.
From the Ethernet Ports page, settings such as link aggregation and link transmission can be verified
and changed.
To display information about a particular Ethernet port, select it from the list and click on the edit link. The
properties window shows details about the port, including the speed and MTU size. The user can change
both these fields if necessary.
The MTU has a default value of 1500 bytes. If you change the value, you must also change all
components of the network path (switch ports and host).
If you want to support jumbo frames, set the MTU size field to 9000 bytes. This setting is only appropriate
in network environments where all components support jumbo frames end-to-end. In virtualized
environments, jumbo frames should be configured within the virtual system, as well.
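On the host side, jumbo frames can be verified and set with standard Linux iproute2 commands; the interface name and target address below are placeholders:
# Set jumbo frames on the iSCSI NIC (placeholder interface name)
ip link set dev eth0 mtu 9000
# Verify the setting
ip link show dev eth0
# Confirm the end-to-end path carries 9000-byte frames without fragmentation (8972 = 9000 minus 28 bytes of IP/ICMP headers)
ping -M do -s 8972 192.168.3.100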
The user can also create a link aggregation, or add the port to or remove it from an existing link aggregation.
This prevents auto-shrink from removing too much space from the file system. Thin-provisioned file
systems are automatically shrunk by the system when certain conditions are met.
In this example, for a file system of 100 GB in size (which represents what the host sees), 6 GB of space
were reserved to the file system.
The General tab of the properties window shows details about file system utilization and free space. The
size of the file system can also be expanded and shrunk from this page, and the file system minimum
allocation size can be changed.
The other tabs of the File System properties window allow the user to configure and manage local and
remote protection, configure File System Quotas and enable the event notifications for the file systems
monitored by Event Publishing service. File System Quotas and the Event Publishing service will be
discussed in the Advanced storage Features module.
A VMware Datastore is a storage resource that provides storage for one or more VMware hosts. The
datastore represents a specific quantity of storage made available from a particular NAS server (File) and
storage pool (Block).
A storage pool must be associated to a Capability Profile in order to enable VMware VVols based storage
provisioning. Capability profiles describe the desired storage characteristics so that a user-selected policy
can be mapped to a set of compatible VVol datastores.
The General tab of the properties window shows details about datastore capacity utilization and free
space. The size of the datastore can also be expanded from this page, and the file system minimum
allocation size can be changed.
The other tabs of the Datastore properties window allow the user to set host access, change the tiering
policy, configure and manage local and remote protection, and configure Host I/O Limits in the case of
VMFS and VVol (Block) datastores. The Host I/O limits feature will be discussed in the Advanced
Storage Features module.
The storage profiles can be defined by service levels, usage tags, and storage properties.
[CLICK] A VM-granular integration facilitates the offloading of data services with support for snapshots, fast
clones, full clones, and reporting on existing VVols and affected virtual machines.
[CLICK] Compatible arrays can communicate with the ESXi server through VASA APIs. Communications
are done via Block (iSCSI and FC) and File (NFS) protocols.
This module shows the tasks performed on the storage system. The tasks performed by the vSphere
administrator will be discussed on the Storage Resources Access module.
First the storage administrator must create the storage pools that will be associated with VMware
Capability profiles.
Capability Profile definitions include thin or thick space efficiency, a user-defined set of strings (tags), storage
properties (drive type and RAID level), and a service level to identify it.
Then the Storage Containers or VVol datastores can be created based on the selected storage pool and
capability profile.
Capability profiles define storage properties such as drive type, RAID level, FAST Cache, FAST VP, and
space efficiency (thin, thick). Also, service levels are associated with the profile depending on the storage
pool characteristics. The user can add tags to identify how the VVol datastores associated with the
Capability Profile should be used.
Only after a Capability Profile is associated with a storage pool will you be able to create a VVol
datastore.
The Details tab of the properties window allows you to change the name of the Capability Profile. Also,
the UUID (Universally Unique Identifier) associated with the VMware object is displayed here for
reference.
The Constraints tab shows the space efficiency, service level, and storage properties associated with
the profile, and allows the user to add and remove user tags.
The VVol datastore will show as compatible storage in vCenter or the vSphere Web Client if the
associated capability profiles meet VMware storage policy requirements.
There are two types of VVol datastores: VVols (File) and VVols (Block).
VVols (File) are virtual volume datastores that use NAS protocol endpoints for I/O communication from
the host to the storage system. Communications are done via the NFS protocol.
VVols (Block) are virtual volume datastores that use SCSI protocol endpoints for I/O communication from
the host to the storage system. Communications are done via either the iSCSI or the FC protocols.
In the next module we will discuss the steps on the second box that are performed by the vSphere
administrator in order to create a VVol Datastore that maps to the VVol datastore created in Unity, and
allow the provisioning of virtual machines using storage policies.
Although hosts can be directly cabled to the Unity, connectivity is commonly done through storage
networking, and is formed from a combination of switches, physical cabling and logical networking for the
specific block protocol. The key benefits of switch-based block storage connectivity are realized in the
logical networking. Hosts can share Unity front-end ports; thus the number of connected hosts can be
greater than the number of Unity front-end ports. Redundant connectivity can also be created by
networking with multiple switches, enhancing storage availability.
Storage must be provisioned on the Unity for the host. Provisioning Unity storage consists of grouping
physical disk drives into Storage Pools and creating LUNs from the pools. Unity can also provision VMFS
Datastores, VVol (Block) Datastores, and Consistency Groups.
Connected hosts are registered on the Unity system as an iSCSI initiator IQN or an FC WWN.
The host must then discover the newly presented block storage within its disk sub-system. Storage
discovery and readying for use is done differently per operating system. Generally, discovery is done with
a SCSI bus rescan. Readying the storage is done by creating disk partitions and formatting the partition.
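As an example on a Linux host, the rescan and a quick check for the new device might look like this sketch (host numbers and device names vary):
# Rescan every SCSI host for newly presented LUNs (requires root)
for h in /sys/class/scsi_host/host*; do echo "- - -" > "$h/scan"; done
# List block devices to identify the newly presented LUN
lsblk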
Install and configure the system using the Initial Configuration wizard.
Use Unisphere or the CLI to configure iSCSI or Fibre Channel (FC) LUNs on the storage system.
The host will need to have an adapter of some kind to communicate via the storage protocol. In Fibre
Channel environments, a host will have a Host Bus Adapter, or HBA, installed and configured. For iSCSI,
a standard NIC can be used.
Multipathing software is recommended to manage paths to the storage system and provide access to
the storage if one of the paths fails.
With a storage networking device ready on the host, connectivity between the host and the array will be
required. In FC, this will include setting up zoning on an FC switch. In iSCSI environments, initiator and
target relationships will need to be established.
To achieve best performance, the host should be on a local subnet with each iSCSI interface that provides
storage for it. In a multi-path environment, each physical interface must have two IP addresses assigned;
one on each SP. The interfaces should be on separate subnets. To achieve maximum throughput, connect
the iSCSI interface and the hosts for which it provides storage to their own private network. That is, a
network just for them. When choosing the network, consider network performance.
After connectivity has been configured, the hosts need to be registered with the Unity storage array.
Registration is usually automatic though in some cases it will be performed manually. In either case, the
registrations should be confirmed.
Having completed the connectivity between the host and the array, you will then be in a position to
provision the Block storage volumes to the host.
Fibre Channel HBAs should be attached to a dual fabric for HA. iSCSI connections should be attached
using different subnets for HA.
Note: A server cannot be connected to the same storage system through both NICs and iSCSI HBAs.
Depending on the type of HBA being used on the host (Emulex, Qlogic, or Brocade) users can install HBA
utilities to view the parameters. The utilities can be downloaded from the respective vendor support pages
and can be used to verify connectivity between the HBAs and the arrays they are attached to.
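On a Linux host, the HBA port WWNs and link states can also be read directly from sysfs, for example:
# List the WWPNs of the installed FC HBA ports
cat /sys/class/fc_host/host*/port_name
# Check the link state of each FC port
cat /sys/class/fc_host/host*/port_state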
EMC PowerPath software on a Windows 2003, Windows Server 2008, or Windows Server 2012 host.
Refer to the Unity Support Matrix on the support website for compatibility and interoperability information.
Native MPIO on Windows 2003, Windows Server 2008, or Windows Server 2012 without Multiple
Connections per Session (MCS).
The multi-path I/O feature must first be enabled before it can be used. MCS is not supported by Unity.
Refer to the EMC Unity High Availability, A Detailed Review white paper available from support.emc.com.
Verify the configuration in Unisphere. For details on how to configure iSCSI interfaces, refer to topics
about iSCSI interfaces in the Unisphere online help.
When implementing a highly-available network between a host and your system, keep in mind that:
Directly attaching a host to a Unity system is supported if the host connects to both SPs and has the
required multipath software.
Important: Path management software is not supported for a Windows 7 or Mac OS host connected to a
Unity system.
• Any single host should connect to any single array with 1 protocol only.
• iSCSI should use all HBA or all NIC connections, with no mixing in a host.
For FC expansion I/O modules, there is a 4-port 16 Gb/s module that can also negotiate to lower speeds.
16 Gb/s FC is recommended for the best performance.
Unity supports iSCSI connections on multiple port options. If the CNA port is configured for iSCSI, the port
supports 10Gb/s Optical SFPs or TwinAx cables (active).
For iSCSI I/O expansion modules, Unity supports a 4-port 1 Gb/s or 10 Gb/s BaseT module (RJ45, copper,
Cat 5/6 cable) and a 2-port or 4-port 10 Gb/s IP/iSCSI module with SFP+ or active TwinAx copper; the
2-port I/O module includes an iSCSI offload engine.
If possible, configure Jumbo frames (MTU 9000) on all ports in the end-to-end network, in order to provide
the best performance.
CPU bottlenecks caused by TCP/IP processing have been a driving force in the development of hardware
devices specialized to process TCP and iSCSI workloads, offloading these tasks from the host CPU.
These iSCSI and/or TCP offload devices are available in 1 Gb/s and 10 Gb/s speeds.
As a result, there are multiple choices for the network device in a host. In addition to the traditional NIC,
there is the TOE (TCP Offload Engine) which processes TCP tasks, and the iSCSI HBA which processes
both TCP and iSCSI tasks. TOE is sometimes referred to as a partial offload, while the iSCSI HBA is
sometimes referred to as a full offload.
While neither offload device is required, these solutions can offer improved application performance when
the application performance is CPU bound.
Initiators enable you to register all paths associated with an initiator in place of managing individual paths.
In this case, all paths to the host, including those that log on after the fact, are automatically granted
access to any storage provisioned for the host.
The maximum number of connections between servers and a storage system is limited by the number of
initiator records supported per storage-system SP and is model dependent. An initiator is an HBA or CNA
port in a server that can access a storage system. Some HBAs or CNAs have multiple ports. Each HBA or
CNA port that is zoned to an SP port is one path to that SP and the storage system containing that SP.
Each path consumes one initiator record. Depending on the type of storage system and the connections
between its SPs and the switches, an HBA or CNA port can be zoned through different switch ports to the
same SP port or to different SP ports, resulting in multiple paths between the HBA or CNA port and an SP
and/or the storage system. Note that the failover software environment running on the server may limit the
number of paths supported from the server to a single storage system SP and from a server to the storage
system.
Single path: A single physical path (port/HBA) between the host system and the array
Multipath: More than one physical path between the host system and the array via multiple HBAs, HBA
ports and switches
Alternate path: Provides an alternate path to the storage array in the event of a primary path failure.
Each initiator can be associated with multiple initiator paths. Users can control operations at the initiator
level. The storage system manages the initiator paths automatically.
With a software iSCSI initiator, the only hardware the server requires is a Gigabit networking card. All of
the processing for the iSCSI communication is performed by the server's resources, such as CPU and, to
a lesser extent, memory. This means on a busy server, the iSCSI traffic processing may actually use up
CPU resources that could be used for application needs. Windows has an iSCSI initiator built into the OS.
On the other hand, a hardware iSCSI initiator is a Host Bus Adapter (HBA) that appears to the OS as a
storage device. All of the overhead associated with iSCSI is performed by the iSCSI HBA instead of server
resources. In addition to minimizing the resource use on the server hardware, a hardware iSCSI HBA also
allows additional functionality, because the iSCSI HBA is viewed as a storage device. You can boot a
server from iSCSI storage, which is something you can't do with a software iSCSI initiator. The downside
is that iSCSI HBAs typically cost ten times what a Gigabit NIC would cost, so you have a cost vs.
functionality and performance trade-off.
Most production environments with high loads will opt for hardware iSCSI HBA over software iSCSI,
especially when other features such as encryption are considered. There is a middle ground, though. Some
network cards offer TCP/IP Offload Engines (TOE) that perform most of the IP processing that the server
would normally need to perform. This lessens the resource overhead associated with software iSCSI
because the server only needs to process the iSCSI protocol workload.
The Microsoft iSCSI initiator does not support booting the iSCSI host from iSCSI storage. Refer to the
EMC Support Matrix for the latest information about boot device support.
All iSCSI nodes are identified by an iSCSI name. An iSCSI name is neither the IP address nor the DNS
name of an IP host. Names enable iSCSI storage resources to be managed regardless of address. An
iSCSI node name is also the SCSI device name, which is the principal object used in authentication of
targets to initiators and initiators to targets. iSCSI addresses can be one of two types: iSCSI Qualified
Name (IQN) or the IEEE naming convention, Extended Unique Identifier (EUI).
Within iSCSI, a node is defined as a single initiator or target. These definitions map to the traditional SCSI
target/initiator model. iSCSI names are assigned to all nodes and are independent of the associated
address.
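As a purely illustrative example of the two naming formats (the values are placeholders, not identifiers from a real system):
• IQN: iqn.yyyy-mm.<reversed-domain>:<unique-string>, for example iqn.1992-04.com.emc:cx.apm00001234567.a0
• EUI: eui. followed by 16 hexadecimal digits, for example eui.0123456789abcdef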
EMC recommends changing some driver parameters; refer to the Unity Family Configuring Hosts to
Access Fibre Channel (FC) or iSCSI Storage guide for the driver parameters. The configuration file is
/etc/iscsi/iscsi.conf.
Note: The Linux iSCSI driver gives the same name to all network interface cards (NICs) in a host. This
name identifies the host, not the individual NICs. This means that if multiple NICs from the same host are
connected to an iSCSI interface on the same subnet, then only one NIC is actually used. The other NICs
are in standby mode. The host uses one of the other NICs only if the first NIC fails.
Each host connected to an iSCSI storage system must have a unique iSCSI initiator name for its initiators
(NICs). To determine a host’s iSCSI initiator name for its NICs use cat /etc/iscsi/initiatorname.iscsi for
open-iscsi drivers. If multiple hosts connected to the iSCSI interface have the same iSCSI initiator name,
contact your Linux provider for help with making the names unique.
To discover the targets presented by the array, use the “iscsiadm -m discovery -t sendtargets -p 192.168.3.100”
command.
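A typical open-iscsi sequence on the host is to discover the targets, log in, and then confirm the session; the target IQN below is a placeholder:
# Discover the targets presented by the iSCSI interface
iscsiadm -m discovery -t sendtargets -p 192.168.3.100
# Log in to the discovered target (placeholder IQN)
iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm00001234567.a0 -p 192.168.3.100:3260 --login
# Verify that the iSCSI session is established
iscsiadm -m session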
Then from the CHAP page check the Enable CHAP Setting option.
When you enable this feature, Unity denies access to this iSCSI interface's storage resources from all
initiators that do not have CHAP configured.
You may also set a global forward CHAP secret that all initiators can use to access the storage system.
Global CHAP can be used in conjunction with initiator CHAP. To implement Global CHAP authentication
select Use Global CHAP and specify a username and Global CHAP secret.
Mutual CHAP authentication occurs when the hosts on a network verify the identity of the iSCSI interface
by verifying the iSCSI interface's mutual CHAP secret. Any iSCSI initiator can be used to specify the
"reverse" CHAP secret to authenticate Unity. When Mutual CHAP Secret is configured for the storage
system, the specified mutual CHAP secret is used by all iSCSI interfaces that run on the system.
To implement mutual CHAP authentication, enable the Use Mutual CHAP option for the iSCSI interface
and specify a username and mutual CHAP secret.
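On a Linux host running open-iscsi, the corresponding initiator-side CHAP settings can be stored in the node database with iscsiadm; the IQN, username, and secret below are placeholders:
# Enable CHAP for the target and set the forward (initiator) credentials
iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm00001234567.a0 -p 192.168.3.100 --op=update -n node.session.auth.authmethod -v CHAP
iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm00001234567.a0 -p 192.168.3.100 --op=update -n node.session.auth.username -v chapuser
iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm00001234567.a0 -p 192.168.3.100 --op=update -n node.session.auth.password -v chapsecret123
For mutual CHAP, the node.session.auth.username_in and node.session.auth.password_in parameters hold the reverse credentials.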
To manage Hosts configurations select Hosts from the Access section of Unisphere.
From the Hosts page it is possible to create a new Host configuration, view and modify a Host
configuration, and delete it.
If creating a host configuration for a NAS client, the user must provide its IP address and skip the Initiators
step.
If creating a host configuration for a SAN host, the user must select the host's automatically discovered
initiators or manually add the initiators on the Initiators screen.
If the initiator was not automatically discovered and logged in to the system, the user must click the +
button of the Manually Added Initiators section.
Then select Create iSCSI Initiator, input the SAN host IQN and the CHAP credentials on the Add iSCSI
Initiator window, and click Add.
Or select Create Fibre Channel Initiator, input the SAN host HBA WWN on the Add Fibre Channel
Initiator window, and click Add.
After selecting the Initiators the user can click Next to advance to the Summary.
On the Summary page, review the host configuration, then click the Finish button to accept the
configuration.
The initiators will be registered and the host added to the list of hosts.
The General tab of the properties window allows changes to the host profile.
The Initiators tab shows the registered host initiators and the Initiator Paths tab shows all the paths
automatically created for the host to access the storage.
Use a Windows or Linux/UNIX disk management tool (such as Microsoft Disk Manager) to initialize and
set the drive letter or mount point for the host.
Format the LUN with an appropriate file system: for example, FAT or NTFS.
Note: If you intend to use snapshots for the storage, use the quick-format option when formatting the LUN
from a host running Windows 2008 or above. If you use the full format option, you must allocate a higher
quantity of protection storage than primary storage to the storage resource, or snapshot operations will fail.
(Optional) Configure applications on the host to use the LUN as a storage drive.
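As a hedged Linux equivalent of these steps (the device name and mount point are placeholders; a Windows host would use Disk Manager instead):
# Partition the newly discovered LUN (placeholder device)
parted --script /dev/sdb mklabel gpt mkpart primary ext4 0% 100%
# Create a file system on the new partition and mount it
mkfs.ext4 /dev/sdb1
mkdir -p /mnt/unity_lun
mount /dev/sdb1 /mnt/unity_lun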
Connectivity between the NAS clients and NAS servers is done via IP network using a combination of
switches, physical cabling and logical networking. Unity front-end ports can be shared, and redundant
connectivity can also be created by networking with multiple switches.
Storage must be provisioned on the Unity for the NAS client. Provisioning Unity storage consists of
grouping physical disk drives into Storage Pools, creating file systems from these pools, and creating file
system shares based on the NAS server's supported protocols. NAS client access to the storage
resources is done via the SMB/CIFS and NFS storage protocols. Unity can also provision NFS Datastores,
and VVol (File) Datastores (NFS) for ESXi hosts. The NAS client must then mount the shared file system.
This task is done differently per operating system (Windows, Linux/UNIX, VMware vSphere).
NFS clients have a host configuration profile in Unity with the network address and operating system
defined. An NFS share can be created and associated with the host configuration. The shared file system
can then be mounted on the Linux/UNIX system.
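For example, a Linux NFS client could mount the export with a command like the following (the server name and export path are placeholders):
# Mount the NFS export from the NAS server interface
mkdir -p /mnt/nfs_share
mount -t nfs nas01.example.com:/nfs_share01 /mnt/nfs_share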
ESXi hosts must be configured in the Unity system by adding the vCenter Server and selecting the
discovered ESXi host. The host configuration can then be associated with a VMware datastore in the Host
Access page of the datastore properties. VAAI enables the volume to be mounted automatically to the
ESXi host once it is presented.
SMB clients do not need a host configuration to access the file system share. The shared file system can
be mounted to the Windows system.
From the Ethernet Ports page, settings such as link aggregation and link transmission can be verified
and changed.
To display information about a particular Ethernet port, select it from the list and click on the edit link. The
properties window shows details about the port, including the speed and MTU size. The user can change
both these fields if necessary.
The MTU has a default value of 1500 bytes. If you change the value, you must also change all
components of the network path (switch ports and host).
If you want to support jumbo frames, set the MTU size field to 9000 bytes. This setting is only appropriate
in network environments where all components support jumbo frames end-to-end. In virtualized
environments, jumbo frames should be configured within the virtual system, as well.
The user can also create a link aggregation, or add the port to or remove it from an existing link aggregation.
The user must enter the name and determine the host, subnet, or netgroup address according to the
following:
• Single-host access: IP address of the host that will use the storage.
• Subnet access: IP address and subnet mask that defines a range of network addresses that can
access shares.
• Netgroup access: Network address of a netgroup that defines a subset of hosts that can access
shares.
The configuration can be reviewed from the Summary page and the user can click Finish to complete the
job.
In the example, a host profile configuration was created for a Linux host.
From the NFS Shares page it is possible to create a new share, view its properties, modify some settings,
and delete an existing NFS share.
The NFS Shares page shows the list of created shares, with the used NAS server, its file system and local
path.
To see the details about a share select it from the list and the details about the share will be displayed on
the right-pane.
Click on the “add” link to launch the Create an NFS Share (NFS Export) wizard.
The steps of the wizard include:
• Selection of the file system it will support
• Enter a name and optional description for the Share
• Provide access to an existing host
• Review the NFS share to be created then click Finish
The General tab of the properties window provides details about the Share name and location of the
share: NAS Server, File System, Local Path and the Export path.
• Use Default Access (use the default access permissions set for the share): Applies the default access
permissions set for the file system.
• Read-Only: Permission to view the contents of the file system, but not to write to it
• Read/Write: Permission to view and write to the file system, but not to set permission for it
• Read/Write, allow Root : Permission to read and write to the file system, and to grant and revoke
access permissions (for example, permission to read, modify and execute specific files and directories)
for other login accounts that access the file system.
For Windows (SMB) file systems, no host information is required because host access to the storage is
controlled by network access controls set for shares.
The General tab of the properties window provides details about the Share name and location of the
share: NAS Server, File System, Local Path and the Export path.
The Advanced tab allows the configuration of advanced SMB share properties:
- Continuous availability gives host applications transparent, continuous access to a share following a
failover of the NAS server on the system (with the NAS server internal state saved or restored during
the failover process).
- Protocol Encryption enables SMB encryption of the network traffic through the share.
- Access-Based Enumeration filters the list of available files on the share to include only those to which
the requesting user has read access.
- Branch Cache Enabled copies content from the share and caches it at branch offices. This allows client
computers at branch offices to access the content locally rather than over the WAN.
- Distributed File System (DFS) allows the user to group files located on different shares by transparently
connecting them to one or more DFS namespaces.
EMC recommends the installation of the EMC CIFS Management snap-in on a Windows Server 2003,
Windows Server 2008, Windows Server 2012, or Windows 8 host. It consists of a set of Microsoft
Management Console (MMC) snap-ins that can be used to manage home directories, security settings, and
virus-checking on a NAS server.
Because shares are accessible through either SMB or NFS, you do not need to format the storage for the
host.
Before an ESXi host can access the Unity provisioned storage, a Host configuration must be defined for it
by providing its network name and IP address.
Then a storage resource (NFS or VMFS datastore) can be created, and associated with the host profile
with a defined level of access.
From Unisphere select the VMware option from the Access section.
On the Add vCenters wizard window enter the IP address of the vCenter Server and its login credentials.
Then click the Find button to discover the ESXi hosts managed by the vCenter Server.
Select the ESXi host that will be associated with the host configuration profile.
• Snapshot: Read/write access to the snapshots associated with LUN. No access to the LUN
• LUN and Snapshot: Read/write access to both the LUN and the associated snapshots
For VMware hosts accessing a file system shared via NFS, the permission levels are different:
• Use Default Access (use the default access permissions set for the share): Applies the default access
permissions set for the file system.
• Read-Only: Permission to view the contents of the file system, but not to write to it
• Read/Write: Permission to view and write to the file system, but not to set permission for it
• Read/Write, allow Root : Permission to read and write to the file system, and to grant and revoke
access permissions (for example, permission to read, modify and execute specific files and directories)
for other login accounts that access the file system.
The device Details section of the page displays the volume properties and all the created paths for the
provisioned block storage.
Refer to the video demonstrations for details on provisioning VMware datastores and VVols.
The vSphere administrator can then create storage policies in vSphere. The VM storage policies define
which VVol datastores are compatible based on the capability profiles associated with them. The
administrator can then provision the Virtual Machine and select the storage policy and the desired VVol
datastore.
Once the virtual machines are created using the storage policies, the user will be able to see the
underlying volumes created by vSphere presented on the Virtual Volumes page of the VMware datastores
section in Unisphere.
Then select the vCenter server on the left pane, and from the top menu select the Manage option and the
Storage Providers option from the sub-menu.
The administrator must then open the New Storage Provider window by clicking the Add sign, and
enter a name to identify the entity and the IP address or FQDN of the VASA provider (Unity) in the URL
field, making sure to use the port and full format described here in this slide. Next, the administrator must
type the credentials to log in to the Unity system, then click OK. The first time the array is registered, a
warning message may pop up for the Unity certificate.
The administrator can click Yes to proceed and validate the Unity certificate.
The wizard is launched and the administrator can select to create a new virtual machine, enter a name,
select the folder and the ESXi host where the virtual machine will be created.
Then on the storage section of the wizard, the administrator must select the VM Storage Policy previously
created from the drop-down list. The available datastores are presented as compatible and incompatible.
The administrator must select a compatible datastore to continue.
The rest of the wizard steps will instruct the administrator to select the minimum vSphere version
compatibility, the guest OS for the virtual machine, and the option to customize the hardware
configuration.
Once the wizard is complete a new virtual machine created from Storage Policy-Based Management will
be listed.
Data - Stores data such as VMDKs, snapshots, clones, fast-clones, and so on. At least one Data VVol is
required per VM to store its hard disk.
Config - Stores standard VM-level configuration data such as .vmx files, logs, NVRAM, and so on. At
least one Config VVol is required per VM to store its .vmx configuration file.
Swap - Stores a copy of a VM’s memory pages when the VM is powered on. Swap VVols are
automatically created and deleted when VMs are powered on and off.
Memory - Stores a complete copy of a VM’s memory on disk when the VM is suspended or when a
snapshot is taken with memory.
FAST Cache consists of one or more pairs of SAS Flash 2 drives in RAID 1 (1+1) and provides both read and
write caching. For reads, the FAST Cache driver copies data off the disks being accessed into the FAST
Cache. For writes, FAST Cache effectively buffers the data waiting to be written to disk.
At a system level, the FAST Cache reduces the load on back-end hard drives by identifying when a chunk
of data on a LUN is accessed frequently, and copying it temporarily to FAST Cache.
The storage system then services any subsequent requests for this data faster from the Flash disks that
make up the FAST Cache; thus, reducing the load on the disks in the LUNs that contain the data (the
underlying disks). The data is flushed out of cache when it is no longer accessed as frequently as other
data.
Subsets of the storage capacity are copied to the FAST Cache in 64 KB chunks.
FAST Cache operations are non-disruptive to applications and users. It uses internal memory resources
and does not place any load on host resources.
Online expansion and shrinking of a FAST Cache is possible by adding or removing drives.
• Policy Engine – Manages the flow of I/O through FAST Cache. When a chunk of data on a LUN is
accessed frequently, it is copied temporarily to FAST Cache (FAST Cache optimized drives). The
Policy Engine also maintains statistical information about the data access patterns. The policies
defined by the Policy Engine are system-defined and cannot be changed by the user.
• Memory Map – Tracks extent usage and ownership at 64 KB granularity. The Memory Map
maintains information on the state of 64 KB chunks of storage and the contents in FAST Cache. A copy
of the Memory Map is stored in DRAM memory, so when FAST Cache is enabled, SP memory is
dynamically allocated to the FAST Cache Memory Map.
During normal operation, a promotion to FAST Cache is initiated after the Policy Engine determines that a
64 KB block of data is being accessed frequently. To be considered, the 64 KB block of data must be
accessed by reads and/or writes multiple times within a short period of time.
A FAST Cache flush is the process in which a FAST Cache page is copied to the HDDs and the page is
freed for use. The Least Recently Used (LRU) algorithm determines which data blocks to flush to make
room for the new promotions.
FAST Cache contains a cleaning process which proactively copies dirty pages to the underlying physical
devices during times of minimal backend activity.
When a FAST Cache expansion occurs, a background operation is started to add the new drives into
FAST Cache. This operation first configures a pair of drives into a RAID 1 mirrored set. The capacity from
this set is then added to FAST Cache, and is available for future promotions. These operations are
repeated for all remaining drives being added to FAST Cache. During these operations, all FAST Cache
reads, writes, and promotions occur without being impacted by the expansion. The amount of time the
expand operation takes to complete depends on the size of drives used in FAST Cache and the number of
drives being added to the configuration.
In the Settings page, select the FAST Cache option under the Storage Configuration section. Then hit
the Create button and select the FAST Cache disks from available SAS Flash drives and choose whether
to enable FAST Cache for existing pools. The process will create RAID Groups, add storage to FAST
Cache and enable it for existing storage pools (if the checkbox was selected). The status of the used disks
can be seen from the FAST Cache disks.
After the FAST Cache is created it can be expanded by adding more SAS Flash disks to it, have disks
removed from it, or it can be deleted.
When expanding FAST Cache, you may only select free drives of the same size and type as what is
currently in FAST Cache. In this example, only 400 GB SAS Flash 2 drives are available to be selected, as
FAST Cache is currently created with those drives. From the drop-down list, you are able to select pairs of
drives to expand the capacity of FAST Cache up to the system maximum. In this example, 8 free drives
were found. Click OK to start the expansion process.
To shrink the FAST Cache, select Shrink. Then select the number of disks to remove from the FAST
Cache. A warning message will be displayed. Removing the drives from FAST Cache requires the flushing
of dirty data from each set being removed to disk.
You can configure a pool to use the FAST Cache during pool creation.
For existing pools, navigate to the Pools page in Unisphere and select the storage pool to modify its
settings.
Click on the edit sign to open the properties page. Then use the General tab in the Storage Pool
Properties page to enable the FAST Cache. Check Use the FAST Cache and click Apply.
FAST VP helps to reduce the Total Cost of Ownership (TCO) by maintaining performance while efficiently
utilizing the configuration of a Pool. Instead of creating a Pool with one type of drive, mixing Flash, SAS,
and NL-SAS drives can help reduce the cost of a configuration by reducing drive counts and leveraging
larger capacity drives. Data requiring the highest level of performance is tiered to Flash, while data with
less activity resides on SAS or NL-SAS drives.
EMC Unity has a unified approach to create storage resources on the system. Block LUNs, File Systems,
and VMware Datastores can all exist within a single Pool, and can all benefit from using FAST VP. In
system configurations with minimal amounts of Flash, FAST VP will efficiently utilize the Flash drives for
active data, regardless of the resource type. For efficiency, FAST VP also leverages low cost spinning
drives for less active data. The access patterns for all data within a Pool are compared against each other,
and the most active data is placed on the highest performing drives while adhering to the storage
resource’s tiering policy. Tiering policies are explained later in this document.
Use the “Highest Available Tier” policy when quick response times are a priority.
The “Auto Tier” policy automatically relocates data to the most appropriate tier based on the activity level
of each data slice.
The “Start High, then Auto Tier” is the recommended policy for each newly created pool, because it takes
advantage of the “Highest Available Tier” and “Auto-Tier” policies.
Use the “Lowest Available Tier” policy when cost effectiveness is the highest priority. With this policy, data
is initially placed on the lowest available tier with capacity.
This table shows the supported RAID types and drive configurations.
Remember to consider the performance, capacity and protection levels these configurations provide, when
deciding on the RAID configuration to adopt.
RAID 1/0 is suggested for applications with large amounts of random writes, as there is no parity write
penalty in this RAID type.
RAID 5 is preferred when cost and performance are a concern. RAID 6 provides the maximum level of
protection against drive faults of all the supported RAID types.
When considering a RAID configuration which includes a large number of drives, (12+1, 12+2, 14+2),
consider the tradeoffs that the larger drive counts contain, such as the fault domain and potentially long
rebuild times.
The Relocation Schedule defines the timeframe during which FAST VP relocations are allowed to occur.
Users can select which days, and the timeframe during those days, for relocations to occur. Relocations
will continue to cycle as long as the Relocation Window is open.
The user can manually pause (hit the pause button) and resume (hit the resume button) the scheduled
data relocations on the system, change the data relocation rate, disable and re-enable scheduled data
relocations, and modify the relocation window.
• If a FAST VP license is installed on the system, you can view the following information for a pool:
• Start and end time for the most recent data relocation.
• Amount of data in the pool scheduled to move to higher and lower tiers.
[CLICK] From this page it is then possible to change the tiering policy for the data relocation.
This lab also demonstrates how FAST VP data relocation can be scheduled or manually started, and how
it behaves with the selection of tiering policies for the storage resources.
Students will also be able to describe the Unity Compression architecture and how Unity Compression is
integrated into the current software model. Lastly, the students should identify how Unity Compression
interoperates with other components such as snapshots, replication, and encryption.
Unity Compression provides the ability to reduce the amount of storage needed for user data on a storage
device by compressing portions of the data at the time the data is first written. CPU resources that might
otherwise go unused are employed to perform the compression on the write path.
Unity Compression is supported on physical hardware only. For Hybrid arrays, Unity Compression is only
supported on All Flash pools with no additional licenses required.
Unity Compression is supported on thin LUNs and VMFS datastores only. Unity Compression enabled
LUNs must be created from All Flash pools (AFP).
Unity Compression depends on the infrastructure provided by Persistent File Data Cache (PFDC). If the
LUN is not using PFDC, then compression will not be used.
Note: If you need to convert a pool to a hybrid pool (both Flash and non-Flash drives), any LUNs
that use compression must be deleted or moved. Hybrid pools cannot have compression enabled,
and you cannot create a compression-enabled LUN in a hybrid pool. An all-Flash pool can contain
both compression-enabled and non-compression enabled LUNs.
When a block of data is written to the system, the data is saved in System Cache, and the write is
acknowledged with the host.
No data has been written to the drives within the Pool at this time.
After the I/O is acknowledged, the normal cache cleaning process occurs. Space within the storage
resource is utilized or allocated, if needed, and the data is saved to disk. This caching change not only
applies to compression enabled resources, but it is also applicable to Block and File storage resources
(excluding VVols) created on All Flash Pools.
For compression enabled storage resources, compression occurs during the System Cache’s proactive
cleaning operations or when System Cache is flushing cache pages to the drives within the Pool. The data
in this scenario may be new to the storage resource, or the data may be an update to existing blocks of
data currently residing on disk.
The data compression algorithm occurs before the data is written to the drives within the Pool. During the
compression process, multiple blocks are aggregated together and sent through a sampling algorithm,
which determines if the data can be compressed.
If the sampling algorithm determines a sufficient amount of space can be saved, the proper amount of
space is then allocated within the storage resource and the data is compressed and written to the Pool.
Compression will not compress data if the size of the compressed data and overhead to store the data is
greater than the original data size. Waiting to allocate space within the resource until after the
compression estimate is completed helps avoid over-allocating space within the storage resource.
If the data is compressed, it must first be uncompressed before it is sent to the host. If the
compressed data already resides in System Cache, the data is uncompressed to a temporary location, the
data is sent to the host, and the temporary location is released. If the compressed data being requested
resides on disk, the data is first read into System Cache, uncompressed to a temporary location, and then
sent to the host. Data is never uncompressed on disk due to a read operation, as this would reduce
the amount of savings on the storage resource.
All Unity software features are also supported with Unity Compression. The following sections talk
specifically about certain features of the Unity storage system, and how they relate to Compression.
Unity Compression can also be enabled on only the source, only the destination, or both the source and
destination storage resources, depending on whether the system and Pool configuration support Unity
Compression. This allows the user to fully control where to implement Unity Compression. One example
of a supported replication configuration is when utilizing Asynchronous Local Replication. The Source
storage resource may reside on an All Flash Pool and have compression enabled, but the destination may
be on a large capacity Hybrid Pool which does not support compression. Another example of a supported
configuration is when replicating a storage resource from a UnityVSA system or a production system not
utilizing Unity Compression, to a storage resource with compression enabled on a remote system.
When a snapshot is mounted and the source storage resource has compression enabled, Unity
Compression is also utilized on any snapshot I/O. If a read is received for a compressed block of data, the
data is uncompressed and sent to the requestor. Savings can also be achieved on writes to a snapshot.
As write operations are received, if the source storage resource has compression enabled, snapshot
writes are also passed through the Unity Compression algorithms. These savings are tracked and reported as
part of the GBs saved for the source storage resource.
To expand and convert an All Flash Pool containing compression savings to a Hybrid Pool, all savings
must first be removed from the Pool. This means not only disabling compression on all storage resources
within the Pool or utilizing Move to migrate data to a new Pool, but also ensuring that all compression
savings on the Pool are reduced to 0. Disabling compression on a resource does not cause the compressed
data within the resource to be uncompressed; this can be achieved by using Move to a resource on the
same Pool with compression disabled, or to another Pool.
Unity Compression is supported on Thin LUNs, Thin LUNs within a Consistency Group, and VMware
VMFS datastores. All Flash Pools can be created on a Unity Hybrid Flash system or a Unity All Flash
system. Within a Consistency Group, compression-enabled LUNs can be mixed with LUNs which have
compression disabled.
Note the Compression checkbox is displayed for thin LUNs in All-Flash pools only. It allows the thin LUN
to be compressed to save space.
If an upgrade is performed from the Unity GA release to a higher release, compression will not automatically
be enabled on existing LUNs; the user will have to manually enable Unity Compression on a LUN-by-LUN basis.
When disabling Unity Compression, data is left compressed until the data is overwritten or migrated by
using the LUN Move operation.
Compression stops for new writes when sufficient resources are not available, and resumes automatically
once enough resources are available. Data that cannot be compressed is detected, and is written
uncompressed.
Note: When you enable compression, only new data is compressed, not data that already exists on the
LUN. In order to compress existing data, you must move the LUN's data to a destination LUN that has
compression enabled.
As with other features, Move can be managed in Unisphere, Unisphere CLI, and REST API. The
Compress Now option is only available within Unisphere, and is only a method to start a Move operation
to a compression enabled storage resource within the same Pool.
For more information on Move and any restrictions of its usage, refer to the white paper titled EMC Unity:
Migration Technologies.
The Compress Now option may be utilized at any time, not only after enabling compression on a storage
resource. One benefit to utilizing Compress Now is that it sequentially moves data to a new storage
resource. This reorganization can help to improve performance of sequential workloads if the data was
originally written randomly to the resource. The reorganization may also increase the savings achieved by
Unity Compression by putting compressible data sequentially together. For maximum space savings, it is
recommended that only 1 Compress Now operation per Storage Processor be running on the system at a
time.
Data is encrypted with strong encryption, based on FIPS 140-2 Level 1 compliant encryption using
Advanced Encryption Standard (AES) algorithms. This supports compliance with industry or government data
security regulations that require or suggest encryption, including HIPAA (healthcare), PCI DSS (credit
cards), and GLBA (finance).
Hardware-based encryption modules located in the SAS I/O controller chip in all the 12 Gb/s SAS I/O
modules and embedded in the Storage Processor encrypt data as it is written to the backend drives, and
decrypt the data as it is retrieved from these drives.
Since the encryption/decryption functions occur in the SAS controller, it has minimal impact on data
services such as replication, snapshots, etc. There is little to no performance impact with encryption
enabled.
An internal key manager generates and manages encryption keys. This method is simpler, lower cost, and
more maintainable than self-encrypting drives.
With the encryption hardware embedded in the array, the solution is agnostic to drive vendor and drive
type, allowing use of any disk drive type and eliminating drive-specific vendor overhead.
Securely decommissioning arrays is easily accomplished by deleting pools; this in turn deletes all drive
encryption keys and most often eliminates the need to shred disk drives. Encryption is a licensed feature
and will not appear in the licenses page if the license is not active. No data-in-place upgrades are
supported, and changing the encryption state requires a destructive re-initialization.
Key Manager monitors for key store configuration changes (i.e. storage pool configured, disk added to
pool, etc.) that result in key creation/deletion.
Key Manager uses RSA BSAFE libraries to generate several encryption keys, and stores these keys in a
secure keystore. The combined use of these encryption keys ensures that neither the drives themselves,
nor the keys which encrypt these drives, can be read. The encryption keys generated by Key Manager
are: Data Encryption Keys (DEKs), Key Encryption Keys (KEKs), and the Key Encryption Keys Wrapping Key
(KWK).
The second type of key used is referred to as a Key Encryption Key (KEK), and is a 256 bit key generated
to protect and secure the DEKs as they move through the storage system, such as to and from the SAS
controller. A new KEK is generated each time the storage system boots, by using the AES Key Wrap
algorithm. The DEKs are wrapped with the KEK and stored in the keystore.
The Key Encryption Key Wrapping Key (KWK) is used to wrap the KEK as it travels throughout the array
and to the SAS controller. Similar to the KEK, the KWK is also generated using the AES Key Wrap
Algorithm.
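For illustration only, the wrapping hierarchy described above can be sketched with the third-party Python cryptography package; the key sizes and variable names here are assumptions, not Unity's implementation:

    import os
    from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

    kwk = os.urandom(32)   # Key Encryption Key Wrapping Key (KWK), assumed 256-bit
    kek = os.urandom(32)   # per-boot Key Encryption Key (KEK)
    dek = os.urandom(32)   # per-drive Data Encryption Key (DEK)

    wrapped_dek = aes_key_wrap(kek, dek)   # DEK wrapped with the KEK before it is stored in the keystore
    wrapped_kek = aes_key_wrap(kwk, kek)   # KEK wrapped with the KWK while in transit to the SAS controller

    # Unwrapping recovers the original key material.
    assert aes_key_unwrap(kek, wrapped_dek) == dek

The takeaway is the layering: DEKs never travel or rest unwrapped, and the KEK that protects them is itself protected by the KWK.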
A new key is also created when sparing in a new drive. Each time a drive is bound, an entirely new, unique
key is created. Drive keys are also permanently deleted as a result of unbinding drives, such as when a
storage pool or FAST Cache is deleted or FAST Cache is shrunk.
Because DEKs are permanently deleted whenever drives are unbound, simply deleting storage pools and
FAST Cache can be an effective method of rendering residual data unreadable, as the drives will never
again be able to be decrypted without their corresponding DEKs.
The keystore should be backed up whenever drives are added to or removed from a pool. The keystore
can later be restored in the event the existing on-array keystore becomes corrupted or unavailable.
Once set, the encryption state cannot be changed without completely reinitializing the array (which
removes all data and configuration).
From the Unisphere GUI, open the Settings window and select Encryption under the Management
section. From this page it is possible to verify the status of the Controller Based Encryption, perform the
manual off-array backup of the keystore file, and download audit logs and checksum. Keystore file backup
is very important to ensure the integrity of the drive’s data encryption keys.
The keystore is tied to the storage processors, so care must be taken when performing service operations
in order to preserve the encryption keys. System maintenance that involves the replacement of the
chassis and both storage processors requires some precautions to be taken. During the replacement
procedure, both storage processors should not be replaced at the same time. One of them should be
retained until the array is back online, prior to replacing the second storage processor. In the case that
both storage processors have already been replaced at the same time and the keystore was lost, the keystore
may be restored from an external backup.
In the unlikely event all redundant keystore copies stored securely on the array become corrupted or
otherwise unavailable, the keystore backup file can be used to restore access to the data. Unity also
maintains an audit log which supports logging keystore operations such as D@RE feature activation, key
creation, key deletion, keystore backup, disk encryption completion, and I/O module addition. These audit
logs can be downloaded along with their corresponding checksum information.
The Unity Data at Rest Encryption feature only encrypts and decrypts data as it passes through the SAS
controller level. In other words, the data is only encrypted and secure when stored on the backend drives
or the internal M.2 SATA device. Data that travels through the network to external hosts is not protected.
To protect data at the host access level, an external data-in-flight encryption solution must be used,
such as SMB protocol encryption or host-based encryption.
Host I/O Limits are either enabled or disabled in a Unity system. All host I/O limits are active if the feature
is active. Host I/O limits are active as soon as policies are created and assigned to the storage resources.
In the GA release for Unity, Host I/O Limits provided a system wide pause and resume control feature. For
Unity OE v4.1, the pause and resume feature was enhanced to allow users to pause and resume a
specific Host I/O limit on an individual policy level.
Limits can be set by throughput in I/Os per second (IOPS) or Bandwidth defined by Kilobytes or
Megabytes per second (KBPS or MBPS), or a combination of both limits. If both thresholds are set, the
system limits traffic according to the threshold that is reached first.
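A simplified sketch of the "whichever threshold is reached first" behavior; the one-second evaluation window and the names are assumptions, not the array's actual scheduler:

    def within_limits(io_count, kb_transferred, max_iops=None, max_kbps=None):
        """Return True if the I/O seen in the current one-second window is within both limits."""
        if max_iops is not None and io_count >= max_iops:
            return False   # IOPS threshold reached first: throttle further I/O
        if max_kbps is not None and kb_transferred >= max_kbps:
            return False   # bandwidth threshold reached first: throttle further I/O
        return True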
Only one I/O limit policy can be applied to a storage resource. For example, an I/O limit policy can be
applied to an individual LUN or to a group of LUNs. When an I/O limit policy is applied to a group of LUNs,
it can also be shared.
When a policy is shared, the limit applies to the combined activity from all LUNs in the group.
When a policy is not shared, the same limit applies to each LUN in the group.
Host I/O Limits provide more granularity and predictable performance in system workloads between hosts,
applications, and storage resources.
Another use case is setting limits for, say, runaway processes or noisy neighbors. These are processes
that take resources away from other processes.
In a test and development environment, a LUN with a database on it may be used for testing, and you want
to run some tests against it. Users can create a snapshot of the LUN and mount it. Putting a limit on the
snapshot would be useful to limit I/O on the snap since it is not a production volume.
New in Host I/O Limits is the ability to pause and resume a specific host I/O limit. Unlike previous policies
enforced with Unity GA, this feature allows each configured Host I/O policy to be paused or resumed
independently of the others, whether or not the policy is shared.
Resuming the policy will immediately start enforcement of that policy and throttle the I/O accordingly.
Policy Status can be viewed for each host I/O policy and is in one of three conditions:
System Settings are global settings and are displayed as either Active or Paused.
When the System Settings are displayed as Active, the Policy Status will be displayed as Active or Paused
depending on the status of the policy when the System Settings were changed.
For example, suppose the System Setting was “Active” and the user had configured three policies A, B, and
C. A user could pause A, and the system would update the status of A to “Paused”. The other two policies B
and C would still display an “Active” status. At this point, if the user decided to change the System
Setting to “Pause”, the Policy Status will be displayed as “Global Paused” on policies B and C but
“Paused” on A, and the policies will not be enforced.
When both the System setting and Policy Setting are Paused, the Policy Status will be shown as Paused.
The example shows the Settings method. Selecting “Performance” displays the “Host I/O Limits Status”.
<click> If there are “Active” policies, users have the option to “Pause” the policies on a system wide basis.
Once a user selects the “Pause” option, they will be prompted to confirm the operation. (Not shown)
<click> The Host I/O Limits Status now displays a “Paused” Status and users have the option to “Resume”
the policy.
Navigating to the Performance > Host I/O Limits page shows the policies that were affected by the Pause.
In the example, three policies display a Status of “Global Paused” indicating a System wide enforcement
of those policies.
Select “Resume” to allow the system to continue with the throttling of the policies.
From the “More Actions” tab, users have a chance to Pause an Active session. (resume will be greyed
out). Once the Pause option is selected, a warning message will be issued to the user to confirm the
Pause operation.
Selecting Pause will start a background job and, after a few seconds, causes the Status of the policy to be
displayed as “Paused”. All other policies are still “Active” since the pause was done at the Policy level,
not the System level.
Note that a “Global Paused” policy cannot be resumed directly; it must first be “Paused” before the
“Resume” operation is available.
When the shared box is left unchecked, each individual resource will be assigned a specific limit(s).
When the shared box is selected, the resources are treated as a group, and all resources share the limit(s)
applied in the policy.
In the example, a Host I/O policy has been created to limit the number of host IOPS to 100.
In this case, both LUN 1 and LUN 2 will share these limits. This does not guarantee the limits will be
distributed evenly; if a particular LUN is using more IOPS it will be serviced.
Also, limits are shared across Storage Processors; it does not matter which SP owns a LUN, the policy
applies to both.
In the example, there are three LUNs under the same policy, setting an absolute policy for the LUNs would
limit each LUN to 1000 IOPS regardless of its size.
The limit is based on the resource's capacity. As with other limits, the policy can be shared with other
resources. When a density policy is in place, the IOPS and Bandwidth are based on a per-GB value, not a
maximum value as with an absolute policy.
Consider the case where a 7.2k RPM disk drive is capable of handling 50 IOPS. However, there are
several drives with varying capacities that spin at 7.2k. As the size of the drive increases, the IOPS/GB
ratio worsens (for example, you get 50 IOPS/GB with a 1 GB drive but only about 0.006 IOPS/GB with an
8 TB drive). Implementing a Host Density I/O Limit can solve this problem.
I/O Density is the measurement of Host IOPS generated over a given amount of stored capacity, expressed
as an IOPS/GB ratio. Another way to say this is that I/O Density measures how much performance
can be delivered by a given amount of storage capacity. Customers with a good understanding of their
applications can then use density policies to offer certain storage service levels based on this knowledge.
The graph on the left displays the calculations for a 200 GB LUN with a Max IOPS per GB setting of 2.
The IOPS-to-GB ratio would be 2 (400/200 = 2).
The customer now expands the LUN to 400 GB, which increases the IOPS limit to 800. This calculates to
800/400 GB = 2. The benefit to the customer is that the performance-to-GB ratio is maintained even when
expanding an existing LUN or LUNs, without having to create a whole new policy.
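The slide's calculation, worked as a short illustrative snippet (the function name is an assumption, not a Unity API):

    def density_limit_iops(resource_size_gb, max_iops_per_gb):
        """Density-based limit: the IOPS ceiling scales with the resource's capacity."""
        return resource_size_gb * max_iops_per_gb

    print(density_limit_iops(200, 2))   # 400 IOPS limit; 400 / 200 GB = 2 IOPS/GB
    print(density_limit_iops(400, 2))   # 800 IOPS limit after expansion; 800 / 400 GB = 2 IOPS/GB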
LUN A is a 100 GB LUN, so the calculation is 100 (Resource Size) x 10 (Density Limit). This sets the
maximum number of IOPS to 1000.
LUN B is 500 GB so the calculation is 500 (Resource Size) x 10 (Density Limit). This sets the maximum
number of IOPS to 5000.
The Service Provider can simply add both LUNs under a single Density Host I/O Limit to implement the
policy. In previous Host I/O Limits implementations, the Service Provider would have had to create two
policies, one for each LUN.
In the example, LUN A is a 100 GB LUN, LUN B is 500 GB, so the calculation is 100 + 500 (combined
resource size) * 10 (Density Limit). This sets the maximum number of IOPS to 6000.
The Storage Administrator can simply add both LUNs under a single shared Density Based Host I/O Limit
to implement the policy.
If a user tries to configure a value outside the limits as shown in the example, the box will be highlighted in
Red to indicate the value is incorrect.
The Burst feature provides Service Providers with an opportunity to upsell an existing SLA. Service
Providers can offer customers the chance to use more IOPS than the original SLA called for. If
applications regularly exceed the SLA, the Service Provider can go back to the customer and sell
additional usage of the extra I/Os allowed.
This is also where users configure the duration and frequency of when the policy runs. This timing starts
from when the Burst policy is created or edited, it is not tied in any way to the system or NTP server time.
Having the timing run in this manner prevents several policies from running at the same time, say at the
top of the hour.
Burst settings can be changed or disabled at any time by clearing the Burst setting in Unisphere.
The “Burst” percentage option is the amount of traffic over the base I/O limit that can occur during the
burst time. This is a percentage of the base limit. This value is configurable from 1% to 100%.
The “For” option is the duration in minutes that the burst is allowed to run (not a strict window), set
from 1 minute to 60 minutes. This setting is not a hard limit and is used only to calculate the extra I/O
operations allocated for bursting. The actual burst time depends on I/O activity and can be longer than
defined when activity is lower than the allowed burst rate.
The “Every” option is the frequency to allow the burst to occur. Set from 1 hour to 24 hours.
The example shows a policy configured to allow a 10 % increase in IOPS and Bandwidth. The duration of
the window is 5 minutes and the policy will run every 1 hour.
The Burst allowance resets at the next interval based on the “Every: x hour(s)” setting.
In this case, the absolute limit is 1000 IOPS with a burst percentage of 20%. The policy is allowed for a
five minute period and will reset at 1 hour intervals. The policy will never allow the IOPS to go above this
20% limit.
After the additional I/O operations allocated for bursting are depleted, the limit returns to 1000 IOPS. The
policy will not be able to burst again until the 1-hour interval ends.
Note the extra I/Os are not all allowed to happen at one time; you will only reach the burst percentage
increase over time. The duration comes down to how long it takes to reach the burst percentage.
In the first case, the Host I/O is always above the Host I/O limit and Burst limit.
So we have a Host I/O Limit and Burst Limit configured, but the incoming Host I/O continually exceed
these values.
Next, we have a case where the Host I/O is above Host I/O Limit, but below Burst Limit. The Host IOPS
generated are somewhere in between these two limits.
When an IOPS Burst policy is configured, the policy will do the following:
Throttle the Host I/O so as to never allow IOPS to go above the Burst Limit ceiling. If the Burst Limit is
20%, then only 20% more IOPS are allowed at any point in time.
For this scenario the duration of the extra IOPS will nearly match the “For” setting.
Once all Extra IOPS have been consumed, the burst allowance ends and any Extra I/O calculations are
refreshed.
The Host I/O Limits are configured to be a maximum of 1000 IOPS with a burst percentage of 20. (1200)
The duration of the burst is 5 minutes and will refresh every hour.
We can see the Host target IOPS is around 1500, well above the Host I/O and Burst Limit settings. This is
the I/O that the host is performing. The blue line is what the Host I/O limit is so we will try to keep the I/O at
this limit of 1000 IOPS.
The Burst Limit is the limit that was calculated from the user input and is at 1200 IOPS. The policy will
never allow the IOPS to go above the burst limit. It also means that you will match the “For” window for the
duration period since the Host I/O is always above the other limits.
The IOPS come in and are throttled by the Host I/O Limit of 1000 IOPS.
IOPS continue up until the 42 min. mark where it comes to the 5 minute window where we allow I/O to
burst to 1200 IOPS during that period.
The I/O burst period starts, and a calculation is taken between minute 39 and 40 (60 secs). In that 60
secs, we allowed an extra 200 IOPS (1200 – 1000), so 200 * 60 produces the value of 12,000. So every
60-sec sample period will allow 12,000 extra I/Os.
Our “For” value is 5 minutes, so in a 5-minute period we should use our 60,000 extra I/Os (12,000 * 5 =
60,000). The 12,000 is subtracted from our total of 60,000 for each 60-sec period (60,000 – 12,000 =
48,000).
This continues for the frequency of the burst. Every 60 sec period subtracts an additional 12,000 IOPS
until the allotted extra I/O value is depleted.
Since the Host I/O rate was always above our calculated values during the period, the extra IOPS will be
used within the 5 minute window. Once the burst frequency ends, it will start again in 1 hour as determined
by the “Every” parameter.
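The extra-I/O arithmetic above, as an illustrative snippet (variable names are assumptions):

    base_iops = 1000
    burst_percent = 20
    for_minutes = 5

    extra_iops_per_sec = base_iops * burst_percent // 100   # 200 extra IOPS allowed while bursting
    extra_per_minute = extra_iops_per_sec * 60               # 12,000 extra I/Os per 60-sec sample
    burst_budget = extra_per_minute * for_minutes            # 60,000 extra I/Os per burst window

    # With host demand well above both limits, the full 200 extra IOPS are consumed every second,
    # so the budget lasts burst_budget // extra_per_minute = 5 minutes before throttling resumes.
    print(extra_per_minute, burst_budget)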
As Host I/O continues, we see at the 39-minute mark the start of the I/O burst, which in this case lasts a
10-minute period. The thing to note is the I/O does not cross 1100 IOPS, since this is all the I/O the host
was attempting to do. Also, since the number of extra IOPS consumed is smaller, the burst continues to run
for a longer period of time before the total extra I/O count is reached.
The total number of extra I/Os calculated from the original numbers is 60,000. Since only 100 extra IOPS
are consumed each second, each 60-sec period uses 6,000 of them. Effectively, this doubles the “For” time
because the extra I/O budget is consumed at half the rate.
So even though the “For” period was 5 minutes, the number of extra IOPS consumed was smaller, thus
allowing the burst to run for a greater period than the configured time.
The tenant traffic is separated by the associated VLANs providing tenant data separation and increasing
security. The tenant traffic is separated at the Linux Kernel layer.
Each tenant can have its own network namespace: IP addresses and port numbers, VLAN domain,
routing table, firewall, DNS, administrative servers, etc.
With the deployment of IP multi-tenancy, Service Providers are able to manage these tenant partitions
through the Unisphere GUI, CLI, and REST API.
Service Providers are also able to provide tenants with their own DNS server or other administrative
servers, allowing each tenant its own authentication and security validation from the protocol layer.
Once a tenant is created, NAS servers can be created for each of the tenant’s VLANs. Host configurations
that provide access to hosts, subnets, and netgroups can then be created and associated with the tenants.
These Host configurations are used to control the access of NFS clients to shared file systems. (Access
to SMB file systems is controlled through file and directory access permissions set using Windows
directory controls.)
Each Tenant has one or multiple NAS Servers, however, each NAS server can be associated with only
one tenant.
The association of the NAS servers with each tenant provides the desired isolated network, providing each
tenant with its own IP namespace. NAS Server IP addresses can then be configured without concern for
overlapping IP addresses. This is very helpful for the many Service Providers that need to accommodate
the tenants’ Data Path IP access requirements in the customer storage network.
Sometimes the IPV4 addresses must conform to the tenants' desired IP address schema (which may
mean same IPs for different tenants) for management of, and access to, storage objects. Example: a
tenant can have 10.10.10.10 as NAS Server IP while other tenants can have the same.
First it is recommended to create a storage pool for each tenant. Then the tenants should be added to the
system and each one must be assigned a non-overlapping set of VLANs.
Next, a NAS server should be created for each one of the tenants. The tenant must be associated with the
NAS server and its pool selected to store the NAS server’s metadata.
A tenant’s VLAN ID is also associated with the NAS server and the network interfaces for the NAS server
can be created at this point.
After the NAS server is configured, the file systems and shares can be created for each tenant. Windows
host access to the tenant’s SMB shares works as normal. Host access to the tenant's NFS shares requires
the configuration of a host profile.
The Add Tenant wizard is launched by clicking on the “add sign” link. Fill in the information on the Add
Tenant window. The fields with an asterisk cannot be blank.
You can choose whether to have the system automatically generate the UUID or to enter it manually.
Manual UUID input is useful to match the tenant on both source and destination systems of replication
deployment. Leave the box unchecked to have the UUID automatically generated by the system.
You must then select one or more VLANs to associate with the tenant. Click on the add button to launch
the Add VLAN window.
Type the VLAN ID or select it from the drop-down list and click on the Add button. The VLAN ID will be
displayed on the list of VLANs associated with the tenant.
Repeat the operation to associate more than one VLAN with the tenant.
Clicking the Ok button will commit the changes and add the tenant to the Unity system.
To see the details about a tenant, select it from the list and its details will be displayed on the right
pane.
VLANs can be added to or removed from a tenant configuration.
From the wizard window, select a tenant and the VLANs that are associated with the tenant.
The General Settings page allows the selection of the tenant to associate the NAS server with.
If a specific pool was created to store all of the tenant’s NAS servers metadata, it could be chosen from
this window as well.
It is also possible to select the SP the NAS server will run on.
NOTE: After the creation of a NAS server, it is not possible to change the tenant that is associated with
it. Also, if a NAS server was created without being associated with a tenant, it is not possible to add a
tenant to the configuration.
The interface will have an IP address, a subnet, and a gateway. The configuration will also allow the
selection of one VLAN ID used by the chosen tenant.
Some tenants might have more than one associated VLAN. These different VLANs used by the tenant can
be associated with the NAS server at its properties window later on.
From the Network tab it is possible to verify the network interface and VLAN used by the server. The
interface properties can be changed including the SP port used for communication, the network address,
and the tenant VLAN.
There is no need to create host configurations to control host access to SMB file systems. Access to
these storage resources is controlled through file and directory access permissions set using Windows
directory controls. Client authentication for Windows share access is controlled through Active
Directory.
In this new release the wizard used for creating each host configuration allows its association with a
tenant.
After the host configuration is created it must be associated with the file system share to have the host
access level defined.
The Advanced Static Routing feature provides the capability of configuring additional static routes by
allowing each NAS Server interface to have its own routing table. This enables NAS Servers to be
accessible from different isolated IP networks.
In environments that have multiple gateways, and each gateway is used to access different subnets, routing
tables are necessary to tell the system where to route the packets.
For example, if an interface is created with the IP address 192.168.64.5, a 24-bit subnet mask, and a
default gateway of 192.168.64.254, a basic routing table is automatically created. A Local route is
created so that all traffic to the 192.168.64.0/24 network will use interface 192.168.64.5. All other
traffic will be sent to the gateway 192.168.64.254, and the gateway will determine where it goes.
For example, a static host route can be created to determine that the host with IP address 192.168.64.101
should be accessed through the interface with IP address 192.168.64.6. Another route can be created
ruling that subnet 192.168.65.0/24 should be accessed through the gateway 192.168.65.254.
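A small, illustrative model of how such a table resolves a destination (longest-prefix match), using only the routes from the examples above; this is not the NAS server's code:

    import ipaddress

    routes = [
        ("host",    ipaddress.ip_network("192.168.64.101/32"), "interface 192.168.64.6"),
        ("network", ipaddress.ip_network("192.168.65.0/24"),   "gateway 192.168.65.254"),
        ("local",   ipaddress.ip_network("192.168.64.0/24"),   "interface 192.168.64.5"),
        ("default", ipaddress.ip_network("0.0.0.0/0"),         "gateway 192.168.64.254"),
    ]

    def next_hop(destination):
        dest = ipaddress.ip_address(destination)
        # The most specific (longest prefix) matching route wins.
        matches = [(net.prefixlen, hop) for _, net, hop in routes if dest in net]
        return max(matches)[1]

    print(next_hop("192.168.65.20"))    # gateway 192.168.65.254 (network route)
    print(next_hop("192.168.64.101"))   # interface 192.168.64.6 (host route)
    print(next_hop("10.0.0.1"))         # gateway 192.168.64.254 (default route)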
In this example, there were three interfaces created for a NAS server. A default gateway was not provided
when creating the third interface.
Observe that routes were automatically created for the three interfaces. A Local and a default route were
created for the first interface. A similar configuration with a Local and default route was also provided for
the second interface. However, only a Local route was created for the third interface.
On the top of the table there are options to add a route, remove a selected route, refresh the list, and edit
the properties of a selected route.
First you must use the drop-down list to choose one of the existing interfaces. Up to 20 routes can be
created for each interface.
Once the interface is chosen the gateway will reflect the default gateway associated with the interface (if
one was provided during its creation).
There are three route types to choose from: the default route, which establishes a static route to a
default gateway; the host route, which establishes a static route to a specific host; and the network
route, which establishes a static route to a particular subnet.
In this example we are creating a static route for the network 192.168.1.0 using the 192.168.64.5 interface.
Observe that a subnet mask must be provided for the network.
Observe that in both cases, the system will calculate the subnet mask.
The page will display all the basic routes (Local and default) automatically generated for the NAS server
interfaces, and any routes added using the System Global Settings window, from the Access > Routing >
Manage Routes path as demonstrated on the previous slides.
The More Actions option at the top of the list offers the possibility to filter the list to show only Host
or Net routes that were created.
The feature can be enabled on a NAS server using the Unisphere interface or when creating a NAS server
via UEMCLI. By default, the feature is disabled. The feature starts working immediately after being
enabled, with no need to perform a system reboot.
Packet reflect will only work for the network traffic that is client-initiated. Any communication initiated from
the Unity system still requires consulting the route tables and ARP tables prior to sending outbound
packets.
First, the NAS client sends a request packet to the NAS server.
The ingress request packet containing the local IP, remote IP, and next-hop MAC address is cached. Unity
leverages this information to send the outbound packet to the proper location.
The NAS server sends the egress packet reply using the cached information and the same interface used
for the request packet.
To enable the feature you must click on the pencil icon beside the feature status.
On the Change Packet Reflect Settings window select Enabled and click OK to close the window and
immediately commit the change.
The capacity of thin file systems can be extended with changes to the advertised capacity and actual
allocation.
Thin file systems can be automatically extended based on the ratio of used-to-allocated space. This
operation happens without user intervention and does not change the advertised capacity.
A thin-provisioned file based storage resource may appear full when data copied or written to the storage
resource is greater than the space available at that time. When this occurs, the system begins to
automatically extend the storage space and accommodate the write operation. As long as there is enough
extension space available, this operation will complete successfully.
The system automatically allocates space for a thin UFS64 file system as space is consumed. Auto-extend
happens when the space consumption threshold is hit. The threshold is the percentage of used space
relative to the whole file system's allocated space (the system default value is 75%). The allocation
cannot exceed the file system's visible size. Only the allocated space increases, not the file system's
provisioned size; the file system cannot auto-extend past the provisioned size.
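A minimal sketch of the auto-extend trigger described above; the 75% default comes from the text, while the extend step size is an illustrative assumption:

    def maybe_auto_extend(used_gb, allocated_gb, provisioned_gb, threshold=0.75, step_gb=10):
        """Grow the allocated space when usage crosses the threshold, never past the provisioned size."""
        if allocated_gb and used_gb / allocated_gb >= threshold:
            return min(allocated_gb + step_gb, provisioned_gb)   # extend allocation, capped at visible size
        return allocated_gb                                      # below threshold: no change

    # Example: 80 GB used of 100 GB allocated (80% >= 75%) on a 500 GB thin file system.
    print(maybe_auto_extend(80, 100, 500))   # 110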
These shrink operations can be manually initiated by the user for both thin and thick file systems.
Automatic shrink operations are initiated only on thin file systems when the Unity system identifies
allocated but unused storage space that can be reclaimed back to the storage pool.
The file system has 250 GB of Used Space. The system performs any evacuation that is necessary in
order to allow the shrinking process on the contiguous free space.
Observe that the only space that is reclaimed is the portion of the shrink that was in the original allocated
space of 450 GB. That was because the remaining 550 GB of the original thin file system was virtual
space advertised to the client.
The properties of the File System can be launched by double-clicking the File System from the list or by
hitting the pencil icon from the menu on the top of the File Systems list.
From the General tab the size of the File System can be extended by increasing the Size field. To shrink
the File System you must decrease the Size field. The Apply button must be hit in order to commit the
changes.
The change to the File System configuration (size and percentage of allocation space) will be displayed in
the list.
In this example the Thin File System has its size manually extended by 10GB.
Limiting usage is not the only application of quotas. The quota tracking capability can be useful for tracking
and reporting usage by simply setting the quota limits to zero.
Quota limits can be designated for users, or a directory tree. Limits are stored in quota records for each
user and quota tree. Limits are also stored for users within a quota tree.
The File Size quota policy calculates the disk usage based on logical file sizes in 1 KB increments.
The Block quota policy calculates the disk usage in 8 KB file system blocks.
Hard and soft limits are set on the amount of disk space allowed to be used.
Default quota settings can be configured for an environment where the same set of limits is applied to
many users. The following parameters can be configured at the Manage Quota Settings window: the quota
policy, and the default user quota limits: soft limit, hard limit, and grace period.
The soft limit is a capacity threshold above which a countdown timer will begin. While the soft limit may be
exceeded, this timer, or grace period, will continue to count down as long as the soft limit is exceeded. If
the soft limit remains exceeded long enough for the grace period to expire, no new data may be added to
the particular directory or by the particular user associated with the quota. However if sufficient data is
removed from the file system or directory to reduce the utilization below the soft limit before the grace
period expires, access will be allowed to continue as usual.
A hard limit is also set for each quota configured. Upon reaching a hard limit, no new data will be able to
be added to the file system or directory. When this happens, the quota must be increased or data must be
removed from the file system before additional data can be added.
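A simplified model of the soft limit, grace period, and hard limit behavior described above (names and time handling are assumptions):

    import time

    def quota_allows_write(used_gb, soft_gb, hard_gb, soft_exceeded_since, grace_seconds):
        """Return True if new data may still be added under this quota."""
        if used_gb >= hard_gb:
            return False                                   # hard limit: writes blocked immediately
        if used_gb >= soft_gb and soft_exceeded_since is not None:
            elapsed = time.time() - soft_exceeded_since
            return elapsed < grace_seconds                 # blocked once the grace period expires
        return True                                        # below the soft limit: writes allowed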
In this example, a user quota was configured on a file system for a particular user. The Soft Limit is 20GB,
the Grace Period is 1 day and the Hard Limit is 25GB. The user copies 16GB of files to the file system.
Since the capacity is below the user’s quota, the user can still add more files to the file system.
The storage administrator receives an alert in Unisphere stating that the soft quota for this user has been
crossed.
The Grace Period of 1 day begins to count down. At this point the user is still able to add additional data to
the file system.
However, the user must remove data prior to the expiration of the Grace Period for the usage to fall below
the soft limit.
The transfer of additional data to the file system is interrupted until usage drops below the allowed soft
limit.
Note: Snapshots are not full copies of the original data. It is recommended that you do not rely on
snapshots for mirrors, disaster recovery, or high-availability tools. Because snapshots of storage
resources are partially derived from the real-time data in the relevant storage resource, snapshots can
become inaccessible (not readable) if the primary storage becomes inaccessible.
If the snapshot is writable, any writes are handled in a similar manner; stripe space is allocated from the
parent pool and the writes are redirected in 8-kilobyte chunks to the new space. Reads of newly written
data are also serviced from the new space.
Storage space is needed in the pool to support snapshots as stripes are allocated for redirected writes.
Because of the on-demand stripe allocation from the pool, snapped thick file systems will transition to thin
file system performance characteristics.
It is also possible to copy a snapshot. In this example, the 4 o’clock snapshot is copied and other than
having a unique name, the copy will be indistinguishable from the source snapshot and both capture
identical data states.
It is also possible to nest copied read/write attached snapshots that form a hierarchy of snapshots to a
maximum of 10 levels deep.
Snapshots of a file system can be created either as read-only or read/write and are accessed in different
manners which will be covered later.
Copies of snapshots are always created as read/write snapshots. The read/write snapshots can be shared
by creating an NFS export or SMB share to them. When shared, they are marked as modified to indicate
their data state is different from the parent object.
It is also possible to nest copied and shared snapshots that form a hierarchy of snapshots to a maximum
of 10 levels deep.
For Block, the snapshot is created on a LUN or a group of LUNs in the case of a Consistency Group. For
File, the snapshot is configured on a file system. For VMware the storage resource is either going to be a
LUN for a VMFS datastore or a file system for an NFS datastore. When creating each of these storage
resources, the Unity system provides a wizard for their creation. Each wizard provides an option to
automatically create snapshots on the storage resource. Each resource snapshot creation is nearly
identical to the other resources.
For storage resources already created, snapshots can be manually created for them from their Properties
page. As with the wizard, the snapshot creation from the storage resource Properties page is nearly
identical to the other resources. The following few slides will show snapshot creation within the Block
storage LUN creation wizard and the File storage file system creation wizard. It will also show creating
manual snapshots from the LUN and file system Properties pages.
Video demonstrations will be provided showing all forms of storage resource snapshot creation.
The hosts can now be reconnected to the resources they were connected to prior to the restore and
resume normal operations.
Before performing a restore operation, disconnect clients from any of the file system snapshots. Also
quiesce IO to the file system being restored.
The connections and IO to the resources can now be resumed for normal operations.
Before performing an Attach to host operation, the host being attached will need to have connectivity to
the Unity array.
Now the Attach operation can be performed. The first step is to select a snapshot to attach to. The next
step is to select an Access Type, either Read-Only or Read/Write. Then the host or hosts are selected to
be attached. Next, the system will optionally create a copy of the snapshot if a Read/Write Access Type
was selected. Thus preserving the data state of the snapshot prior to attach.
Before performing a Detach operation, allow any outstanding read/write operations from the snapshot-
attached host to complete.
A host has to have connectivity to the storage, either via fibre channel or iSCSI, and be registered. Next,
from the Snapshots tab, a snapshot is selected and the snapshot operation Attach to host needs to be
performed.
Now tasks from the host will need to be done. The host will need to discover the disk device that the
snapshot presents to it. Once discovered, then the host can access the snapshot as a disk device.
On the storage system an NFS and/or SMB share will have to be configured on the read/write snapshot of
the file system. This task is done from their respective pages.
Now tasks from the client will need to be done. The client will need to be connected to the NFS/SMB share
of the snapshot. Once connected, then the client can access the snapshot shared resource.
A short video follows that demonstrates client access to a file system snapshot.
The first task for an NFS client is to connect to an NFS share on the file system. Access to the read-only
snapshot is established by accessing the snapshot’s hidden .ckpt data path. This path will redirect the
client to the point-in-time view that the read-only snapshot captures.
Similarly, the first task for an SMB client is to connect to an SMB share on the file system. Access to the
read-only snapshot is established by the SMB client accessing the SMB share’s Previous Versions tab.
This will redirect the client to the point-in-time view that the read-only snapshot captures.
Because the read-only snapshot is exposed to the clients through the CVFS mechanism, the clients are
able to directly recover data from the snapshot without any administrator intervention. For example, if a
user either corrupted or deleted a file by mistake, that user could directly access the read-only snapshot
and get an earlier version of the file from the snapshot and copy it to the file system to recover from.
A short video follows that demonstrates client access to a file system read-only snapshot.
Remote Replication is one method that enables data centers to avoid disruptions in operations. In a
disaster recovery scenario, if the source site becomes unavailable, the replicated data will still be available
for access from the remote site. Remote Replication uses a Recovery Point Objective (RPO), which is an
amount of data, measured in units of time, used to perform automatic data synchronization between the
source and remote systems. The RPO for asynchronous replication is configurable. The RPO for synchronous
replication is set to zero. The RPO value represents the acceptable amount of data that may be lost in a
disaster situation. The remote data will be consistent to the configured RPO value. The minimum and
maximum RPO values are 5 minutes and 1440 minutes (24 hours).
Remote Replication is also beneficial for keeping data available during planned downtime scenarios. If a
production site has to be brought down for maintenance or testing the replica data can be made available
for access from the remote site. In a planned downtime situation, the remote data is synchronized to the
source before being made available and there is no data loss.
The main focus of this training is with remote replication since it has more elements to configure, create
and manage.
Fundamental to remote replication is connectivity and communication between the source and destination
systems. A data connection is required to carry the replicated data and it is formed from Replication
Interfaces. They are IP-based connections established on each system. A communication channel is also
required to manage the replication session. The management channel is established on Replication
Connections. It defines the management interfaces and credentials for the source and destination
systems.
Asynchronous Replication architecture utilizes Unified Snapshots. The system creates two snapshots for
the source storage resource and two corresponding snapshots on the destination storage resource. These
system created snapshots cannot be modified. Based on the replication RPO value the source snapshots
are updated in an alternating fashion to capture the source data state differences, known as deltas. The
data delta for the RPO timeframe is replicated to the destination. After the data is replicated the
corresponding destination snapshot is updated. The two corresponding snapshots capture a common data
state, known as a common base. The common base can be used to restart a stopped or interrupted
replication session.
The first step of the initial process for asynchronous replication is to create a storage resource of the
exact same capacity on the destination system. The storage resource is created automatically by the
system and contains no data.
In the next step, corresponding snapshot pairs are created automatically on the source and destination
systems. They capture point-in-time data states of their storage resource.
The first snapshot on the source system is used to perform an initial copy of its point-in-time data state to
the destination storage resource. This initial copy can take a significant amount of time if the source
storage resource contains a large amount of existing data.
Once the initial copy is complete, the first snapshot on the destination system is updated. The data states
captured on the first snapshots are now identical and form a common base.
In the synchronization process, the second snapshot on the source system is updated, capturing the
current data state of the source.
A data difference, or delta is calculated from the two source system snapshots and a differential copy is
made from the second snapshot to the destination storage resource.
After the differential copy is complete, the second snapshot on the destination system is updated to form a
common base with its corresponding source system snapshot.
The cycle of differential copies continues for the session by alternating between the first and second
snapshot pairs based on the RPO value. The first source snapshot is updated, the data delta is calculated
and copied to the destination, the first destination snapshot is updated forming a new common base. The
cycle repeats using the second snapshot pair.
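The alternating-snapshot cycle can be summarized with a toy model (plain Python dictionaries stand in for snapshots and block maps; this is not Unity's internal logic):

    def async_cycle(source_blocks, dest_blocks, src_snaps, dst_snaps, active):
        """One RPO cycle: refresh the active source snap, ship the delta, refresh the matching dest snap."""
        src_snaps[active] = dict(source_blocks)              # capture the current source data state
        base = src_snaps[1 - active]                         # the existing common base from the last cycle
        delta = {k: v for k, v in src_snaps[active].items() if base.get(k) != v}
        dest_blocks.update(delta)                            # differential copy of only the changed blocks
        dst_snaps[active] = dict(dest_blocks)                # destination snapshot updated: new common base
        return 1 - active                                    # alternate snapshot pairs on the next cycle

Calling this once per RPO interval alternates between the two snapshot pairs, mirroring the cycle described above.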
The same fundamental remote replication connectivity and communication between the source and
destination systems seen earlier for asynchronous remote replication are also required for synchronous
replication. A data connection to carry the replicated data is required and is formed using fibre channel
connections between the replicating systems. A communication channel is also required to manage the
replication session. For synchronous replication, part of the management is provided using Replication
Interfaces that are IP based interfaces for SPA and SPB using specific Sync Replication Management
Ports. The management communication between the replicating systems is established on a Replication
Connection. It defines the management interfaces and credentials for the source and destination systems.
Synchronous Replication architecture utilizes Write Intent Logs (WIL) on each of the systems involved in
the replication. These are internal LUNs created automatically by each system. There is a WIL for SPA
and one for SPB on each system. They hold fracture logs that are designed to track changes to the source
LUN should the destination LUN become unreachable. When the destination becomes reachable again it
will automatically recover synchronization to the source using the fracture log, thus avoiding the need for a
full synchronization.
The first step of the initial process for synchronous replication is to create a storage resource of the exact
same capacity on the destination system. The storage resource is created automatically by the system
and contains no data.
In the next step, SPA and SPB Write Intent Logs are automatically created on the source and destination
systems.
An initial synchronization of the source data is then performed. It copies all of the existing data from the
source to the destination. The source resource is available to production during the initial synchronization
but the destination will be unusable until the synchronization completes.
Once the initial synchronization is complete, the process to maintain synchronization begins. When a
primary host writes to the source the system delays the write acknowledgement back to the host. The
write is replicated to the destination system. Once the destination system has verified the integrity of the
data write it sends an acknowledgement back to the source system. At that point, the source system
sends the acknowledgement of the write back to the host. The data state is synchronized between the
source and destination. Should recovery be needed from the destination, its RPO would be zero.
Should the destination become unreachable, the replication session will be out of synchronization. The
source Write Intent Log for the SP owning the resource will track the changes. When the destination
becomes available the systems will automatically recover synchronization using the WIL.
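The write sequence and the write intent log fallback can be summarized in a brief conceptual sketch (the callables are placeholders, not Unity internals):

    def synchronous_write(block, apply_local, replicate_remote, log_to_wil):
        """Model of one synchronous write: the host ack is delayed until the remote copy is verified."""
        apply_local(block)                        # write lands on the source system
        if replicate_remote(block):               # destination verifies the data and acknowledges back
            return "acknowledged to host"         # only now is the host ack sent; destination RPO is zero
        log_to_wil(block)                         # destination unreachable: the fracture log tracks the change
        return "out of sync (WIL tracking changes)"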
An Active session state indicates normal operations and the source and destination are In Sync.
A Paused session state indicates the replication has been stopped and will have a Sync State of
Consistent indicating the WIL will be used to perform synchronization of the destination.
A Failed Over session will have one of two Sync States. It can show Inconsistent meaning the Sync State
was not In Sync or Consistent prior to the Failover. If the Sync State was In Sync prior to Failover, it will be
Out of Sync after session Failover.
A Lost Sync Communications session state indicates the destination is unreachable. It can have any of the
following Sync States: Out of Sync, Consistent or Inconsistent.
A Sync State of Syncing indicates a transition from Out of Sync, Consistent or Inconsistent due to the
session changing to an Active state from one of its other states; for example if the system has been
recovered from the Lost Sync Communications state.
For Block, the replication is created on a LUN or a group of LUNs in the case of a Consistency Group. For
File, the replication is configured on a NAS server and file systems. For VMware the storage resource is
either going to be a LUN-based VMFS datastore or a file system-based NFS datastore. When creating
each of these storage resources, the Unity system provides a wizard for their creation. Each wizard
provides an option to automatically create the replication on the resource. Each resource replication
creation is nearly identical to the other resources.
For resources already created, replications can be created manually from their Properties page. As with
the wizard, the replication creation from the resource Properties page is nearly identical to the other
resources. The following few slides will show replication creation within the Block storage LUN creation
wizard and the File storage file system creation wizard. It will also show replications created manually
from the LUN and file system Properties pages.
Video demonstrations will also be provided for the resource replication creation.
For Asynchronous Replication, the Replication Interfaces are dedicated IP-based connections between
the systems that will carry the replicated data. The interfaces are defined on each SP using IPv4 or IPv6
addressing and will establish the required network connectivity between the corresponding SPs of the
source and destination systems. The Replication Connection pairs together the Replication Interfaces
between the source and destination systems. It also defines the replication mode between the systems;
Asynchronous, Synchronous or both. The connection is also configured with the management interface
and credentials for both of the replicating systems.
The CLI console can be used to verify the Fibre Channel port that the system has specified as
the Synchronous FC Port on the SPs. The slide shows an example of running the UEMCLI command
“/remote/sys show -detail”. In the abbreviated example output, Fibre Channel Port 4 is
specified by the system as the Synchronous FC port for SPA and SPB.
Once the Synchronous FC Ports on the source and destination systems have been verified, Fibre Channel
connectivity can be established between the corresponding ports on the SPs of each system. Direct
connect or zoned fabric connectivity is supported.
Although the Synchronous FC ports will also support host connectivity, it is recommended that they be
dedicated to Synchronous Replication.
https://edutube.emc.com/Player.aspx?vno=Ad0hF2+GaFno6QWhy1U3MQ==&autoplay=true
https://edutube.emc.com/Player.aspx?vno=IfZJCQAp4dEjP7GUYOL4XA==&autoplay=true
The example is from an asynchronous file system replication that is in a normal state. From the source it is
possible to perform session Pause, Sync or Failover with Sync operations. From the destination it is only
possible to perform a session Failover operation.
The example illustrates the order of failover for a NAS Server and an associated file system. Failover must
be done first to the NAS Server, then to its associated file system. The same is true for the Resume
operation after Failover. The Resume operation is initiated first on the NAS Server then the associated file
system. Failback is done in the same order; first to the NAS Server then to the associated file system.
The example illustrates the process of the operation. It starts with issuing the Failover with Sync operation
from Site A which is the primary production site.
The example illustrates the process of the operation. The primary production site becomes unavailable
and all its operations cease. Data is not available and replication between the sites can no longer proceed.
The example illustrates the process of a Resume operation for a session that is failed over. The Site A
replicated object must be available before the replication session can be resumed.
The example illustrates the process of a Failback operation. The Site A replicated object must be available
before the Failback operation can be initiated on a session.
AppSync creates write-consistent/application-consistent snapshots on the Unity array for each application
you add to a service plan.
Once this process is complete, you will log in to the server and register resources such as vCenter
Servers, the Unity array, and RecoverPoint Appliances. In most cases the plug-in is pushed to the host, but
the plug-in installation can be performed directly on the host if required in a Windows environment.
The client then reports back the applications resident on the Unity array that are eligible to be
replicated by AppSync. Once the clients have been added, they can be subscribed to a Bronze (local),
Silver (remote),
or Gold (local & remote) service plan. From that point forward replication will automatically take place at
the interval specified in the AppSync Service Plan settings. The resultant replicas may then be used for
reporting, backup, or test development environments.
The four user roles are Security Administrator, Resource Administrator, Service Plan Administrator, and Data Administrator. User roles are cumulative, not hierarchical. The included “admin” administrator account has all of these roles. In an application-driven scenario, DBA and messaging admins could be given the Data Administrator privilege to run service plans and act on the resultant replicas. The Storage Administrator would retain control over the AppSync server and storage arrays by having the Security and Resource Administrator roles. The Service Plan Administrator role would be given to whichever type of admin will be registering the applications for protection and configuring their RPOs.
• Starter Bundle license - Allows you to use an unlimited amount of storage on a specific array, with some restrictions on the features supported by AppSync.
• Volume Software License (VSL) - Allows you to use a specific amount of storage regardless of the
array type.
• Unlimited license for DPS - Similar to a VSL license, but without any enforcement on capacity or utilization. Allows you to use storage from any type of array configured in AppSync, with no limitation on the amount of storage from each array.
For RecoverPoint and VPLEX, no separate license is required. License checks are performed on the
back-end array.
There are several use cases for the feature. It provides the ability to load balance between pools. For
example, if one pool is reaching capacity, the feature can be used to move LUNs to a pool that has more
capacity. It can also be used to change the storage characteristics for a LUN. For example, a LUN could
be moved from a pool comprised of a disk type and RAID scheme to a pool having a different disk type
and RAID scheme.
Another use of the feature is for data compression of an existing LUN. For example, an existing
uncompressed LUN on an all Flash pool can be moved within the same pool to use the inline compression
feature which will compress the LUN’s data during the move.
Also, if replication is configured on a LUN, that replication configuration will not be present on the moved LUN after the move. The graphic here details the LUN attributes and configurations that will and will not be preserved by the move.
Another aspect of a LUN move is the LUN type, Thick or Thin. If a Thick LUN is moved with the GUI, the resultant moved LUN will be a Thin LUN. If a Thick LUN is moved with UEMCLI, the moved LUN will remain Thick unless the -thin yes option is specified.
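Below is a hedged sketch of that UEMCLI move, assuming illustrative LUN and pool IDs (sv_1, pool_2) and placeholder credentials; the exact object path and options can vary by OE release, so confirm them against the Unisphere CLI User Guide before use.

uemcli -d <mgmt_ip> -u Local/admin -p <password> /stor/prov/luns/lun -id sv_1 modify -pool pool_2

Run this way against a Thick LUN, the moved LUN stays Thick; appending -thin yes converts it to a Thin LUN as part of the move, matching the GUI behavior described above.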
The Local LUN Move feature utilizes Transparent Data Transfer (TDX) technology. It was introduced with
the initial release of Unity for VVol operations. TDX is a transparent data copy engine that is multi-
threaded and supports online data transfers. The data transfer is designed so its impact to host access
performance is minimal. TDX makes the LUN move operation transparent to a host.
While TDX is transferring the data for the move, the host still has access to the whole LUN as a single entity.
The feature supports moving a standalone LUN or LUNs within a Consistency Group.
Additionally, if a LUN Move session is ongoing, the Unity system cannot be upgraded.
Move sessions have Priority settings defined when the session is created. The possible priority settings
are: Idle, Low, Below Normal, Normal, Above Normal, and High.
The TDX resources used by the move operations are multi-threaded. Up to 16 move sessions can be active at once; TDX multiplexes them onto 10 concurrent transfers based on session priority.
The session configuration simply involves selecting the target Pool to move the LUN into and selecting a
Priority for the move.
Once initiated, the move operation starts automatically. The operation will then continue to completion and
cannot be paused or resumed.
The move is completely transparent to the host. There are no actions or tasks to be performed on the host
for the move. Once the move is completed, the session is automatically cutover and the host data access
to the LUN continues normally.
The other method is only available to LUNs built on Flash disks and involves Inline Compression. For a
LUN built on Flash disks, from the General tab of the LUN Properties page there is an option for
Compression. When the option is enabled, this adds a Compress Now option to the More Actions
dropdown list for the LUN. When the Compress Now option is selected, the Compress Now confirmation
screen is displayed with information that the LUN will be moved within the same pool and its data will be
compressed during the move operation.
From the LUNs page, with the LUN selected that is being moved, the right side pane displays move
session information. The move Status and Progress are displayed. The Move Session State, its Transfer
Rate and Priority are also shown.
From the General tab of the LUN’s Properties page, the same information is displayed. The page does
provide the added ability to edit the session Priority setting.
In the example we have launched Unisphere and selected “Service” from the available “System” options.
Menu options include Overview (default), Service Tasks, Technical Advisories, and Logs.
We can tell from this view that all Storage Processors are running in normal mode, and we can see the
software version and serial number of the Unity system.
Tasks available here include Test or Change ESRS, Refresh or Review Support Contracts, change
Support Credentials, and change Contact Information. These tasks can also be completed in the settings
window.
Let’s first take a look at the various storage system tasks. To assist with diagnosing and resolving
problems with your system, users should collect service information about the system and save it to a file.
The file can be used by your service provider to analyze your system.
The example shows the “Collect Service Information” task has been selected (highlighted in blue). To view additional information, select “More information”, which will launch a help page in Unisphere. To run the service task, select the blue “Execute” button.
Select the “+” icon to create a new service data collection file. The “Collect Service Information” window will open and a job will be created and run. It can take up to 10 minutes for the data collection to complete. Once completed, you can then choose to open or save the file to a location of your choice. By default, selecting Save will save the file to the logged-in user's “Downloads” directory. For example, if I'm logged in as Administrator, the file is saved to the Administrator > Downloads directory.
To download an existing file, highlight the file you want to download and select the “download” icon in the
upper left hand corner. You can then choose to open or Save the file to a location of your choice.
It is a best practice to save the configuration settings after each major configuration change to ensure you have a current copy of the storage system configuration settings. It is also recommended that you save the file to a remote location as a backup against possible failures. Be aware that you can only request one configuration save at a time; when making multiple save requests, allow one request to complete before initiating the next. Note that only details about your system configuration are saved to the file and you cannot restore your system from this file. The save process allows a maximum of 120 minutes in which to complete the capture. The system will abort uncompleted capture requests upon reaching this time limit.
• System specifications
• Users
• Installed licenses
• Storage resources
• Storage servers
• Hosts
In this example, the “Storage” tab has been selected and the various sub-menus are displayed. Configuration files are key for service providers when analyzing a system configuration.
You cannot restart the management software when both Storage Processors (SPs) are in Service Mode.
Important: Reinitializing will destroy all system configuration settings and stored data on the storage system. It is recommended that you back up all data and configuration settings to an external storage system. Once the system is reinitialized, copy or restore all the data back to it. Unisphere may time out when the system starts reinitializing. Allow 90 to 120 minutes for the system to be reinitialized.
The shutdown process can take between 10 and 20 minutes to complete. During this time, the connection to the system will be lost and you will not have access to Unisphere or the online help. It is important that you print the power-up instructions from the help menu to be sure you have all of the information you need to power up the system. You will need to physically remove power from the SPs and then reconnect them to power the system back up. The system has shut down and power can be removed from the SPs when the Status Fault LED is blinking amber and the power LED is solid green.
Important: Make sure to read all instructions before performing this service task. Users will have to enter
the service password to complete the power down operation.
Note the power up procedure must be performed in a particular order. Click Help to review and print the
power up procedure prior to shutting down the system.
Important: Hardware upgrade procedures may take up to 120 minutes. This procedure will involve full
shutdown of the storage system, halting of all I/O services, and physically replacing hardware
components.
The example shows the “Hardware Upgrade” wizard launched on a Unity 300F storage system. Available options include any Unity storage system with a higher capacity. Remember this is an offline procedure. I/O will be interrupted and data will become unavailable!
Once you select the task and click Execute, you will be prompted for the service password. Wait at least
10 minutes for the SP to enter service mode and do not attempt any actions in Unisphere until it has
completed.
To verify that the SP is in service mode, check that the Mode field displays Service. Here, both SP Modes
are reported as Normal. Note that Unisphere may not refresh automatically. If prompted, reload
Unisphere. If not, refresh the browser manually. To physically confirm the SP is in service mode, ensure
the SP fault LED flashes alternating amber and blue.
Once you highlight the task and click Execute, you will be prompted for the service password. After waiting a few minutes and refreshing Unisphere, confirm that the SP reboot has completed by checking the SP Mode field. It should display Normal.
Please refer to the EMC IPMI Tool Technical Notes document available on support.emc.com for the
complete details.
When an SP is in service mode it stops servicing I/O to hosts. In physical deployments, all NAS servers on
the SP fail over to the other SP, if it is healthy. By default, when reimaging has completed, the NAS
servers fail back to the SP. If the Failback Policy is disabled, all NAS servers on the SP will not fail back
automatically and will remain on a single SP. Performance can degrade significantly when all NAS servers
reside on a single SP. You can fail back the NAS servers manually.
While in the “Reset and Hold” state, the SP stops I/O services. All storage resources and NAS servers on
the SP fail over to the other SP, if it is healthy. When the SP returns to the Normal Mode, by default the
storage resources and NAS servers fail back to it with minimal disruption to hosts and I/O services
resume.
An SP that is held in reset cannot be rebooted from Unisphere unless Unisphere can communicate with
the peer SP. If the peer SP is not running, the SP that is held in reset would need to be physically power
cycled in order to reboot it.
Note that you can customize the view as well as sort and filter data by date and time, user, source SP,
Category and Message.
You can also configure remote logging by selecting the “Manage Remote Logging” link, which navigates directly to the Specify Remote Logging Configuration section in the System Settings menu.
To configure remote logging, the remote host must be accessible from the storage system. By default, the
storage system transfers log information on port 514 using the UDP protocol. For more information on
setting up and running a remote syslog server, refer to the documentation for the operating system
running on the remote system.
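As an example of the receiving side, below is a minimal sketch for a Linux remote host running a recent rsyslog release; the file path, array management address, and log file name are illustrative assumptions and should be adapted to your environment.

# Enable a UDP listener on port 514 and route the array's messages to a dedicated file
cat <<'EOF' | sudo tee /etc/rsyslog.d/unity.conf
module(load="imudp")
input(type="imudp" port="514")
if $fromhost-ip == '<array_mgmt_ip>' then /var/log/unity.log
& stop
EOF
sudo systemctl restart rsyslog

After restarting the service, also confirm that any host firewall allows inbound UDP on port 514.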
For physical and virtual deployments, you can access support options through your product's support
website. To register your system, download licenses for your storage system, or obtain update software,
you must first establish a support account.
If you are using a Community Edition, go to the EMC Community Network website.
The EMC Community Network website includes product-specific communities that include relevant
discussions, links to documentation and videos, events, and more. The community not only provides you
more information on the products but also helps guide you through specific issues you may be
experiencing.
There are several ways for viewing alerts on a Unity storage system. On the left Dashboard menu,
navigate to the EVENTS section and select “Alerts”. The system displays all alerts and a brief description
of the alert in the right window.
Another method is to click on the “bell” from the top menu. A window will appear displaying recent alerts
and provide links to “Search the Knowledge Base” or “View All Alerts”.
Users also have the option to “Customize” the Dashboard view and add a “System Alerts” view block (not shown). With this option, when the Unisphere Dashboard is selected, the view block on the dashboard shows these alerts categorized by critical, error, and warning (see the next slide). Clicking one of the icons will open the Alerts page showing the records filtered by the chosen severity level.
The view block on the dashboard shows these alerts categorized by critical, error, and warning which will
be better explained on the next slide. Clicking one of the icons will open the Alerts page showing the
records filtered by the chosen severity level.
Shown here is a table providing an explanation about the alert severity levels from least to most severe.
Logging levels are not configurable.
Details about the selected alert record will be displayed in the right pane. The information will include the
time the event was logged, severity level, alert message, description of the event, acknowledgement flag,
the component that was affected by the event, and the current status of the component.
Once selected, the Software and Licenses menu is displayed. Use the scroll bar to scroll down and view all licenses. A “green” icon indicates an installed license for the respective feature/function. A “red” icon indicates the license for the feature/function is not installed or is not valid.
Users have the option of selecting the blue text to “Install License” or “Get License Online”.
In the example we see the issued license does not include Data at Rest Encryption. Unity storage systems are orderable as either encrypted or non-encrypted. The encryption state is set the first time a license is applied, and you cannot apply another license at a later time to enable or disable it. A destructive re-initialization would be required to change the encryption state.
Select “Install License” and follow the wizard to locate and install the requested license.
To “Get License Online” users must have a valid support account to download the license.
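The same two steps can also be sketched from UEMCLI; this is illustrative only, with a placeholder license file name and credentials, and the exact switches should be verified in the Unisphere CLI User Guide for your OE release.

# Upload and install a license file obtained from the support site (illustrative)
uemcli -d <mgmt_ip> -u Local/admin -p <password> -upload -f <license_file>.lic license

# Confirm which features are now licensed (illustrative)
uemcli -d <mgmt_ip> -u Local/admin -p <password> /sys/lic show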
Selecting “Start Upgrade” prompts the user to select a file to upload to the server. Users can use the “Browse” option to search for the location of the file. When you upload a new upgrade file onto your system, it replaces the previous version, since there can only be one upgrade candidate on the system at a time. In the example, the current version of code is displayed along with the release date of the version.
Selecting “Download New Software” brings the user to the support page where the latest released version
of the Unity OE upgrade file is located.
You will be prevented from using Unisphere or the CLI to make configuration changes to the system while
the upgrade is in progress. Also note, that Unisphere is temporarily disconnected during the upgrade when
the primary storage processor reboots, and may take a few minutes to be automatically reconnected.
Note: Automatic Reboot of SPs: The default option during a software upgrade is to automatically reboot
both storage processors, one-at-a-time, as soon as the software upgrade image is staged and the system
is prepared for upgrade. If you would like tighter control over when the reboots happen, you can clear this option so that the upgrade can be started and staged, but neither storage processor will reboot until you are ready.
Doing so reduces the duration of the window (approximately by 10-20%) when the storage processors
could be rebooting, which makes it easier to plan for a time of reduced activity during the upgrade. If that
window is not a factor during your upgrade, then leave the default option of rebooting the storage
processors automatically to avoid delays with the upgrade completing.
To summarize, selecting this option will automatically reboot your storage processors during the upgrade
and finalize the new software. Unselecting this option will pause the upgrade after all non-disruptive tasks
have completed. User input is required to manually reboot the storage processors and finish the upgrade.
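For completeness, a hedged CLI equivalent of this workflow is sketched below; the upgrade file name and credentials are placeholders, and the object paths and options differ between OE releases, so verify them in the Unisphere CLI User Guide before relying on them.

# Upload the upgrade candidate to the system (illustrative)
uemcli -d <mgmt_ip> -u Local/admin -p <password> -upload -f <upgrade_file>.tgz.bin upgrade

# List installed software and the uploaded candidate, then start the upgrade (illustrative)
uemcli -d <mgmt_ip> -u Local/admin -p <password> /sys/soft/ver show
uemcli -d <mgmt_ip> -u Local/admin -p <password> /sys/soft/upgrade create -candId CAND_1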
If a new version is available, use the “Obtain Drive Firmware Online” option shown on the bottom.
This option will link to the support page and display any new drive packages.
In the example, we see a new “Unity_Drive_Firmware_Package-V2” is available for the existing drives in the array. Click on the package and download it to a directory of your choice, then use the “Install Drive Firmware” option to load it.
This option will link to the support page and display any available Language packages.
In the example, we see several “Language Packs” available. Click on the package you want to download
and choose a directory of your choice, then use the “Install Language Pack” option to load it.
By selecting the “Top 10 Service Topics” > Unity from the top menu, users can find other useful
information on the Unity product.
When performing any type of maintenance procedure, it is always a good idea to download and read the procedure to be performed before going on-site.
Select the type of procedure, Hardware Replacement Procedures are shown here.
Finally, select the hardware from the list to generate the document. Read all the instructions and perform the procedure in the steps shown in the document. Note that SolVe is updated periodically, so it's a good idea to load the latest version when prompted to do so. Always try to maintain the most current version.
Conversions allow a Unity array to be upgraded to any of the higher models. DIP conversions are supported for Unity physical systems only, not UnityVSAs.
Conversions include both All-Flash and Hybrid systems: All-Flash > All-Flash and Hybrid > Hybrid.
A DIP conversion is an offline customer procedure that steps users through a wizard.
Also, for customers with D@RE enabled, having encryption enabled does not affect the ability to perform a DIP conversion.
Note: The Data in Place conversion is an offline procedure. Before starting the conversion, you should
disconnect all network shares, LUNs, and VMware datastores from each host to prevent data loss. When
the system is fully powered up, you can reconnect the hosts to these storage resources.
DIP upgrades are applicable across the entire Unity product line, both Hybrid and All-Flash.
Any CNA SFPs or I/O Modules in the SP need to be moved over during the SP swap. There is no need to replace internal components such as the DIMMs, M.2 SSD device, or fans.
Converting a Hybrid model to an All-Flash model, or vice versa, is not supported. Also, as mentioned on the previous slide, DIP conversions are only performed on Unity systems from lower to higher models. DIP is not supported on UnityVSA models (there is no hardware to convert).
The Upgrade Wizard displays the current hardware model and prompts the user to select a Unity model for the upgrade. Note that only higher performing models will be displayed as available options.
Once a target model is selected, it is recommended that you perform a system health check prior to starting the upgrade. This ensures the system is in a healthy and upgradable state. If there are issues that need attention, "Warning" and/or "Error" messages may be displayed. Note that Warnings typically do not prevent the system from performing the upgrade, whereas Errors need to be resolved before continuing. Read the Summary page and then continue the upgrade.
Start the upgrade process. The process now goes through a series of steps: it will perform a health check, prepare the system, and halt the system so the CE can swap the SPs and power cables. The time this takes depends on the number of I/O Modules moved over, the number of SFPs, whether the new SPs are out of the shipping package, and so on.
Once the swap is completed, the upgrade will continue by updating the firmware, reimaging the M.2 device, starting the system services (stack startup time depends on the number of configured storage resources and other configuration), and performing a cleanup. The result should now display the target system personality. The total time to complete the process is anywhere from 48 to 123 minutes. Note the Unisphere GUI displays an upgrade time of 90 minutes.
• To ensure that the system is halted, check that the fault and power lights on both SPs are off, and that the amber fault LEDs on both power supplies are lit. The solid green AC/DC power indicator LED will still be lit on the power supplies.
• Put cable clamps on the I/O module, CNA, and onboard port cables for both SPs.
• Remove the power cords for SPA and SPB and wait for the DAE to power off.
• Insert the CNA SFPs and I/O Modules from the old SPA into the new SPA, and connect all cables except power.
• Put the new part number sticker over the PSNT tag for the new model. Note that the Serial Number does not change.
Students were shown how to gather documents in preparation for performing any maintenance using the support page or the SolVe desktop application.
In addition, the student was shown how to recognize faults in the storage system by understanding the various alert levels and logs. Once a failed CRU is diagnosed, the student should be able to perform the removal and replacement of that component. Lastly, the module covered the available Data-in-Place conversion paths for a Unity storage system should a customer want to upgrade their current Unity to a higher performing model.