
ViPR Implementation and Management
Student Guide

Education Services
May 2015
Welcome to ViPR Implementation and Management.

Copyright © 1996, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014,
2015 EMC Corporation. All Rights Reserved. EMC believes the information in this publication is accurate as of its
publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO
REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION,
AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR
PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software
license.

EMC2, EMC, Data Domain, RSA, EMC Centera, EMC ControlCenter, EMC LifeLine, EMC OnCourse, EMC Proven, EMC
Snap, EMC SourceOne, EMC Storage Administrator, Acartus, Access Logix, AdvantEdge, AlphaStor,
ApplicationXtender, ArchiveXtender, Atmos, Authentica, Authentic Problems, Automated Resource Manager,
AutoStart, AutoSwap, AVALONidm, Avamar, Captiva, Catalog Solution, C-Clip, Celerra, Celerra Replicator, Centera,
CenterStage, CentraStar, ClaimPack, ClaimsEditor, CLARiiON, ClientPak, Codebook Correlation Technology, Common
Information Model, Configuration Intelligence, Configuresoft, Connectrix, CopyCross, CopyPoint, Dantz,
DatabaseXtender, Direct Matrix Architecture, DiskXtender, DiskXtender 2000, Document Sciences, Documentum,
elnput, E-Lab, EmailXaminer, EmailXtender, Enginuity, eRoom, Event Explorer, FarPoint, FirstPass, FLARE,
FormWare, Geosynchrony, Global File Virtualization, Graphic Visualization, Greenplum, HighRoad, HomeBase,
InfoMover, Infoscape, Infra, InputAccel, InputAccel Express, Invista, Ionix, ISIS, Max Retriever, MediaStor,
MirrorView, Navisphere, NetWorker, nLayers, OnAlert, OpenScale, PixTools, Powerlink, PowerPath, PowerSnap,
QuickScan, Rainfinity, RepliCare, RepliStor, ResourcePak, Retrospect, RSA, the RSA logo, SafeLine, SAN Advisor,
SAN Copy, SAN Manager, Smarts, SnapImage, SnapSure, SnapView, SRDF, StorageScope, SupportMate, SymmAPI,
SymmEnabler, Symmetrix, Symmetrix DMX, Symmetrix VMAX, TimeFinder, UltraFlex, UltraPoint, UltraScale,
Unisphere, VMAX, Vblock, Viewlets, Virtual Matrix, Virtual Matrix Architecture, Virtual Provisioning, VisualSAN,
VisualSRM, Voyence, VPLEX, VSAM-Assist, WebXtender, xPression, xPresso, YottaYotta, the EMC logo, and where
information lives, are registered trademarks or trademarks of EMC Corporation in the United States and other
countries.

All other trademarks used herein are the property of their respective owners.

© Copyright 2015 EMC Corporation. All rights reserved. Published in the USA.

Revision Date: May 2015

Revision Number: MR-1CN-VIPRIM.2.2



This course covers how to manage and integrate ViPR into a customer environment.



This module focuses on ViPR as a solution for modern environments that require
software-defined storage and the internal workings of the product. Several use cases for
ViPR are identified.



This lesson covers different ViPR components, the control and data planes, virtual
abstractions, and provides an introduction to the Service Catalog.



The Software Defined Data Center (SDDC) breaks down data center silos and their
associated complexity. But the value of SDDC is broader than simply compute server
virtualization. The Software Defined Data Center abstracts the functionality of all the
hardware components and pools compute, networking, and storage resources. The
Software Defined Data Center provides the ability to truly realize end-to-end automation
through the entire data center. The less human interaction, the lower the cost and the less
room there is for error.

Analysts estimate that enterprises have virtualized between 30 and 75 percent of their compute infrastructure and 20 percent of their network infrastructure, but only 5 to 10 percent of their storage infrastructure. This is due in part to the fact that, unlike network and compute, storage lacks a set of clearly defined protocols and standardization.

To realize the full value of the Software Defined Data Center, compute, network, and
storage must all be virtualized.



The abstraction model of the SDDC requires software-defined storage. From the EMC
perspective, there are three fundamental characteristics to software-defined storage:
simple, extensible, and open.
• To simplify management, the entire storage infrastructure must provide a single
control point, so it can be managed through automation and policies.

• The storage infrastructure must be easy to extend so that new storage capabilities can
be added to the underlying arrays in software.

• The platform must be built in an open manner, so that customers, other vendors,
partners, or startups can write new data services and build a community around it.

EMC ViPR is the software-defined storage solution that was built with all of these
requirements in mind. It transforms existing heterogeneous physical storage into a simple,
extensible, and open storage platform. ViPR was built from the ground up to provide a
policy-based storage management system for automating standardized storage offerings in
a multi-tenant environment across heterogeneous storage infrastructure.



Although the storage arrays themselves are central to the use of ViPR, they are not a component of ViPR. The major components are the ViPR Controller, ViPR Integration, and ViPR Monitoring and Reporting (M&R).

ViPR Controller abstracts, pools, and automates physical storage resources into policy-
based virtual pools with self-service access to a catalog of resources. It also provides block
and file control services. This is the first, and most fundamental component of ViPR. There
can be no ViPR deployment without the ViPR Controller.

ViPR Integration with cloud platforms provides an alternative to provisioning storage from
the ViPR UI. The ViPR integration features allow users of VMware, Microsoft and OpenStack
to stay in their preferred management tool and initiate storage provisioning without
switching to another UI. These components are optional and they cannot exist without the
ViPR Controller. Which of these integration components are installed will depend entirely on
the tools in use in the customer environment.

For ViPR monitoring and reporting, the ViPR software distribution includes the Solution Pack for EMC ViPR. The Solution Pack leverages the ViPR monitoring and metering REST API bulk feeds to expose the availability and usage of ViPR-managed storage. When used in combination with the Storage Resource Management (SRM) Suite, this Solution Pack connects the dots from physical storage to ViPR-managed volumes.

Each of these components is installed separately.



There are several sub-components for each of the major components.

The ViPR Controller component provides: the block and file control service, a load balancer,
the REST API, the command line interface, and the UI. Installing this component requires
installing multiple VMs. The Controller is delivered as a virtual appliance (OVA file) that
must be installed on VMware ESXi. The load balancer included with the ViPR Controller
provides a means to balance load across all controller VMs with a single Virtual IP address
for the Controller Appliance.

The ViPR CLI is a command line interface that leverages the ViPR REST API. It is implemented as a collection of Python scripts that take parameter values from command line arguments and format them into REST API calls. This is useful in two ways. First, the CLI can be used to script ViPR actions. Second, the Python scripts that implement each CLI command serve as examples of how the REST API is used. The CLI should not be used from any of the ViPR Controller VMs; it should be installed on another machine in the environment and used from there.
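
As a minimal sketch of scripting against ViPR in the same spirit as the CLI, the Python snippet below authenticates to the REST API and lists projects. It assumes the documented ViPR authentication flow (a GET to /login with basic credentials, the session token returned in the X-SDS-AUTH-TOKEN header); the host name, credentials, and the bulk projects endpoint are placeholders to verify against the REST API reference for your release.

    import requests

    VIPR_VIP = "https://vipr.example.com:4443"   # placeholder Controller virtual IP

    # Authenticate; ViPR returns a session token in the X-SDS-AUTH-TOKEN header.
    login = requests.get(VIPR_VIP + "/login",
                         auth=("root", "password"),   # placeholder credentials
                         verify=False)                # lab only: self-signed certificate
    token = login.headers["X-SDS-AUTH-TOKEN"]

    # Reuse the token on subsequent calls instead of resending credentials.
    projects = requests.get(VIPR_VIP + "/projects/bulk",
                            headers={"X-SDS-AUTH-TOKEN": token,
                                     "Accept": "application/json"},
                            verify=False)
    print(projects.json())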

The ViPR User Interface is browser based. Any machine that can reach one of the ViPR
Controllers can open the ViPR UI with a supported browser and browser version.

ViPR monitoring and reporting is accomplished through the EMC ViPR SolutionPack. If the
customer already has SRM Suite or Software Assurance Suite installed and would like
reporting that connects the ViPR virtualized storage back to the physical storage or
network, they can install the ViPR SolutionPack into their existing instance. In an
environment where SRM Suite or SAS has not been installed (or when the user wants to
keep ViPR reporting separate), the ViPR SolutionPack can be a standalone implementation.
When adding the SolutionPack for ViPR to an existing installation, make sure that the
respective versions are qualified for compatibility.



There are multiple ViPR provided plug-ins. Since the integration features are only installed
when and where they are desired by the customer, each is separate, and installed
separately. The sub-components are each delivered as a plug-in, add-in, or driver. Unlike
the Controller they are not packaged as a virtual appliance or VM.

The VMware integration provides both VASA and VSI Plug-ins to enable vCenter’s vSphere
API for Storage Awareness and Virtual Storage Integrator to leverage ViPR features that
inspect, report, and provision ViPR storage. Unlike all the other integration components, the
VASA provider is built into the ViPR Controller and is installed when the Controller is
installed. The vCOps Analytics Pack enables vCenter Operations to collect data from ViPR to
incorporate into its analytics reporting. The vCO and vCAC plug-ins allow VMware vCenter Orchestrator and VMware vCloud Automation Center to include ViPR operations within their scripts.

The Microsoft SCVMM add-in enables Microsoft System Center Virtual Machine Manager to
leverage ViPR features to inspect, report, and provision ViPR storage.

The OpenStack Cinder Driver enables OpenStack to leverage ViPR features to inspect,
report, and provision ViPR storage.



Virtualization within ViPR Controller occurs on three levels:
• At the highest level, groups of managed systems create the Virtual Data Center (VDC). Unlike earlier notions of a data center, a ViPR VDC can span greater distances and bring logical functionality to the infrastructure. Limitations on distance and connections are based on the devices managed.
• The Virtual Array, or vArray, allows for a larger pool of devices to be used for
provisioning and protection services. Limitations of the physical frames become less
important while service level and business functionality become the deciding factors in
creating the vArrays.
• The Virtual Pool, or vPool, can exist within a vArray or span multiple vArrays. Grouping devices with similar usage and specifications is the rationale for creating vPools. You can also create vPools based on tenancy requirements or other required services such as chargeback or local resource provisioning.



Having defined virtual arrays and virtual pools, we can now define the virtual data center. In ViPR, the virtual data center identifies the virtual arrays that are within metro distance of one another and that can be managed by a single instance of ViPR. For example, we
start with a physical data center that has four arrays in it, each with redundant paths from
hosts to arrays.

Within metro distance we have a similar physical data center connected using RecoverPoint
and a VPLEX for distributed volumes. In ViPR parlance, this defines the virtual data center.
Storage systems within metro distance proximity to one another, connected by
RecoverPoint or VPLEX, can be managed as part of a single virtual data center, by a single
ViPR instance.

Additional data centers beyond metro distance would require their own ViPR instance.

The components that combine to enable connectivity from host to physical array represent
a virtual array. Within the physical arrays we have disks that have different performance
and protection characteristics. We combine disks that have similar performance and
protection characteristics into virtual pools.

So, now that we have an understanding of how ViPR abstracts physical storage, let’s
explore what ViPR can do with that storage.



The virtual array is a ViPR abstraction for the physical arrays and the network connectivity
between hosts and these arrays. The virtual array provides a more abstract view of the
storage environment for use in either applying policy or provisioning.

All physical arrays participating in a virtual array should be connected to the same fabrics or
virtual storage area networks (VSANs) to ensure that they all have equivalent network
connectivity to the environment. When a storage administrator adds physical arrays to
ViPR, ViPR discovers their storage pools, ports and configuration. And after FC switches are
added, ViPR automatically discovers and maps the FC networks. When populating a virtual array with physical arrays and networks, the administrator must ensure that when storage is presented from the virtual array to a host, the host is able to physically reach that storage.

Having examined the connectivity between hosts and arrays, the administrator can build the
virtual arrays. When all hosts can reach all arrays, the entire storage infrastructure can be
grouped into a single virtual array; however, physical arrays may need to be placed into
separate virtual arrays to accommodate different physical configurations and different
requirements for fault tolerance, network isolation or tenant isolation.

In the typical physical environment there are multiple arrays, each with its own
management tools, processes and best practices. With the ViPR virtual array, all of the
unique capabilities of the physical arrays are available, but ViPR automates the operations
of the tools, processes and best practices to simplify provisioning storage across a
heterogeneous storage infrastructure. In this way ViPR can make a multi-vendor storage
environment look like one big virtual array. ViPR can accomplish these tasks for specific
types of block and file storage, including: EMC VMAX, EMC VNX, EMC Isilon, EMC VPLEX,
and NetApp.

With the physical arrays configured into ViPR virtual arrays, the administrator can now build
ViPR policies that are automatically applied across heterogeneous arrays.



With physical storage each switch, array, and connection must be individually managed.
Note that most Enterprise IT and Managed Service Provider environments contain many of
each, and often with multiple models from multiple manufacturers. Managing each resource
individually is, therefore, time consuming and error prone.

The types of storage may already be grouped into tiers or “pools” based on their specifications and capabilities. However, due to the physical constraints of the array, the size of the potential pools is limited.



By abstracting storage from the Physical Arrays, ViPR does much of the management of the
individual components, allowing administrators and users to treat storage as a large
resource focusing just on the amount of storage needed and the performance and
protection characteristics required. ViPR exposes the storage infrastructure within its control
through a simplified model, hiding and handling the details of array and disk selection, LUN
creation, SAN zoning, LUN masking and the differences between one storage device and
another. Note that ViPR is aware of, and leverages intelligence such as FAST, snapshots and
cloning capabilities within individual models of storage arrays. The same applies to
protection technologies such as RecoverPoint and VPLEX – ViPR provides abstractions to
take full advantage of these technologies when they are already in place.



ViPR employs a policy-based placement algorithm to find the best fit in the infrastructure for the request, given the target virtual pool and virtual array. The algorithm will:

• Select a physical array that matches the policies specified in the virtual array and virtual
pool, then select a disk and create a LUN or file system.

• Establish the data path between the host and the storage by selecting ports and fabrics that connect the devices at an optimal level of performance, setting up zones as needed.

• Place the device into disk groups and apply disk policies such as FAST.

• Perform the Initiator-Target-LUN (ITL) export workflow to export the storage from the array.

• Connect the storage to the host.

Because all this work is performed by ViPR without requiring administrator or user activities,
ViPR can expose a self-service user interface (UI) for end-users so they can provision their
storage on their own.
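
To make the placement idea concrete, here is a purely illustrative Python sketch of policy-based pool selection. The attribute names (protocol, protection, free_gb) are invented for the example; this is not ViPR's actual algorithm, whose matching rules are far richer than shown.

    # Illustrative only: a simplified "best fit" selection over discovered pools.
    def select_pool(pools, vpool_policy, size_gb):
        """Return the matching physical pool with the most free capacity."""
        candidates = [
            p for p in pools
            if p["protocol"] == vpool_policy["protocol"]
            and p["protection"] == vpool_policy["protection"]
            and p["free_gb"] >= size_gb
        ]
        if not candidates:
            raise ValueError("no physical pool satisfies the virtual pool policy")
        # Prefer the pool with the most headroom, one simple placement heuristic.
        return max(candidates, key=lambda p: p["free_gb"])

    pools = [
        {"name": "VMAX_FC_R5", "protocol": "FC", "protection": "RAID5", "free_gb": 900},
        {"name": "VNX_FC_R6",  "protocol": "FC", "protection": "RAID6", "free_gb": 400},
    ]
    policy = {"protocol": "FC", "protection": "RAID5"}
    print(select_pool(pools, policy, size_gb=100)["name"])   # -> VMAX_FC_R5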



The Service Catalog can be modified, or ACLs configured, to provide more granularity around users' capabilities and needs. Notice the categories such as Block Storage Services: opening this category shows different options, including the option listed here to create a block volume for a host. When this option is selected, a list of options to configure the block volume is presented. More of the Service Catalog will be discussed in upcoming modules.



This lesson covered different ViPR components, the control and data planes, virtual
abstractions, and provided an introduction to the Service Catalog.



This lesson covers the available ViPR configuration options.



EMC gives customers the option to deploy ViPR Services on their own commodity infrastructure or to purchase ViPR Services on an integrated, scale-out commodity appliance. Each offers the economics of COTS hardware; however, the ECS Appliance packages everything together with full management and support from EMC.



Again, the customer has a choice – certified 3rd party commodity infrastructure or EMC
commodity.



So, if you can get superior characteristics at scale on commodity, is there any need for
hardware innovation? The idea of “Commodity Innovation” sounds like something of an
oxymoron. However, innovation is in how the components are put together to enable RAS
(Reliability, Availability, Serviceability).



The ECS Appliance embodies these characteristics of hyperscale based on commodity
components.



Both the unstructured and block engine deployments are designed to scale across racks as part of a single logical server cluster. The system aggregates all the unstructured nodes from all of the available ECS Appliance resources and groups them into one single, logical VARRAY, and aggregates all the structured nodes from the available resources into a second, logical VARRAY. Both the structured and unstructured logical VARRAYs are controlled and managed by a single ViPR Controller instance within a zone, regardless of the number of ECS Appliance units and associated resources provided by those units within a zone. The configurations available at GA include:

• Unstructured Data Services Configurations
– Small = 360 TB
– Medium = 1.4 PB
– Large = 2.9 PB

• Block Data Services Configurations
– Small = 120 TB (12K IOPS)
– Medium = 240 TB (24K IOPS)

• Mixed Workload Configurations
– Small = 360 TB (Unstructured Data Services) and 120 TB (Block Data Services)
– Medium = 1.4 PB (Unstructured Data Services) and 120 TB (Block Data Services)



The goal in presenting the Starter Pack is to reduce financial barriers for price-sensitive customers who are interested in purchasing ViPR. It provides a simple, bundled approach for sales to position, quote, and sell; enables upsell opportunities at scale; increases the value and attach rate of ViPR to EMC arrays; and drives toward the ViPR Controller goal of 1,000 paid customers.

For additional information, go to:


• ViPR Controller Starter Pack Inside EMC Page
• InsideEMC ViPR Community

Questions?
• ASD Help Desk ([email protected])



ViPR Controller may be installed on either VMware or Microsoft Hyper-V.

Minimum resource requirements have been reduced:

• All versions can be scaled up
– vCPU can be increased
– Memory can be increased
– Disk size can NOT be changed
– Nodes can NOT be changed
• Other possibilities
– Thin Provisioning of Memory
▪ Scale up from less RAM to more RAM
– Thin Provisioning of Disk Space (at your own risk)
▪ If the hypervisor is not able to allocate more space when needed, the system may stall and data loss is possible.



Correct operation of ViPR Data and ViPR Control requires certain ports to be open in the firewall. These ports are configured automatically during installation: ViPR Controller VMs are deployed with the firewall enabled by default and with these ports open.



Depending on its type each VM requires incoming and outgoing access on specific TCP
and UDP ports. These ports are documented in the installation guide. If a network
firewall is in place outside of ViPR, ensure that it does not restrict access to these ports.



This lesson covered the available ViPR configuration options.



This module covered ViPR as a solution for modern environments that require software-
defined storage and the internal workings of the product. Several use cases for ViPR were
identified.

This module focuses on identifying the hardware supported by ViPR. The prerequisites for configuring ViPR to manage the supported physical resources are also covered, as well as the information that must be gathered before an implementation.

This lesson covers the devices that are supported by ViPR.

EMC's support website is the best source of ViPR documentation; you will always find the latest documentation and whitepapers there. On the EMC Support site you will find the ViPR 2.2 Product Documentation Index, which contains links to ViPR resources and useful configuration information. The live document will redirect you to the EMC Community, where you can find all the additional documentation.

The ViPR 2.2 Product Documentation Index is located at:

https://community.emc.com/docs/DOC-41200

The block storage arrays and appliances supported are listed here. For VNX Block and
VMAX, ViPR communicates through an SMI-S provider. The SMI-S provider is bundled
together with Solutions Enabler. A single host with the SMI-S provider can control both VNX
and VMAX.

For latest support information always refer to the ViPR Data Sheet and Compatibility Matrix,
located in the EMC Community Network.

ViPR continues to support Isilon, VNX, and NetApp file storage.

Additional third-party arrays can be supported through the OpenStack Cinder driver. The Cinder driver must be the Icehouse or Juno version. Currently supported are NetApp in cluster mode (iSCSI only) and IBM SVC (FC and iSCSI).

For latest support information always refer to the ViPR Data Sheet and Compatibility Matrix,
located in the EMC Community Network.

This list shows the minimum supported versions for the specified hosts. Different types of
hosts might be added as “Other”, but a rescan to check for new storage will not be
performed. The versions specified here are those that existed at the time this course was written; for the latest support information, always refer to the ViPR Data Sheet and Compatibility Matrix, located on support.emc.com.

Note that a traditional installation of Windows Server 2008 has WinRM; ViPR requires a hotfix to Windows Server to install WinRM 2.0.

This list shows the supported component plug-ins and required versions.

ViPR supports Brocade and Cisco fabric managers for FC/FCoE and iSCSI connectivity. For Brocade, ViPR relies on the use of Connectrix Manager Converged Network Edition (CMCNE). To determine the list of switches and firmware supported, refer to the CMCNE Release Notes for the versions listed above. Notice ViPR uses SMI-S to communicate with the switch. CMCNE includes an SMI-S Provider in the Professional Plus or Enterprise versions only, not in the free Professional version. Alternatively, you can use the standalone CMCNE SMI Agent installation.

With Cisco, ViPR supports both Nexus and MDS switches and directors. The physical
switches supported must support the specified version of NX-OS. ViPR communicates with
Cisco through SSH.

The requirements for the ViPR CLI are also listed here. Notice that Python must be installed together with the Python setup tools. If the setup tools are not installed, the ViPR CLI will not work properly.

This lesson covered the devices that are supported by ViPR.

This lesson covers the hardware prerequisites for the ViPR installation. The lesson will cover
how to install software on hardware that will be managed by ViPR to ensure proper
functionality.

Before adding VMAX arrays, make sure the following parameters are configured in the
storage array.

Make sure to create a sufficient number of storage pools for storage provisioning with ViPR on the VMAX storage system.

If you are planning on implementing storage tiering, make sure to configure it on the
VMAX. All names should be consistent across arrays managed by ViPR. The same is true for
FAST policies.

It is not required to create any LUNs, storage groups, port groups, initiator groups, or
masking views.

Solutions Enabler can be installed in a Windows or Linux host or deployed as a vApp. In our
example we are deploying Solutions Enabler in a stand-alone Windows host. It is important
to note that six (6) gatekeeper devices are required. Gatekeeper devices allow in-band
communication between Solutions Enabler and the VMAX array.

Also, when performing the installation, select Array Provider under SMI-S Provider. At the end of the installation a list of services is displayed. Make sure to select the SYMAPI Server Daemon; this starts the service required for the SMI-S Provider. It is worth verifying that the services are up and running in Windows. Another important service to start is ECOM.

Once Solutions Enabler is installed and licensed, open the CLI and run the symcfg discover command. This queries for VMAX arrays that are reachable from the host where Solutions Enabler is installed. Once the discovery is complete, use the symcfg list command to verify the VMAX was discovered successfully.
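
A hedged sketch of automating that verification follows. It assumes the symcfg binary is on the PATH of the Solutions Enabler host; the serial number and the substring check are illustrative only, not the exact SYMCLI output format.

    import subprocess

    # Discover reachable VMAX arrays, then list what Solutions Enabler found.
    subprocess.run(["symcfg", "discover"], check=True)
    result = subprocess.run(["symcfg", "list"],
                            check=True, capture_output=True, text=True)

    expected_serial = "000196700000"   # placeholder VMAX serial number
    if expected_serial in result.stdout:
        print("VMAX discovered successfully")
    else:
        print("VMAX not found; check gatekeepers and connectivity")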

Before adding VMAX or VNX for Block storage to ViPR, login to your SMI-S provider to
ensure SMI-S meets the following configuration requirements.

The host server running Solutions Enabler (SYMAPI Server) and SMI-S provider (ECOM)
differs from the server where the VMAX service processors or VNX for Block storage
processors are running. The storage system is discovered in the SMI-S provider.

When the storage provider is added to ViPR, all the storage systems managed by the
storage provider will be added to ViPR. If you do not want all the storage systems on
an SMI-S provider to be managed by ViPR, configure the SMI-S provider to only manage
the storage systems that will be added to ViPR, before adding the SMI-S provider to ViPR.

The remote host and SMI-S provider components (Solutions Enabler (SYMAPI Server) and EMC CIM Server (ECOM)) are configured to accept SSL connections. The EMC storsrvd daemon is installed and running.

The SYMAPI Server and the ViPR server hosts are configured in the local DNS server so that their names are resolvable by each other, for proper communication between the two. If DNS is not used in the environment, be sure to use the hosts files for name resolution (/etc/hosts or c:/Windows/System32/drivers/etc/hosts).

The EMC CIM Server (ECOM) default user login's password expiration option is set to "Password never expires."

For VNX, the SMI-S provider host needs IP connectivity over the IP network with
connections to both VNX storage processors. For VMAX, the SMI-S provider host is able to
see the gatekeepers (six minimum).

Listed are the ViPR pre-configuration requirements for VNX block storage. As discussed
before, you can use the Solutions Enabler host with EMC SMI-S Provider. Additional
configuration on the SMI-S is required to authorize and discover the VNX Block. Details on
how to perform this configuration are provided later.

Additional configuration includes the creation of pools or RAID groups. If volume full copies
are required, SAN Copy enabler software must be installed on the array.

If volume continuous-native copies are required, clone private LUNs must be configured on
the array.

The ViPR pre-configuration requirements for VNX File Storage are also listed here. It is
important to ensure that these requirements are met for all VNX File Storage arrays that
will be discovered and managed by ViPR.

Fibre Channel networks for VNX for Block storage systems require an SP-A and SP-B port
pair in each network, otherwise virtual pools cannot be created for the VNX for Block
storage system.

Solutions Enabler can also include an SMI-S Provider to communicate with the VMAX and VNX arrays. There are two procedures that can be used to discover the VNX in Solutions Enabler: the first uses Solutions Enabler commands to discover the VNX, and the second discovers the VNX directly from the SMI-S Provider. It is recommended that the second procedure be used for VNX discovery.

On the host with Solutions Enabler, navigate to the bin folder of the ECOM directory, as shown above, and run the TestSmiProvider.exe file. This program allows you to execute multiple commands that use SMI-S to communicate with the devices. When the program is executed it asks for connection information; leave all the defaults to connect to the SMI-S Provider, as shown above.

Once you are connected to the SMI-S Provider, run the addsys command, which is used to add an EMC system to the SMI-S provider. From this point forward it is very important to manually type in all the required parameters, even if they are listed as the default options. First, select the type of array; in our case it is a CLARiiON. Next, enter the IP addresses for SP A and SP B. The SMI-S Provider will ask what type of address was entered; in our case it is an IP address, so select option 2. This is asked twice because we added both SP-A and SP-B. Finally, enter the username and password for the VNX Block. The SMI-S provider will execute the add system command. Pay special attention to the output of the command: an output of 0 means the discovery was a success.

Listed are the ViPR pre-configuration requirements for Isilon File Storage. Make sure SmartConnect is configured per the Isilon documentation. The names of SmartConnect zones must be set to the appropriate delegate domain. It is important that the DNS used by ViPR and the provisioned hosts delegates requests for SmartConnect zones to the SmartConnect IP.

A minimum of 3 nodes must be configured in an Isilon cluster. The Isilon clusters and zones must be reachable from the ViPR Controller VMs.

Before configuring an Isilon array you need to gather the SmartConnect IP address, port,
and login credentials. In order to configure ViPR you need to use the root account or create
a user with administrative privileges.

There are several useful commands that can aid you when configuring an Isilon array for
use with ViPR:

• isi version - List the Isilon version.

• isi smartpools health -v - Check for presence of smart pools

• isi status -d - Check Isilon cluster status

• isi statistics list nodes - Note the number of nodes in the cluster

ScaleIO storage systems can be integrated with ViPR. ScaleIO must be at version 1.21 or 1.30; ViPR 2.2 SP1 added support for ScaleIO version 1.31. Before discovering the system, make sure to pre-configure protection domains and storage pools. The primary MDM IP address, SSH port, and credentials with the storage administrator role are required.

ScaleIO is a commodity based, software-only block storage solution. It leverages the
existing local disks of a host and exposes them as block storage that can be consumed by
clients over the LAN.

The ScaleIO solution consists of software components that are installed on the application hosts. These hosts communicate with each other over the LAN to handle the clients' IO requests. The components can be installed on existing application hosts or in a greenfield environment.

When ViPR is pointed to the primary MDM, if the ScaleIO is stand-alone, a storage provider
is created which maps to the Primary MDM.

The protection domains are discovered and one storage system is created in ViPR for each
ScaleIO protection domain.

Storage ports are created in ViPR: one storage port in a ViPR storage system for each
discovered SDS that is part of the ScaleIO protection domain that is mapped to the storage
system. The name of the storage port maps to the name of the SDS ID.

ViPR automatically creates a network for the ScaleIO using the SDCs, and the storage ports
that were created from all of the discovered SDSs.

ViPR automatically creates hosts and host initiators: one host for each SDC.

Before adding XtremIO storage to ViPR, ensure that the following version and
preconfiguration requirements are met.

For supported versions, see the EMC ViPR Support Matrix on the EMC Community Network.

Pre-configuration requirements for XtremIO are very simple, physical connectivity between
hosts, fabrics and the array(s) must exist.

Hitachi HiCommand Device Manager is required to use HDS storage with ViPR. You need to obtain the following information to configure and add the Hitachi HiCommand Device Manager to ViPR. The first requirement is a host or VM where HiCommand Device Manager will be set up. The installation must be licensed. The HiCommand host IP address, login credentials, and port are required in order to discover HDS storage in ViPR.

Before discovering HDS storage systems, HiCommand Device Manager must be installed
and licensed. HiCommand device manager can discover multiple HDS systems. When the
device manager is discovered in ViPR all the storage arrays it contains will be automatically
added to the ViPR controller.

If you want to prevent storage systems from being added to ViPR, remove the storage system from HiCommand Device Manager, or deregister or delete the storage system in ViPR.

Shown are the ViPR pre-configuration requirements for NetApp File Storage. NetApp aggregates are analogous to pools; ViPR requires these to configure file systems.

ONTAP must be in a 7-mode configuration; it cannot be configured as a cluster.

NFS and CIFS licenses are required, depending on the type of file system that will be created. If protection is required, snapshots should also be licensed. After CIFS is licensed you must run the cifs setup command for the initial setup.

There are several useful commands that can aid you when configuring a NetApp array for
use with ViPR.

• version - List the ONTAP version.

• aggr status - Check for presence of aggregates

• aggr show_space – Checks for usable aggregate space greater than 0

• cifs setup – Performs initial configuration for CIFS

Review the list of ViPR pre-configuration requirements for VPLEX. From the vplexcli you can verify the metadata devices by looking at the system volumes within the clusters. To verify the logging volumes, find the extents within the storage elements. Additionally, you can find port WWNs by looking at the ports section.

To verify that the ports are in distinct fabrics and that storage arrays are zoned to the VPLEX back-end ports, check the switch configuration. Also, notice that hosts do not need to be zoned; they just need to be in the same network as the front-end ports.

When creating virtual arrays, manually assign the VPLEX front-end and back-end ports of
the cluster to a virtual array, so that each VPLEX cluster is in its own ViPR virtual array.

Shown here are the ViPR pre-configuration requirements for the RecoverPoint data protection system. Note that initial manual zoning of RecoverPoint appliances to all ViPR-managed storage arrays is required if ViPR will not be managing the fabric.

There are several useful commands that can aid you when configuring RecoverPoint for use
with ViPR.

• get_version – Check for operational clusters; display the RecoverPoint version

• get_account_settings – Check for licenses; local and/or remote replication should be licensed depending on the needs

• get_san_splitter_view – Display every array-based splitter and its state

The way ViPR communicates with the fabrics is very different between Brocade and Cisco. To communicate with Brocade, ViPR relies on the SMI-S Provider, which is included in Connectrix Manager Converged Network Edition Professional Plus or Enterprise. It is very important that the switches are discovered with the admin user, to have full privileges on the switch.

With both Cisco and Brocade, the infrastructure that allows communication between devices must be set up. This means the virtual fabrics and virtual SANs must be created and enabled, the ISLs combining multiple fabrics must be configured, and the ports with devices attached must be enabled. ViPR uses this information to identify the available topology and connectivity. Zoning is not required, as it will be performed by ViPR.

On the Cisco side, ViPR administers the fabric using SSH. The commands listed allow verification of SSH access, the VSANs, the ports that belong to each VSAN, and the fabric topology. It is highly recommended that enhanced zoning is configured.
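
As a sketch of the kind of SSH-based check involved, the snippet below logs in to a Cisco switch and runs show vsan, a standard NX-OS command. It assumes the third-party paramiko library and uses placeholder host and credentials; it illustrates the access method ViPR uses, not ViPR's own code.

    import paramiko

    # Connect to the switch the same way ViPR does: over SSH.
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())   # lab only
    client.connect("mds-switch.example.com",
                   username="admin", password="password")          # placeholders

    # List the VSANs that ViPR would discover on this fabric.
    stdin, stdout, stderr = client.exec_command("show vsan")
    print(stdout.read().decode())
    client.close()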

Review the ViPR pre-configuration requirements for Linux hosts. Note that ViPR performs
operations on the Linux host using SSH. The host must be configured to enable SSH login
with root access to the system. Time synchronization must be configured in the host. In
some cases, it may be necessary to install lsb_release. If host discovery fails due to
compatibility, and logs indicate that the lsb_release command is not found, the package
that includes that command must be installed.

ViPR has been qualified with several versions of Red Hat Enterprise Linux, Oracle UEK and
SuSE Linux Enterprise Server. Check the Support Matrix for details.

Listed are the ViPR pre-configuration requirements for Windows hosts. Note that ViPR performs operations on the Windows host using remote Windows PowerShell; the host must be configured to enable this.

Native MPIO or PowerPath must be configured. If using LDAP or Active Directory domain
account credentials, the domain user credentials must be in the same domain where the
Windows host is located; otherwise the Windows host discovery will fail.

The winrm quickconfig command starts up a listener on port 5985. The port on which you start the listener must be consistent with the port that you configure for the host on the host asset page. Basic Authentication and AllowUnencrypted must be set to true; to do so, follow the commands on the slide.
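
The slide commands themselves are not reproduced in these notes. As a hedged sketch, the standard WinRM settings being referred to can be applied with the commands below, wrapped here in Python for consistency with the other examples (they are normally typed directly at an elevated prompt on the Windows host).

    import subprocess

    # Enable Basic authentication and unencrypted traffic for WinRM
    # (suitable for a lab; prefer HTTPS or Kerberos in production).
    subprocess.run('winrm set winrm/config/service/auth @{Basic="true"}',
                   shell=True, check=True)
    subprocess.run('winrm set winrm/config/service @{AllowUnencrypted="true"}',
                   shell=True, check=True)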

If you want ViPR to connect to the host as a domain user, you need to set Kerberos
authentication to true. Ensure that your domain has been configured as an authentication
provider in ViPR by a System Administrator. When discovering the host in ViPR make sure
it’s listed as domain\username.

Make sure to check the Support Matrix for supported Windows Server versions.

The explosive growth of OpenStack for instance hosting raises a need for flexible and open storage platform management. EMC's ViPR product abstracts the storage control path from the underlying hardware arrays so that access and management of multi-vendor storage infrastructures can be centrally executed in software. Using ViPR and OpenStack together, users can create “single-pane-of-glass” management portals from both the storage and instance viewpoints, easily providing the right resource management tool for either group.

A host with OpenStack and Cinder requires at minimum a dual-core processor, 16 GB of RAM, and 32 GB of disk space. The exact requirements depend on the arrays that will be integrated with OpenStack.

The OpenStack host console should be accessible in case the network is misconfigured. A minimum version of ViPR 2.0 is required when integrating with OpenStack. A local area network must be set up to connect all components, and sufficient IP addresses should be allocated for host assignment.

DNS and NTP servers must be configured in OpenStack.

ViPR is a product that manages multiple storage arrays, both block and file, together with data protection appliances, hosts, and connectivity. Since ViPR covers a broad spectrum of devices, it is very important to follow this checklist, which verifies that your environment is ready for a ViPR installation.

This is a continuation of the ViPR installation checklist.

The ViPR Installation and Configuration Roadmap is the best source for the ViPR installation procedure. This document can be found in the ViPR Community. It covers three basic topics spread across three use cases: installation, configuration of users and projects, and configuration of the virtual data center.

The different use cases include basic block and file storage management, block and file with
object storage and block and file with object storage on commodity nodes.

When deploying ViPR in a customer environment, it’s very useful to follow the guide
provided in the TS Kits for ViPR. Listed in the table are the different documents available for
ViPR.

The ViPR Controller is distributed as a virtual appliance and is deployed on a VMware
Cluster. ViPR offers the choice of two controller configurations:

A ViPR 2+1 comprising three virtual machines (VMs), which can tolerate the failure of a
single VM.

A ViPR 3+2 comprising five VMs, which can tolerate the failure of two VMs.

Note that both configurations are capable of supporting the targeted scalability and
performance requirements. The only difference is in the failure tolerance.

Note that every ViPR VM is a critical operational component of a production virtual appliance. Customers should avoid logging in directly on these VMs to perform routine administrative tasks. After initial deployment, a separate host must be used to manage the ViPR instance. From this separate machine, which could be just another VM or a service laptop in the environment, the user can invoke the ViPR UI with a standard web browser. ViPR also provides a command line utility that can be installed and used on Linux or Windows hosts. A third option is to use the ViPR REST API for management and integration.

This course will focus on initial deployment of the ViPR Controller including fundamental
operations that can be performed with the UI. Management and Monitoring, with exhaustive
coverage of all available user interfaces for ViPR, are beyond the scope of this course.

Note that there are several other software components that can add value in a ViPR-managed environment. For example, ViPR provides plug-ins for VMware, an add-in for Microsoft SCVMM, and a Solution Pack for monitoring. Installation, configuration, and use of these auxiliary tools are beyond the scope of this implementation course.

Prior to deploying the ViPR virtual appliance, the installer must ensure that all the required information is available. The first step is to identify the type of ViPR deployment.

The ViPR Controller can be deployed as a 2+1 or 3+2 virtual machine configuration. Ensure
that all required IP addresses for the ViPR virtual appliance are available. Every Controller
VM will require one IP address. In addition, the ViPR Controller comes with an embedded
load balancer; therefore, the installer will have to supply one additional virtual IP address
for this component.

Note that the ViPR Controller embedded load balancer only operates with the ViPR controller
VMs.

Also required are the IP addresses of DNS and NTP servers that will be configured into ViPR.

The ViPR installation files for the selected ViPR Controller type (2+1 or 3+2) and the customer license file must also be available. For detailed information on how ViPR is licensed and how to generate the license file from your customer's License Authorization Code (LAC), refer to the ViPR product documentation.

Once the product goes beyond the expiration period, the application will continue to work in its entirety. A 'license expiration' event will be sent to SYR upon expiration and will continue to be sent once every two weeks until the product is licensed again.

Once the product exceeds its licensed capacity, the application will continue to work in its entirety. A 'capacity exceeded' event will be sent to SYR and will continue to be sent once every two weeks until the product is back under its licensed capacity.

The ViPR platform requirements are shown here. Full details are listed in the ViPR 2.2
Release Notes.

Minimum resource requirements have been reduced:
• All versions can be scaled up
– vCPU can be increased
– Memory can be increased
– Disk size can NOT be changed
– Nodes can NOT be changed
• Other possibilities
– Thin Provisioning of Memory
▪ Scale up from less RAM to more RAM
– Thin Provisioning of Disk Space (at your own risk)
▪ If the hypervisor is not able to allocate more space when needed, the system may stall and data loss is possible.

Depending on its type each VM requires incoming and outgoing access on specific TCP and
UDP ports. These ports are documented in the installation guide. If a network firewall is in
place outside of ViPR, ensure that it does not restrict access to these ports.

Collection of metering information from distant (50-100 ms) storage providers may overload the network, especially when the available bandwidth between data centers is low (e.g., 10 Mbps). The network payload for a single volume is approximately 8 KB per metering cycle. Use this information to estimate the network bandwidth utilization due to metering collection. A network latency of 150 ms has been tested successfully for provisioning operations only.
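
As a worked example under assumed figures (the volume count and cycle length below are illustrative, not documented defaults):

    # Rough bandwidth estimate for metering collection (illustrative figures).
    volumes = 10_000            # assumed number of managed volumes
    payload_bytes = 8 * 1024    # ~8 KB per volume per metering cycle (from above)
    cycle_seconds = 3600        # assumed hourly metering cycle

    total_bytes = volumes * payload_bytes            # ~78 MiB per cycle
    avg_bits_per_second = total_bytes * 8 / cycle_seconds

    print(f"{total_bytes / 2**20:.0f} MiB per cycle, "
          f"~{avg_bits_per_second / 1e6:.2f} Mbps averaged over the cycle")
    # On a low-bandwidth (e.g. 10 Mbps) inter-site link, the burst transfer
    # itself, not the cycle average, is what risks saturating the link.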

The ViPR UI can be invoked from standard web browsers including Firefox, Chrome and
Internet Explorer. Refer to the Data Sheet and Compatibility Matrix for supported browser
versions.

The CLI should be installed on a separate management host. This can be a VM or a service laptop. Note the specific versions of OS and Python required for the ViPR CLI to function properly.

This lesson covered the hardware pre-requisites for ViPR installation. It covered how to
install software on hardware that will be managed by ViPR to ensure proper functionality.

This lab covers environment validation and pre-configuration. The purpose of this lab is to
test the environment that will be managed by ViPR.

This lab covered environment validation and pre-configuration. The purpose of this lab was
to test the environment managed by ViPR.

This module covered how to identify the hardware supported by ViPR. It covered the pre-
requisites for configuring ViPR to manage the supported physical resources. The module
detailed the information that must be gathered before an implementation.

This module focuses on ViPR projects, tenants, and users, where we create and assign ViPR roles to users and integrate ViPR with AD and LDAP. ViPR physical asset discovery will also be performed.

This lesson covers the concepts of projects, tenants, and users. It demonstrates how to
perform authentication with AD/LDAP.

Projects enable storage resources (block volumes, file systems, and objects) provisioned
using ViPR to be grouped logically, and for authorization to perform operations on resources
to be based on project membership. All provisioned resources are owned by a project.

For an end-user to be able to use a storage provisioning service, the user must belong to
the project that will own the provisioned resource.

At the UI, Tenant Administrators and Project Administrators are responsible for creating
projects and using an access control list (ACL) to assign users to projects and assign
permissions to them. Projects have the concept of a Project Owner, which conveys certain
administrator rights to a user, and enables a Tenant Administrator to delegate
administrator rights for a project to a Project Administrator.

To access the ViPR UI, an individual must have login access. ViPR does not provide its own
login and password controls. Instead, ViPR must use a customer supplied AD/LDAP security
domain for authentication.

For this purpose, a ViPR administrator must first create a ViPR Authentication Provider
definition.

Each ViPR user must be assigned a role in order to use ViPR services. There is no explicit
user role defined in ViPR. All users who have been configured in the AD/LDAP security
domain, and are able to log in to the ViPR UI, are ViPR users. However, unless a user is
assigned a ViPR role, they cannot use any ViPR-configured services. As a best practice, the Active Directory filter should be configured so that only a system administrator can log in.

*Lock graphic indicates secured with ACLs.

Listed here are the role permissions for the tenant administrator.

Listed here are the role permissions for the system administrator.

Listed here are the role permissions for the project administrator and tenant approver.

Listed here are the role permissions for the security administrator, system monitor, and
system auditor.

ViPR can be configured with multiple tenants, where each tenant has its own environment for creating and managing storage, which cannot be accessed by users from other tenants. The default or root tenant is referred to as the provider tenant, and a single level of tenants can be created beneath it. This lesson describes the ViPR tenant model and how ViPR can be configured to use multiple tenants.

Each tenant is created and configured from resources available to the virtual data center in
order to provide a custom environment that can be managed and further customized at the
tenant level.

At the VDC level, the tenant-specific configuration provides the ability to:
• Map users into the tenant based on their AD/LDAP domain, the groups to which they are
  assigned, and attributes associated with their user account.
• Restrict access to provisioning resources based on tenant. For example, certain virtual
  arrays and/or virtual pools might only be accessible to a specific tenant.
• Assign a Data Services namespace so that access to object buckets, and the objects within
  the buckets, can only be assigned to members of the tenant.

When only the provider tenant exists (a single-tenant configuration), all tenant users have
the same access to the ViPR VDC storage provisioning environment and to the ViPR Data
Services. All users associated with an authentication provider are mapped to the tenant.

A multi-tenant configuration allows each tenant to:
• Manage its own version of the service catalog and restrict tenant members' access to
  specific services and service categories.
• Add hosts, clusters, and vCenters for which only tenant members can provision and
  manage storage.
• Assign resources to projects and control access to the projects.
• Create consistency groups and execution windows.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 9
The ViPR tenant model allows several tenant scenarios.

An enterprise single-tenant configuration provides the same storage provisioning and
management environment to all of its users. In this configuration, all the users belong to the
provider tenant.

An enterprise multi-tenant environment allows an organization to create additional tenants
for different departments. Every department has its own set of resources.

An enterprise multi-tenant as managed service provider scenario is one where a company
outsources its storage and compute requirements to a managed service provider. The
service provider uses ViPR to create an environment in which the company can create
storage and attach it to hosts located in the data center of the service provider.

The service provider can offer this service to a number of companies, each one assigned to
its own ViPR tenant.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 10
The administration of a tenant is the responsibility of the Tenant Administrator who can
assign users to tenant roles, customize the service catalog, add hosts, clusters, and
vCenters, create projects, create consistency groups, etc.

Only the Tenant Administrator for the provider tenant can access the Tenants page;
therefore, the provider tenant is the only tenant under which sub-tenants can be created. In
our example, the HR and Finance sub-tenants cannot create any tenants under them.

The Tenant Administrator that creates a sub-tenant is, by default, assigned the Tenant
Administrator role for the new tenant. In our diagram, we see how [email protected]
created the Finance sub-tenant, which makes that user a Tenant Administrator for the
Finance tenant.

Since both users are Tenant Administrators for more than one tenant, when they access the
ViPR UI administration view, a drop-down appears that allows them to select which tenant
they want to use.

On some occasions it is beneficial to have the provider Tenant Administrator administer all
the sub-tenants; it allows a single user to perform administration tasks for all tenants.
However, the user environment is not changed: such a user will see the service catalog,
projects, consistency groups, execution windows, and hosts for the provider tenant only,
and will not see those of the other sub-tenants.

A tenant administrator role can be assigned to a sub-tenant. The Security Administrator for
the virtual data center can assign the Tenant Administrator role for a sub-tenant to a
member of the sub-tenant, or to the Tenant Administrator for the provider tenant. Any user
who is a Tenant Administrator for the sub-tenant can assign the Tenant Administrator role
to a user of the sub-tenant. This user could be a Tenant Administrator for the provider
Tenant who is also a Tenant Administrator for the sub-tenant.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 11
Users are added to ViPR using authentication providers. When an authentication provider is
created in ViPR, one or more AD/LDAP domains are supplied and are used to provide ViPR
users. A domain can be mapped to a single tenant or can provide users for multiple
tenants.

An authentication provider usually specifies a whitelist group which defines the default
group of users who will be available as ViPR users to the whole VDC. In addition to the
whitelisted group, the available domain users can be mapped based on their group
membership or based on attributes defined in their AD/LDAP entry.

By default, the provider tenant assumes that you want all users made available by the
authentication provider; if that is not the case, you can use user mappings. Sub-tenants
below the provider tenant must specify user mappings; at a minimum, a domain must be
specified.

From the ViPR UI, the user mappings for a tenant are specified when you create or edit a
tenant. To modify mappings in a subtenant, you must be Tenant Administrator in that sub-
tenant and Security Administrator.

To create or edit a tenant from the UI, you must be a Tenant Administrator for the provider
tenant, and you must be a Security Administrator because the operation needs to obtain a
list of domains.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 12
If ViPR sites are linked, a tenant created in one ViPR site will automatically be available in
other linked ViPR sites.

Because the tenant is replicated across sites, a logged in user moving between the linked
sites has access to the same tenant resources: the service catalog, projects, consistency
groups, etc. In addition, users assigned to tenant roles have those roles when logged in at
any of the linked VDCs.

Virtual arrays and the block volumes and file systems created as a result of performing
provisioning operations are VDC resources and so are not visible across sites.

Object data can be geo-replicated by using global virtual pools. When replication is enabled,
users assigned to a tenant will be able to access buckets created in virtual pools located at
any site.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 13
The administrator role to which a ViPR user is assigned determines what administration
operations they can perform. The areas of the UI and menu items that are visible to a user
are similarly determined by the user's role.

There are two types of ViPR users: end-users and administrators. End-users can be divided
into provisioning users and object storage users. Provisioning users create and manage file
and block storage, mainly using the services in the service catalog. Object storage users are
consumers of ViPR object storage. That is, users who authenticate with ViPR in order to be
allowed to create and access ViPR object storage.

There is no explicit role assigned to end-users in ViPR. All users who belong to a domain
contributed by an authentication provider, and have been mapped into a tenant, can
perform end-user operations and can access the User view at the UI. For end-users, access
to certain ViPR functions is restricted using an access control list (ACL). ViPR administration
operations are controlled by a set of administrator roles. Roles can be specific to a tenant or
can be applicable to all tenants within a virtual data center.

The Security Administrator is responsible for adding users into ViPR virtual data center roles
and can assign users into tenant roles. The Tenant Administrator can assign users to the
tenant roles.

After ViPR is deployed, there is a root user (superuser) which includes all role privileges,
including Security Administrator privileges. The root user can act as the bootstrap user for
the system by assigning one or more users to the Security Administrator role. The Security
Administrator can then assign other user roles.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 14
This slide shows the Server Manager view of a Windows Server 2008 system that has the
Active Directory role installed. In the Active Directory role, from the Users folder, notice
there is a group for ViPR admins and a group for ViPR users.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 15
User authentication is done through an authentication provider that you have added to
ViPR.

Except for the special built-in administrative users (root, sysmonitor, svcuser, and
proxyuser) there are no local users in ViPR. Users who can log in, and who are assigned
roles or ACLs, must be found through an authentication provider added to ViPR.

You need to add at least one authentication provider to ViPR in order to perform operations
using accounts other than the built-in administrative accounts. From the ViPR UI, select
Security and, under Authentication Providers, add a new provider. Authentication providers
can be added by users with the Security Administrator role.

The authentication provider needs the following information:


• Descriptive Name for the Provider
• Type of provider AD or LDAP
• Description
• Domains – In AD and LDAP, organizational units are grouped together into domains, and
  multiple domains are grouped together into a forest. ViPR requires the domain that
  contains the users to be authenticated. Domains are a collection of administratively
  defined objects that share a common directory database, security policies, and trust
  relationships with other domains.
• Server URLs that point at the FQDN or IP address of the server. If the server uses a
  non-default port, it can be appended to the address.
• The Manager DN indicates the AD bind user account that ViPR uses to connect to the AD
  or LDAP servers. In our example, we're using the Administrator account, which can be
  found in the Users tree of the domain listed above.
• The Manager password is the password of the manager account listed above.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 16
As we continue to scroll down the authentication provider configuration, the following fields
are required:

• Group Attribute indicates the Active Directory attribute that is used to identify a group
  and is used when searching the directory for groups. It applies only to Active Directory
  and should be left at its default of CN. Once this value is set it cannot be changed,
  because tenants using this provider may already have role assignments and permissions
  configured using group names in a format based on the current attribute.

• The Group Whitelist is optional; it is used to filter the group membership information that
  ViPR retrieves about a user. When a group or groups are included in the whitelist, ViPR
  will be aware of a user's membership in the specified groups only. Wildcards are also
  allowed; in our case, the asterisk means ViPR will be aware of any and all groups a user
  belongs to. The Group Whitelist can also be left blank, which is the same as using an
  asterisk.

Under the search parameters, set the following:

• Search Scope determines whether ViPR searches for users one level under the search
  base (One Level) or through the entire subtree under the search base (Subtree).

• The Search Base indicates the base distinguished name that ViPR uses to search for
  users at login time and when assigning roles or setting ACLs. In our example, all users in
  the Users container will be searched.

• The Search Filter indicates the string used to select subsets of users.
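
For reference, the same provider definition can also be created programmatically. The Python
sketch below is illustrative only: the /login token exchange and the /vdc/admin/authnproviders
route follow the ViPR/CoprHD REST reference as best recalled here, and all host names,
credentials, and payload values are placeholder assumptions. Verify the field names against the
EMC ViPR REST API Reference for your release.

import requests

VIPR = "https://vipr.example.com:4443"   # hypothetical ViPR Controller address

# ViPR returns an auth token in the X-SDS-AUTH-TOKEN header of the /login response.
login = requests.get(VIPR + "/login", auth=("root", "ChangeMe"), verify=False)
headers = {"X-SDS-AUTH-TOKEN": login.headers["X-SDS-AUTH-TOKEN"],
           "Content-Type": "application/json",
           "Accept": "application/json"}

provider = {
    "label": "AD-vipr-ss",                       # descriptive name
    "description": "Lab Active Directory",
    "mode": "ad",                                # "ad" or "ldap"
    "server_urls": ["ldap://ad01.vipr-ss.edu"],  # a non-default port can be appended
    "domains": ["vipr-ss.edu"],
    "manager_dn": "CN=Administrator,CN=Users,DC=vipr-ss,DC=edu",
    "manager_password": "ManagerPassword!",      # placeholder
    "group_attribute": "CN",                     # leave at the default of CN
    "group_whitelist_values": ["*"],             # same effect as leaving it blank
    "search_base": "CN=Users,DC=vipr-ss,DC=edu",
    "search_filter": "sAMAccountName=%u",        # %u is replaced with the login name
    "search_scope": "SUBTREE",                   # or "ONELEVEL"
}
resp = requests.post(VIPR + "/vdc/admin/authnproviders",
                     json=provider, headers=headers, verify=False)
resp.raise_for_status()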

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 17
Once the authentication provider is added, it is time to test the configuration. ViPR does not
have a way of testing the parameters that were entered in the authentication provider;
instead, it relies on the creation and assignment of roles. When a role assignment for a user
fails, this is a good indicator that something was not configured correctly in the
authentication provider.

Notice you can select to add a group under the authentication provider; this assigns the
roles to all the users within that group. In our example, we are assigning the roles to a
specific user. The user created is dev-vipr-admin, which belongs to the vipr-ss.edu domain.
Since this is an administrator, add the tenant and data center roles to this user. When
the roles are assigned, save the user.
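
Role assignments can be scripted as well. A minimal sketch, assuming a token obtained from
/login as in the earlier example; the role-assignment routes and the add/subject_id/role payload
shape are drawn from the ViPR/CoprHD REST reference as best recalled, so verify them for your
release. The tenant URN and user names are placeholders.

import requests

VIPR = "https://vipr.example.com:4443"             # hypothetical address
headers = {"X-SDS-AUTH-TOKEN": "<token from /login>",
           "Content-Type": "application/json",
           "Accept": "application/json"}

# Tenant-level roles (e.g. TENANT_ADMIN) are changed on the tenant resource,
# VDC-level roles (e.g. SYSTEM_ADMIN) on /vdc/role-assignments.
tenant_id = "<provider tenant URN>"
tenant_change = {"add": [{"subject_id": "dev-vipr-admin@vipr-ss.edu",
                          "role": ["TENANT_ADMIN"]}]}
requests.put(VIPR + "/tenants/%s/role-assignments" % tenant_id,
             json=tenant_change, headers=headers, verify=False).raise_for_status()

vdc_change = {"add": [{"subject_id": "dev-vipr-admin@vipr-ss.edu",
                       "role": ["SYSTEM_ADMIN"]}]}
requests.put(VIPR + "/vdc/role-assignments",
             json=vdc_change, headers=headers, verify=False).raise_for_status()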

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 18
We created a second user that will not have full permissions in the environment. The user
is pod1-vipr-user, created in the vipr-ss.edu domain. Notice we are making this user a
Project Administrator. The window shows the Edit Role Assignment window to demonstrate
that the user creation and user edit steps are the same in ViPR.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 19
Once we log out of ViPR and log back in, the new admin user will have full permissions
and access to every view, category, and menu. For our ViPR user, the view has more limited
capabilities. Since the user is a Project Administrator, they can create, delete, and edit
projects. Additionally, they can configure consistency groups.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 20
This lesson covered the concepts of projects, tenants, and users. It demonstrated how to
perform authentication with AD/LDAP.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 21
This lab covers how to manage users in a ViPR environment. In it AD/LDAP will be
configured in ViPR and multiple tenants will be configured.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 22
This lab covered how to manage users in a ViPR environment. AD/LDAP was configured in
ViPR and multiple tenants were configured.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 23
This lesson covers the discovery process for storage arrays, fabric managers, protection
systems, hosts, and vCenters.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 24
When an asset is added to ViPR, ViPR uses its credentials to connect to it (over IP) and
obtain information that will help it to model your storage network. This process is referred
to as "discovery."

Discovery runs automatically when an asset is added and at a configurable interval (System
> Configuration). It can also be initiated manually to verify the status of an asset. If an
asset fails discovery, its status will show as error and an indication of the reason for failure
will be displayed. Typically, this will be "Device discovery failed". If ViPR is able to contact
the device, but it is not compatible, it will additionally be flagged as incompatible.

If an asset is unavailable it may affect the networks that are available or the end-points
associated with those networks. If those networks are contributing to a virtual array, and
the storage pools provided by storage systems on the network are contributing to a virtual
pool, the ability of a virtual pool to provide a location for a provisioning request may be
compromised.
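
Discovery can also be triggered programmatically rather than from the UI. A minimal Python
sketch, assuming a token from /login as in the earlier example; the storage-system list and
discover routes follow the ViPR/CoprHD REST reference as best recalled and should be verified
for your release.

import requests

VIPR = "https://vipr.example.com:4443"             # hypothetical address
headers = {"X-SDS-AUTH-TOKEN": "<token from /login>",
           "Accept": "application/json"}

# List the storage systems ViPR knows about, then start a rediscovery of each
# one instead of waiting for the configured discovery interval.
systems = requests.get(VIPR + "/vdc/storage-systems",
                       headers=headers, verify=False).json()
for system in systems.get("storage_system", []):   # list key as best recalled
    r = requests.post(VIPR + "/vdc/storage-systems/%s/discover" % system["id"],
                      headers=headers, verify=False)
    print(system.get("name"), r.status_code)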

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 25
If the asset is a storage system, ViPR will collect information about the storage ports and
pools that it provides; if the asset is a host or vCenter, ViPR will discover its initiator ports.
Clusters can be discovered automatically or manually, allowing volumes to be provisioned to
the hosts in the cluster. For a fabric manager, ViPR will retrieve the VSANs or fabrics
configured on the switch and use them to discover networks within the data center, where a
network comprises a set of end-points (array ports and initiator ports) connected by switch
ports. Hence, the number of networks that you see when discovery is performed depends
on the way in which your fabric is configured.

In the case of IP-connected storage systems and hosts, ViPR can discover the ports, but it
cannot discover the paths between them, so it is necessary to create the IP networks
manually using the ViPR UI.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 26
You can deregister certain ViPR physical and virtual components so that they can be made
temporarily unavailable to the ViPR virtual data center.

ViPR will continue to include the component in its model of the virtual data center and will
continue to perform discovery on storage systems and fabric managers. However, any
virtual pools that include storage pools contributed by the component will be affected as
those storage pools can no longer be a possible location for storage operations that use the
virtual pool. Registration is user-initiated; if a device fails discovery, it will still be shown as
registered unless it is deregistered.

The assets that can be deregistered are storage systems, fabric managers, networks,
storage ports, and storage pools.

When the deregistration or failure of a component makes storage pools unavailable, they
will not be shown as matching pools when creating a virtual pool, so it will not be possible
to select them for inclusion in a "manual" virtual pool.

However, storage pools that are already contained in a "manual" virtual pool, or meet the
criteria of an "automatic" virtual pool, will automatically be added back into the virtual pool
when they are made available by a re-registration operation or when a rediscovery
operation restores an asset.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 27
The ViPR Controller provides several options to configure how discovery behaves. To
configure discovery in the Controller from the UI, navigate to System > Configuration and
select "Discovery".

Auto-discovery should always be enabled on ViPR; this option should only be changed when
EMC customer service recommends it. The interval for discovery and re-discovery of storage
systems and networks can also be configured. Once an SMI-S provider is discovered, the
enable auto-scan option will look for the configuration of the storage arrays managed by
SMI-S. Finally, the time to wait between every re-scan of an SMI-S provider can also be set.

Notice that every change made to these options requires a ViPR Controller reboot.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 28
The Storage Management Initiative Specification (SMI-S) defines a method for the
interoperable management of a heterogeneous Storage Area Network (SAN). It describes
the information available to a WBEM client from an SMI-S compliant CIM server, and an
object-oriented, XML-based, messaging-based interface designed to support the specific
requirements of managing devices in and through SANs. With SMI-S compliant devices, an
SMI-S provider is used as a bridge between the third-party device that is trying to
communicate with the storage component and the storage component itself.

VMAX and VNX Block use the SMI-S provider which is included with Solutions Enabler.
Remember, within Solutions Enabler we discovered the VNX, so a single SMI-S provider can
manage both arrays. Discovery of VMAX and VNX happens from the SMI-S Provider tab.
Brocade also uses SMI-S for discovery. The SMI-S provider for Brocade can be found in
Connectrix Manager Converged Network Edition (CMCNE). CMCNE discovers the switches in
the environment, and these switches are presented to ViPR through the built-in SMI-S
provider.

Another method of discovering devices in ViPR is to use direct discovery, such as SSH.
ViPR can use SSH to log in to VPLEX, Isilon, and NetApp arrays and Cisco switches.

For VNX File arrays, ViPR uses a hybrid method, one that leverages the SMI-S provider
built into the VNX File array together with the credentials used to manage VNX File from
Unisphere.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 29
In order to discover VMAX and VNX Block storage arrays, an SMI-S provider must be
discovered first. The SMI-S provider we use is included with the Solutions Enabler
installation on a Windows host. Going to the SMI-S Provider tab, under Physical Assets, we
provide the credentials for this SMI-S provider. Alternatively, from the Storage Systems tab,
you can select Storage Provider for EMC VMAX, VNX Block. Once the discovery is successful,
the VMAX and VNX Block arrays will be automatically added to the Storage Systems tab.
Alternatively, position yourself in the Storage Systems tab and click Create; the drop-down
menu provides the same options to create an SMI-S provider.

The default credentials for the SMI-S provider found in Solutions Enabler are
admin/#1Password.
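
The same registration can be scripted. A minimal sketch, assuming a token from /login as in
the earlier examples. ViPR 2.x exposes SMI-S providers through a generic storage-provider
resource, while earlier releases used a dedicated SMI-S provider resource, so the route and
field names below are best-recalled assumptions to verify against the REST reference for your
release.

import requests

VIPR = "https://vipr.example.com:4443"             # hypothetical address
headers = {"X-SDS-AUTH-TOKEN": "<token from /login>",
           "Content-Type": "application/json",
           "Accept": "application/json"}

# Register the Solutions Enabler SMI-S provider; the VMAX/VNX Block arrays it
# manages then appear under Storage Systems once discovery completes.
provider = {
    "name": "SE-SMIS-01",
    "ip_address": "smis01.example.com",   # the Solutions Enabler host
    "port_number": 5989,                  # 5989 = SSL default, 5988 = non-SSL
    "use_ssl": True,
    "user_name": "admin",
    "password": "#1Password",             # Solutions Enabler default credentials
    "interface_type": "smis",
}
requests.post(VIPR + "/vdc/storage-providers",
              json=provider, headers=headers, verify=False).raise_for_status()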

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 30
The VPLEX discovery is very straightforward. To discover VPLEX, go to the Storage
Systems tab or the Storage Providers tab and select Add. From the drop-down, simply select
EMC VPLEX. Provide a name for the VPLEX system and the IP address of the management
server. Use the default port of 443 and the VPLEX service account. Finally, click Save and
wait for ViPR to discover the existing VPLEX. Notice the storage provider host is the VPLEX
management server. If all the prerequisites for VPLEX are met, the system will be
discovered successfully and show a status of Complete.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 31
To discover a standalone ScaleIO system, simply select Block Storage Powered by ScaleIO,
assign a name, and set the IP address or FQDN of the primary MDM under Storage Provider
Host. The default port for SSH is 22. Set a user account that has storage system
administrator privileges and a password, and click Save.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 32
With ViPR 2.0, Hitachi Data Systems (HDS) arrays are now supported in ViPR.
Communication between ViPR and HDS happens through the HiCommand Device Manager.
The Device Manager is similar to an SMI-S provider in that multiple storage arrays can be
managed from a central location. To add it, assign a name and set the IP address of the
Hitachi HiCommand Device Manager server. Enter the credentials and click Save. The
storage provider section will show the HiCommand Device Manager, while the Storage
Systems section will display all the HDS storage discovered by the Device Manager.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 33
ViPR uses the OpenStack Block Storage (Cinder) service to add third-party block storage
systems to ViPR. OpenStack must be configured on Red Hat Enterprise Linux, SUSE Linux
Enterprise, or Ubuntu. It requires both the identity service for authentication (Keystone)
and the block storage (Cinder) driver. With this, ViPR can leverage the multiple third-party
vendor drivers available through OpenStack. When discovering third-party systems, enter
the host name or IP address of the host containing OpenStack Cinder. Set the credentials
for this host and use an account with storage system administrator privileges.

When the discovery is complete, OpenStack will appear under storage providers, and all the
storage arrays it contains will be added to the storage systems.

Notice OpenStack cannot determine the WWPN of devices connected through FC or the IQN
of devices connected through iSCSI; it creates a default port under each storage array. You
need to modify this port and manually add the WWPNs and IQNs of the devices being
managed. If you do not, export operations will not be available.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 34
If the third-party block storage is connected through Fibre Channel (FC), at least one
storage port World Wide Port Name (WWPN) must be manually added to ViPR through the
ViPR CLI. You will need at least one valid WWPN for the storage port before continuing;
a scripted equivalent is sketched after the steps below.

Use ViPR CLI to list the storage arrays, find the one that belongs to the OpenStack provider
and note the last three digits of the serial number.

Use the ViPR CLI to get the storage port network ID for the default storage port of the
storage array.

Bind the WWN to the default storage port and make sure the WWN is now valid.
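
The official workflow uses the ViPR CLI; the Python sketch below performs the equivalent steps
against the REST API and is illustrative only. The storage-port routes are drawn from the
ViPR/CoprHD REST reference as best recalled, the port_network_id field is assumed to be the
updatable attribute that carries the WWPN, and the WWPN shown is an example value.

import requests

VIPR = "https://vipr.example.com:4443"             # hypothetical address
headers = {"X-SDS-AUTH-TOKEN": "<token from /login>",
           "Content-Type": "application/json",
           "Accept": "application/json"}

array_id = "<URN of the OpenStack-backed storage system>"

# Find the default storage port that ViPR created for the Cinder-managed array.
ports = requests.get(VIPR + "/vdc/storage-systems/%s/storage-ports" % array_id,
                     headers=headers, verify=False).json()
default_port = ports["storage_port"][0]["id"]      # list key as best recalled

# Bind a valid WWPN to the default port so that export operations become
# possible; "port_network_id" is an assumed field name -- verify it.
update = {"port_network_id": "50:00:09:73:00:18:95:01"}   # example WWPN only
requests.put(VIPR + "/vdc/storage-ports/%s" % default_port,
             json=update, headers=headers, verify=False).raise_for_status()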

After the WWPN is added to the storage port, you can perform export operations on the
storage system from ViPR. At the time of the export, ViPR reads the export response from
the Cinder service. The export response will include the WWPN, which was manually added
by the system administrator from the ViPR CLI, and any additional WWPNs listed in the
export response. ViPR then creates a storage port for each of the WWPNs listed in the
export response during the export operation.

After a successful export operation is performed, the Storage Port page displays any newly
created ports, in addition to the Default storage port.

Each time another export operation is performed on the same storage system, ViPR reads
the Cinder export response. If the export response presents WWPNs, which are not present
in ViPR, then ViPR creates new storage ports for every new WWPN.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 35
VNX File discovery happens from the same location as VPLEX and other storage arrays.
When you choose to create a new storage system, select EMC VNX File as the type. The
options will update to show specific characteristics for VNX File discovery. Notice VNX File
requires two types of discovery.

The first requires the IP address of the control station with the admin credentials. The
default credentials for a VNX File are nasadmin/nasadmin.

Additionally, as you continue to move down, VNX File requires the SMI-S provider IP address
and credentials. Notice the IP address for the SMI-S provider is the same as the control
station. This is because the control station has a built-in SMI-S provider.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 36
Isilon discovery requires the Isilon IP address, port, username, and password. When Isilon
is selected as the type of storage to be discovered, an IP address is requested. The only
IP address that ViPR accepts is the SmartConnect IP address. The default port number is
8080. Isilon also has two types of users, a root and an admin user; ViPR requires the
credentials of the root user to complete a successful discovery.

Click save to complete the Isilon discovery.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 37
NetApp discovery is very similar to Isilon. Select NetApp from the type of storage system.
Enter the NetApp Data ONTAP IP address, port, and credentials to discover.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 38
All the storage arrays give us the opportunity to register and deregister the storage pools
and ports that were discovered. In the example above, we see the physical pools on a VMAX
array.

Notice the check next to the name means the storage pool is registered. When a storage
pool gets unregistered, it can no longer be used by ViPR to provision resources. However,
devices that were provisioned from that storage pool will remain provisioned.

When the name of the storage pool is clicked, an administrator can set the maximum
utilization and the amount of oversubscription allowed for each storage pool. Additionally, a
limit on the number of volumes that can be created is also presented. If the volume limit is
left blank, no limit will be enforced.

The same is true for the storage ports. All the storage ports are shown in the storage ports
table. If we unregister a storage port, it will not be used in zoning operations.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 39
Once all the storage arrays are discovered, the next step is to discover Fabric Managers.
Fabric Managers are used to add a SAN network system to ViPR, the first step in making
SAN storage available for provisioning. When you add a SAN switch, ViPR discovers the
topology seen by the switch, and creates a network for each Cisco VSAN or Brocade fabric.

In the screenshot above, a fabric manager is added. The type selected is Brocade. ViPR
leverages the SMI-S provider included with the licensed versions of CMCNE. Notice the IP
address requested is that of the SMI-S host, not the actual switch. The SMI-S provider
configuration can be changed from within CMCNE.

Among the possible configurations for SMI-S are the credentials needed to establish a
connection. Credentials can be set for the SMI-S provider, or it can use the same
credentials as CMCNE. The default is to use the same credentials as CMCNE.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 40
Cisco discovery for ViPR is different from Brocade. Cisco switches must be discovered
through SSH. When Cisco is selected as the type of Fabric Manager, you are required to
provide a name, IP, port, and credentials. While ViPR calls the IP “Host”, notice that the
address must be that of the physical switch. The credentials used should also be the
physical switch’s credentials.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 41
RecoverPoint discovery takes place from the Data Protection Systems tab. It is recommended
to discover the fabric managers before RecoverPoint is discovered, because this topology
information will allow ViPR to determine where the RPAs are connected. If the fabric
managers used by RecoverPoint are discovered before RecoverPoint itself, then by pointing
at the cluster IP, ViPR will determine all the other RPAs in the environment.

Notice RecoverPoint discovery requires the host IP address of the management interface,
the port, and the admin user credentials. Once these are entered, save the new data
protection system.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 42
ViPR system administrators can learn the ViPR configuration requirements for compute
images, and how to add compute images to ViPR.

Compute images are operating system (OS) installation files (ISO images) that ViPR uses to
deploy operating systems on Vblock compute elements that have been registered to ViPR. If
ViPR is used to provision vSphere clusters, ViPR can also be used to add the cluster to a
vCenter datacenter that has been registered to ViPR.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 43
You can use the Hosts tab to add Windows, AIX, Linux, HP-UX and other hosts to which
volumes can be exported and attached.

The definitive list of hosts supported is provided in EMC ViPR Data Sheet and Compatibility
Matrix on www.EMC.com/getvipr.

When provisioning storage for a host, ViPR needs to communicate with the host in order to
validate the host and connect storage to it. For Linux hosts, ViPR will SSH into the host; for
Windows hosts, ViPR needs to execute remote PowerShell commands on the host.

When a host is connected to a Fibre Channel (FC) fabric, ViPR uses information from the
fabric manager to discover end-points, array ports, and host initiators and add them to a
network. A Discoverable checkbox is present in Windows and Linux discoveries. When the
checkbox is selected, ViPR will automatically discover the initiators for the host; if the
option is not selected, the initiators will need to be added manually.

However, when you add an IP-connected host, ViPR does not know if it is on the same IP
network as an array, so you need to manually add it to the IP network that contains the
array IP ports.

Notice ESXi servers should be configured under the vCenter tab and not the Hosts tab. ViPR
communicates with Linux hosts through SSH. To create a Linux host, give it a name and
provide the IP address, port, and root credentials. While the root user itself is not required
by ViPR, the user provided needs to have most of the permissions that root provides.
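
Host registration can be scripted too. A minimal sketch, assuming a token from /login as in the
earlier examples; the /tenants/{id}/hosts route and the host payload fields are best-recalled
assumptions from the ViPR/CoprHD REST reference, and all names and credentials are
placeholders.

import requests

VIPR = "https://vipr.example.com:4443"             # hypothetical address
headers = {"X-SDS-AUTH-TOKEN": "<token from /login>",
           "Content-Type": "application/json",
           "Accept": "application/json"}

tenant_id = "<tenant URN>"    # hosts are owned by a tenant

# Register a discoverable Linux host; ViPR SSHes in with these credentials and
# discovers its initiators. With "discoverable" set to False, the initiators
# would have to be added manually, as described above.
host = {
    "type": "Linux",
    "host_name": "lnx01.example.com",
    "name": "lnx01",
    "port_number": 22,
    "user_name": "root",
    "password": "RootPassword!",          # placeholder
    "discoverable": True,
}
requests.post(VIPR + "/tenants/%s/hosts" % tenant_id,
              json=host, headers=headers, verify=False).raise_for_status()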

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 44
Windows hosts can be discovered through WinRM. Windows Remote Management (WinRM)
is the Microsoft implementation of the WS-Management (WS-MAN) protocol [MS-WSMV]. In
a previous module we configured WinRM to listen for connections, which automatically
creates an exception in the Windows Firewall. Once the IP address of the Windows host is
entered, select the HTTP or HTTPS protocol for the communication. This automatically
updates the port to 5985 for HTTP or 5986 for HTTPS. Finally, complete the Windows
administrator credentials.

It is recommended to check "Validate Connection on Save"; this automatically tests the
connection and provides an error message if the discovery is not successful.

The third option when adding a host is "Other". This option is very limited and does not
scan for disks on the host that is added.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 45
The process for adding HP-UX and Other hosts is very similar, as both only require host
name and IP information; no login credentials are provided. When hosts are added this way,
initiators must be added manually. HP-UX hosts are specified separately from the "Other"
category because ViPR can provide VMAX flag configuration for hosts marked as HP-UX.

Adding hosts without discovering them, whether by using the HP-UX or Other types or by
unchecking the Discoverable checkbox on Linux and Windows, enables boot-from-SAN
capabilities. In a boot-from-SAN environment, the hosts are not yet configured, and
therefore initiator information must be added manually.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 46
ViPR provides the capability of adding or modifying initiators from the specified hosts. If the
discoverable option was selected on Windows and Linux, host initiators cannot be edited. If
the discoverable option was not selected, or for HP-UX and other hosts, WWNs can be
manually added.

When initiators are added, a node and a port WWN must be specified.

It is important to note that ViPR does allow initiators to be registered or unregistered from
a host.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 47
ViPR automatically detects Windows and Linux hosts that belong to clusters and provides
the ability to create a cluster and add hosts to it. The benefit of having hosts in a cluster is
to ensure provisioning can be performed for all the hosts in the cluster.

ViPR also allows the manual creation of clusters. From the Clusters tab, select New Cluster;
notice the only requirement for a new cluster is a name. As an example, we are creating an
HP-UX cluster.

When a cluster is created, edit it and click on Hosts to add hosts to the cluster. A window
opens with the list of available hosts; select the hosts that will be added to the cluster and
click Add. It is worth noting ViPR does not configure the hosts into a cluster; it merely
acknowledges that the hosts belong to a cluster and therefore performs provisioning for all
the hosts in the cluster.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 48
vCenter servers can be added under the vCenter tab. Simply add the IP address of the
vCenter server along with the username and password. ViPR allows storage to be exported
and mounted as datastores for ESXi hosts.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 49
This lesson covered the discovery process for storage arrays, fabric managers, protection
systems, hosts, and vCenters.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 50
The purpose of this lab is to discover all the physical assets that will be managed by the
ViPR Controller.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 51
The purpose of this lab was to discover all the physical assets that will be managed by the
ViPR Controller.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 52
This module covered ViPR projects, tenants and users. We created and assigned ViPR roles
to users and integrated ViPR with AD and LDAP. ViPR physical asset discovery was
performed.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 53
Copyright 2015 EMC Corporation. All rights reserved. ViPR Discovery and Configuration 54
This module focuses on virtual resource provisioning. It covers how to plan virtual arrays,
networks, and pools in a ViPR environment.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 1
This lesson covers an introduction to abstraction, virtual arrays, virtual pools, and the
virtual datacenter. It shows how to create virtual arrays and virtual pools, their best
practices and design considerations.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 2
Configuration of the ViPR virtual data center requires the addition of physical assets and
their organization into virtual storage arrays, referred to as virtual arrays, and virtual
storage pools, referred to as virtual pools.

For each physical data center, a virtual array defines the physical storage to be made
available to ViPR.

You create a virtual array by giving it a meaningful name and by assigning networks to it. A
network consists of the storage ports and the host or initiator ports connected to the SAN
switches that were added to ViPR as fabric managers.

The assignment of the network to the virtual array, and the subsequent association of the
virtual array with a virtual pool, determines the storage that is available when a user
requests a provisioning service.

You can limit the devices available to a virtual array by manually selecting the networks
that contribute to it, and by selecting the ports that contribute to each network.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 3
A virtual data center is a collection of storage infrastructure that can be managed as a
cohesive unit by data center administrators. Geographical co-location of storage systems in
a virtual data center is not required. However, high bandwidth and low latency are assumed
in the virtual data center.

You can deploy ViPR as a multisite configuration, where several ViPR Controllers control
multiple data centers in different locations. In this type of configuration, ViPR Controllers
behave as a loosely coupled federation of autonomous virtual data centers.

The virtual data center enables a ViPR administrator to discover physical storage and
abstract it into ViPR virtual arrays and virtual pools. These abstractions are key to enabling
software-defined storage. They also enable administrators to implement easy to understand
policies to manage storage.

These ViPR virtual array and pool abstractions significantly simplify the provisioning of block
and file storage. Users consume storage from virtual pools of storage that a
ViPR administrator makes available to them, which relieves storage administrators from
provisioning tasks. When end users provision storage, they need to know only the type of
storage (virtual pool) and the host/cluster to which the storage should be attached. They do
not have to know the details of the underlying physical storage infrastructure.
All ViPR resources are contained and managed in the virtual data center. The virtual data
center is the top-level resource in ViPR.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 4
The virtual array is a ViPR abstraction for the underlying storage infrastructure and the
network connectivity between hosts and that storage. The virtual array provides a more
abstract view of the storage environment for use in either applying policy or provisioning.

All physical arrays and commodity nodes participating in a virtual array should be connected
to the same fabrics or VSANs to ensure that they all have equivalent network connectivity
to the environment. When a storage administrator adds physical arrays to ViPR, ViPR
discovers their storage pools, ports, and configuration. And after FC switches are added,
ViPR automatically discovers and maps the FC networks. When populating a virtual array
with physical arrays and networks, the administrator must ensure that when storage is
presented from the virtual array to a host, that the host will be able to physically reach the
storage presented to it.

Having examined the connectivity between hosts and arrays, the administrator can build the
virtual arrays. When all hosts can reach all arrays, the entire storage infrastructure can be
grouped into a single virtual array; however, physical arrays may need to be placed into
separate virtual arrays to accommodate different physical configurations and different
requirements for fault tolerance, network isolation, or tenant isolation.

In the typical physical environment there are multiple arrays, each with its own
management tools, processes, and best practices. With the ViPR virtual array all of the
unique capabilities of the physical arrays are available, but ViPR automates the operations
of the tools, processes, and best practices to simplify provisioning storage across a
heterogeneous storage infrastructure. In this way ViPR can make a multi-vendor storage
environment look like one big virtual array.

With the physical arrays configured into ViPR virtual arrays, the administrator can now build
ViPR policies that are automatically applied across heterogeneous arrays.

Only ViPR users with a System Administrator role can create virtual arrays. Although the
end users who provision storage are aware of virtual arrays, they are unaware of the
underlying infrastructure components (such as shared SANs, computing fabrics, or
commodity nodes). Only the System Administrator has access to this information.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 5
The slide represents an example of traditional virtual pools in ViPR. Notice we have two
virtual arrays, each containing multiple physical pools. These physical pools are added to
the appropriate virtual pools that meet the selection criteria specified when creating them.
Notice physical pools can be assigned to a virtual pool manually or automatically. The
virtual pools can also be protected by RecoverPoint or VPLEX.

A common misconception is that there is an exclusive relationship between virtual arrays
and virtual pools. Virtual pools can have physical storage pools from multiple virtual arrays;
additionally, these physical storage pools can belong to multiple virtual pools.

A virtual pool represents a storage service offering from which you can provision storage. A
virtual pool can reside in a single virtual data center, or it can span multiple virtual data
centers.

ViPR has three types of virtual pools: block, file, and object.

Block virtual pools and file virtual pools are sets of block and file storage capabilities that
meet various storage performance and cost needs.

Object virtual pools store object data. Storage on underlying ViPR-managed file arrays or
commodity nodes backs them.

ViPR automatically matches existing physical pools on the ViPR-managed storage systems
to the virtual pool characteristics that the System Administrator specifies. The System
Administrator can enable ViPR to automatically assign the matching physical pools to the
virtual pool that he or she is creating, or the System Administrator can select a subset of
the matching physical pools manually to assign to the virtual pool.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 6
In a traditional data center, we can find multiple storage arrays connected to different
fabrics for block storage. For file storage, we only need IP connectivity between the storage
arrays and hosts. It is important to determine the best way to partition our physical
storage arrays into virtual arrays.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 7
When deciding how to partition physical storage into virtual arrays, it is important to
remember that the purpose of virtual arrays is to represent which devices can talk to which
other devices. In our block storage example, notice we created two separate virtual arrays.
This is because the storage in the first array has no communication with the storage in the
second array. If there were an ISL between SW 2 and SW 3, we could create a single virtual
array, since all devices would be able to communicate with each other.

In our example, we created a single virtual array for file storage. The hosts have no FC
connectivity to the block devices, so we will only use them for file storage. The Isilon, VNX
File, and NetApp should all be placed in the same virtual array since they all have
connectivity to all the hosts. While ViPR checks the network connectivity on the FC side, by
verifying what is attached to the switch ports and what the initiator and storage ports on
the hosts and storage are, this is not true for file.

For file, ViPR only verifies that a host can reach the storage; it does not care about where
the hosts or storage are attached.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 8
An entire network (VSAN or IP) can now be assigned to multiple virtual arrays. When we
associate a network with a virtual array, we automatically associate all the ports in that
network with that virtual array. This remains true when we associate that network with a
second virtual array. This supports the continuous availability use case for VPLEX Metro,
where ViPR can now provision a distributed volume to hosts on both sides participating in a
stretched cluster across metro distance. If a new port is added to the network, that new
port will automatically be associated with all virtual arrays associated with that network.

We can choose to simply associate ports directly with the virtual array, skipping the
assignment of the VSAN altogether. We can manually assign ports from a physical array to
a virtual array. This may be particularly useful in a scenario where the ports on a physical
array are designated into three groups: one for production, one for test, and one for
development. This scenario can be supported by manually assigning each set of ports to an
individual virtual array configuration. However, one rule remains: a port can be in one,
and only one, network/virtual array.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 9
The Number of Paths parameter determines the number of ports that the host will be
allowed to access on the storage array. Broadly speaking, it can be viewed as a way to
regulate storage network bandwidth for volumes allocated out of this virtual pool. By setting
its value higher, more storage array ports will be used for host I/O on the backend VMAX or
VNX arrays.

Three parameters that influence path count have been exposed. The benefit is the end user
has better control over the host-to-storage paths that result from a provisioning request.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 10
There are several considerations to keep in mind when dealing with the number of paths, as
ViPR will never allocate more than the specified maxPaths across all networks for a single
host. If you specify a maximum of two paths and there are four networks, ViPR will only use
two networks. However, it will try to evenly distribute the paths across all the available
networks.
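
To make the allocation rule concrete, here is a small illustrative model, written in ordinary
Python and not taken from ViPR's code, of how a maxPaths value is spread across the available
networks; the network names are invented for the example.

# Illustrative model (not ViPR source code) of the rule described above:
# never more than maxPaths across all networks, spread as evenly as possible.
def spread_paths(max_paths, networks):
    counts = dict.fromkeys(networks, 0)
    for i in range(max_paths):
        counts[networks[i % len(networks)]] += 1
    return counts

# A maximum of two paths with four available networks: only two networks used.
print(spread_paths(2, ["VSAN_10", "VSAN_20", "VSAN_30", "VSAN_40"]))
# {'VSAN_10': 1, 'VSAN_20': 1, 'VSAN_30': 0, 'VSAN_40': 0}

# Raising maxPaths to 6 distributes the paths evenly again.
print(spread_paths(6, ["VSAN_10", "VSAN_20", "VSAN_30", "VSAN_40"]))
# {'VSAN_10': 2, 'VSAN_20': 2, 'VSAN_30': 1, 'VSAN_40': 1}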

Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 11
Consider how this plays out in practice. Here we have a VMAX connecting to a host. The
host has two HBA initiators and the VMAX has four FA ports.

The storage administrator has configured the virtual pool with “maxPaths” set to 4 and
“pathsPerInitiator” set to 2.

Therefore, when ViPR provisions storage out of this pool, we will get four paths from the
host to the array and there will be two paths per HBA port.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 12
This is the same configuration on a VNX. Note that the recommended virtual pool
parameters are the same as for the VMAX.

Previously in ViPR, a single numPaths parameter would have ensured that there were
always two paths from host to array. Now ViPR supports configuring the paths in more
detail.
Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 13
In another configuration we have a host with four initiator ports while we still have four FA
ports on our array. Because we have set the “Paths Per Initiator” to 1 we get four separate
paths from the host to the array, each using a different initiator.
If we change this to set “Paths Per Initiator” to 2, we still use all four initiators, but now
each initiator is connected to two different FA ports on the array.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 14
Note that this is the recommended configuration for the VNX. Also note that the
recommended virtual pool configuration for a host with four initiators is different when the
host is connected to a VNX than it is for a similar host connected to a VMAX: for the VNX we
recommend two paths per initiator, while for the VMAX we recommend only one path per
initiator.
Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 15
The Change Virtual Pool service in the ViPR 2.1 service catalog can now be used to change
the tiering policy assigned to a volume. The service can be used to assign tiering policies,
change volumes from one tiering policy to another, or change volumes with a tiering policy
to one without a tiering policy.

At the moment, a tiering policy associated with a VMAX volume behind a VPLEX cannot be
changed. For volumes on a VMAX storage system, all volumes within the same storage
group must be selected and changed together. For VNX, the volume property is changed to
associate the volume with the new policy.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 16
This lesson covered an introduction to abstraction, virtual arrays, virtual pools, and the
virtual datacenter. It showed how to create virtual arrays and virtual pools, their best
practices, and design considerations.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 17
This lesson covers the steps to configure a virtual array and a virtual network.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 18
The ViPR virtual array abstraction represents physical connectivity between hosts and
storage arrays. Hosts within a virtual array can be provisioned with storage (either file or
block) from any storage frames with ports on that virtual array.

When defining a virtual array, you must also tell ViPR whether or not to take responsibility
for SAN zoning. Notice you can create FC networks, IP networks, or both. If you are using
multi-tenancy in your environment, select "Grant Access to a Specific Tenant". Storage pools
can be selected for use in the virtual array.

Next, let's examine the steps for configuring virtual arrays in more detail.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 19
A Virtual Array (vArray) is a ViPR virtual asset that partitions a Virtual Data Center (VDC)
into groups of connected hosts, networks, and storage arrays. A vArray defines connectivity
rules between servers and storage resources. Briefly, a vArray represents networked access.

A vArray includes:
• SAN switches and Fabric Managers
• IP networks connecting the storage systems and hosts
• FC and IP ports from block and file arrays, VPLEX, RecoverPoint, and hosts
• Physical block and file storage pools

Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 20
The first option when Virtual Assets is selected is the creation of virtual arrays. When a
virtual array is added, the only parameter required is a name that will be used to identify
the virtual array.
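
For completeness, this creation step can also be scripted. A minimal sketch against the ViPR
REST API, assuming a token from /login as in the earlier examples; the /vdc/varrays route and
response fields follow the ViPR/CoprHD REST reference as best recalled here and should be
verified for your release.

import requests

VIPR = "https://vipr.example.com:4443"             # hypothetical address
headers = {"X-SDS-AUTH-TOKEN": "<token from /login>",
           "Content-Type": "application/json",
           "Accept": "application/json"}

# As in the UI, a name (label) is the only required parameter; networks,
# ports, and pools are associated with the virtual array in later steps.
r = requests.post(VIPR + "/vdc/varrays",
                  json={"label": "Boston_Block_vArray"},
                  headers=headers, verify=False)
r.raise_for_status()
print(r.json().get("id"))                          # URN of the new vArray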

Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 21
Once the new virtual array is created you need to configure it. We can see the name of the
virtual array which can be edited.

A virtual array can be configured for either Automatic or Manual SAN zoning.

With Automatic, ViPR will take responsibility for creating host to storage array zones on the
SAN fabrics as needed when performing a block storage provisioning service. ViPR default
generated zones will have names with the prefix:
“SDS_Hostname_HostWWN_ArrayLast4SerialDigits_ArrayPort”. However, the administrator
can access Physical Assets -> Controller Config to configure their own naming
conventions for the zoning as well as have ViPR import existing Aliases.

With Manual, ViPR will not create any zones; therefore, administrators MUST have zoning in
place or provisioning orders against storage in this virtual array will fail.

You can grant access to specific tenants, and assign commodity, block, or file storage to the
virtual array.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 22
When working in a multi-tenant environment, you can check the "Grant Access to Tenants"
checkbox. A list of the tenants in your system will be displayed, and you can select the
tenant or tenants that can access the virtual array. Once the tenants are selected, click
Save.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 23
When you look at the newly created virtual array, commodity storage, networks, storage
ports, storage pools, and even associated storage systems appear as 0. This is because
storage systems have not been added yet. When you select Add Storage System, a window
pops up with all the storage systems available. Notice you can search for storage systems
by name.

Select the storage systems that will be used and click Add to associate the storage systems
with the virtual array. Remember, physical storage is not exclusive to a single virtual array;
the same physical storage can be assigned to multiple virtual arrays. Commodity storage
can also be assigned to a virtual array.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 24
Once storage arrays are added to a virtual array, networks, storage ports, and storage pools
are automatically added. Notice the associated storage systems now display physical
storage arrays. There are no associated virtual pools since we have not created any yet. We
will now verify the storage ports, pools, and network configuration.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 25
When you select a storage port or storage pool from within your virtual array, it shows all
the ports and pools that are assigned to the virtual array. By default, ViPR assigns all the
resources of all storage systems that are discovered. For further granularity, simply select
the ports or pools and click Remove; this removes them from the virtual array.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 26
When block storage arrays are selected the switches and VSANs that are attached to those
storage arrays get added to the virtual array automatically. To add new VSANs or switches
simply click on Add and select the devices to add. This devices will bind with the virtual
array.

Network configuration is very important for virtual arrays since it represents which devices
can communicate with each other. When a virtual array is created, a network must be created
as well. Notice you can create both an IP and an FC network in a virtual array, although a
virtual array is not required to have both. When you expand an FC network, it shows
the storage arrays that are attached to that specific FC network.

Unlike FC networks, with IP networks ViPR doesn’t need to be aware of where everything is
attached. When an IP network is created in ViPR, the IP ports on the storage arrays are
added. When a device tries to communicate with these ports, ViPR tests connectivity
between the devices.

Networks can now be configured from multiple places. From the Virtual Array tab you can
click Networks on the right side of the array. This brings you to the Networks view for
that specific array. In this view you can add an FC network or an IP network, which will be
automatically assigned to the virtual array. The same is true when a virtual array is selected
and the Networks button is clicked.

Another alternative is to click the Networks tab. Here you will see all the VSANs and the
virtual fabrics (VFs) or physical switches that were discovered under Fabric Managers. The
Add button at the bottom allows the creation of IP networks.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 27
By drilling down into a VSAN we can select the specific array and host FC ports that should
be included in this ViPR FC Network. Notice we can also change the name of the VSAN and
specify which virtual arrays it is assigned to.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 28
Adding IP networks is very easy in ViPR. An IP network can be added from within a virtual
array when the Networks button is selected. There are two parameters required to create
an IP network: the name of the network, and all the virtual arrays that will be sharing the
network. The virtual array from which Networks was selected will be checked
automatically.

An IP network can also be created from the Networks tab. Once a network is created it will
be listed under networks.
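
For administrators who script their environments, the same network creation can be driven
through the ViPR REST API. The sketch below is a minimal example, assuming the documented
GET /login call (which returns a session token in the X-SDS-AUTH-TOKEN response header) and
a POST /vdc/networks endpoint; the appliance address, credentials, network name, and virtual
array URN are hypothetical placeholders.

  # Sketch: create an IP network and assign it to a virtual array via the
  # ViPR REST API. Endpoints and payload fields follow the ViPR/CoprHD API
  # documentation; host, credentials, and IDs below are placeholders.
  import requests

  VIPR = "https://vipr.example.com:4443"   # hypothetical ViPR virtual IP

  # Log in; ViPR returns the session token in a response header.
  login = requests.get(VIPR + "/login", auth=("root", "password"),
                       verify=False)       # self-signed cert in most labs
  headers = {"X-SDS-AUTH-TOKEN": login.headers["X-SDS-AUTH-TOKEN"],
             "Accept": "application/json"}

  payload = {"name": "ip_network_1",        # hypothetical network name
             "transport_type": "IP",
             "varrays": ["urn:storageos:VirtualArray:example"]}  # placeholder
  r = requests.post(VIPR + "/vdc/networks", json=payload,
                    headers=headers, verify=False)
  r.raise_for_status()
  print(r.json())                           # new network id and details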

Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 29
From the Edit IP Network window, click Add to add ports to the network. Notice the menu
has three options: Add Ports allows you to type in the IP address of a device not found
under hosts or arrays; Add Hosts shows all the host IPv4 and IPv6 addresses; and Add Array
Ports does the same for array ports. It’s important to add both hosts and arrays to the
network so the devices can communicate.

Additionally, IP addresses that are not registered in ViPR can also be added. This is useful
when planning future additions or changes to your physical environment.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 30
This lesson covered the steps to configure a virtual array and a virtual network.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 31
This lesson covers the steps to configure virtual pools.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 32
Virtual pools can be created for block, file or object. We will cover file and block pools in this
lesson. Virtual pools are created after virtual arrays have been created in ViPR. Virtual pools
must be associated with one or more virtual arrays.

Virtual pools are a collection of storage pools grouped together according to user-defined
characteristics.

For example, if your virtual array has a number of storage pools that can provide block
storage using SSDs, you can group those physical pools into a single virtual pool. In that
case, the performance and protection characteristics of the virtual pool would determine
that it provides high performance storage. Hence, when giving a name to the virtual pool,
you might choose "gold" or "tier1" to indicate that the storage provides the highest
performance.

The ViPR virtual pool is the source from which storage will be provisioned. The virtual pool
defines the policies that will be applied to select the arrays, connectivity, performance, and
protection applied in the provisioning process.

When a provisioning user requests the creation of a block volume from the "gold" virtual
pool, ViPR chooses the physical array/physical storage pool combination on which the
volume will be created. The virtual pool can comprise physical pools spanning a number of
arrays, so the actual array chosen could be any of them. The provisioning user does not
care which physical pool is chosen, only that it provides the level of performance consistent
with "gold" storage.

Listed here are the high level steps to create virtual pools.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 33
When creating a block virtual pool, a name, description, and selection of virtual arrays are
required. Once these parameters are set, the Storage Pools section will show how
many storage pools match the criteria. Additionally, a quota can be set; if enabled, enter the
maximum amount of storage, in GB, that can be allocated to this virtual pool. While
defining the virtual pool criteria, it is recommended to change the criteria one at a time and
expand Storage Pools to check which storage pools matching the criteria are available.

The pool matching algorithm runs shortly after a criterion has been selected, and the
matching pools will come from all systems that can provide pools supporting the selected
protocol.
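
For completeness, the same pool definition can be expressed through the REST API rather than
the UI. This is a hedged sketch: the POST /block/vpools endpoint appears in the ViPR/CoprHD
API, but the exact payload fields should be checked against the API reference for your
release; the token, names, and virtual array URN below are placeholders.

  # Sketch: define a block virtual pool via POST /block/vpools. Field
  # names follow the ViPR/CoprHD API but should be verified against the
  # API reference for your release; all values are placeholders.
  import requests

  VIPR = "https://vipr.example.com:4443"
  headers = {"X-SDS-AUTH-TOKEN": "<token from GET /login>",
             "Accept": "application/json"}

  payload = {
      "name": "gold",                       # virtual pool name
      "description": "Tier 1 FC block storage",
      "protocols": ["FC"],
      "provisioning_type": "Thin",          # thin or thick, never both
      "varrays": ["urn:storageos:VirtualArray:example"],  # placeholder
      "use_matched_pools": True,            # automatic pool assignment
  }
  r = requests.post(VIPR + "/block/vpools", json=payload,
                    headers=headers, verify=False)
  r.raise_for_status()
  print(r.json())                           # echoes the new pool's details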

Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 34
The first section for the virtual pool is the hardware section. It allows you to choose either
thin or thick provisioning - virtual pools cannot combine both. FC and/or iSCSI protocols can
also be selected. Other options include the type of drive and the system type. If a system
type is selected, it will present a list of RAID options specific to that storage type. When
FAST is available for a storage type, the auto-tiering policy option will be displayed. It
includes the different types of tiering configuration parameters available in the array.

If multi-volume consistency is enabled, the devices can be added to a ViPR consistency group.

The expandable option allows volumes to be expanded in a non-disruptive manner.
However, there are many limitations when virtual pools are set to expandable. Native
continuous copies are not supported, and most compute stack integrations do not support
virtual pools with the expandable attribute set.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 35
The SAN Multi-Path options control the number of paths configured. Minimum Paths, as
explained previously, is the minimum number of paths from the host to the storage array.
Maximum Paths is the maximum number of paths that can be configured per host. Paths
per Initiator is the number of paths (ports) to allocate to each initiator that is used. ViPR
will not allocate more paths than Maximum Paths allows. When Maximum Paths is set
too low, there may be unused initiators which will not be zoned to ports.
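
To make the interaction of these three settings concrete, the sketch below works through the
arithmetic. It is illustrative only, not ViPR's actual allocation algorithm; the numbers are
hypothetical.

  # Illustration of the Maximum Paths constraint described above; this is
  # not ViPR's allocation algorithm, just the arithmetic it implies.
  def zoned_initiators(initiators, max_paths, paths_per_initiator):
      # Each zoned initiator consumes paths_per_initiator ports, and the
      # total may not exceed max_paths, so only this many can be zoned.
      return min(initiators, max_paths // paths_per_initiator)

  # Host with 4 initiators, Maximum Paths = 4, Paths per Initiator = 2:
  used = zoned_initiators(4, 4, 2)
  print(used, "initiators zoned,", 4 - used, "left unzoned")  # 2 and 2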

The High Availability option allows you to disable high availability or configure VPLEX Local
or VPLEX Distributed. With VPLEX Local, the virtual pool determines which storage pools
support it. If VPLEX Distributed is selected, options for the remote (distributed) virtual array
and virtual pool must be set. RecoverPoint can also be integrated by setting the VPLEX
volume as the RecoverPoint source.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 36
The first three Data Protection options refer to array-based local protection mechanisms.

Maximum Native Snapshots is the maximum number of local snapshots allowed for
resources from this virtual pool. To use the ViPR Create Snapshot services, a value of at
least 1 must be specified.

Maximum Native Continuous Copies is the maximum number of native continuous copies
allowed for resources from this virtual pool. To use the ViPR Create Continuous Copy
services, a value of at least 1 must be specified.

Native Continuous Copies Virtual Pool enables a different virtual pool to be specified which
will be used for native continuous copies. Native continuous copies are not supported for
virtual pools with the expandable attribute enabled.

Remote Protection enables volumes created in the virtual pool to be protected by a
supported protection system. Notice the remote protection options are displayed in a
drop-down, allowing only a single method of remote protection. VMAX SRDF and
RecoverPoint are the two protection mechanisms available.

Access control is used with multi-tenancy where tenants can be granted access to specific
virtual pools.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 37
Ultimately the purpose of a Virtual Pool is to identify the specific physical pools that satisfy
the policy requirements for this Virtual Pool. Physical pools have a variety of characteristics.
When defining a Virtual Pool we must identify which physical pools we intend to use for
provisioning. This is accomplished by selecting specific criteria such as Virtual Array,
storage type (block or file), connectivity (FC, iSCSI, NFS, CIFS), performance, and
protection. As these criteria are assigned to the Virtual Pool, the list of physical pools
available and shown in the Storage Pool selection pane, will change to accommodate the
specific criteria.

When creating or editing a virtual pool, the UI helps you choose the physical pools that
match the performance and protection criteria you are looking for by providing a set of
criteria and listing the pools that match. The storage pools table lists all of the pools
currently available that match the criteria and is dynamically updated as you make
criteria selections.

If you set the pool to be a "manual" pool, you can select the storage pools that will
comprise the pool. These storage pools will be fixed unless you edit the virtual pool.

If you select "automatic", the storage pools that comprise the virtual pools will be
automatically updated during the virtual pool's lifetime based on the availability of storage
pools in the virtual array.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 38
File virtual pools are very similar to block virtual pools in what is requested of a user to
filter the available pools. The type of provisioning must be selected; thin and thick cannot
both be selected. As protocols, CIFS and NFS are available, and you can choose both if
desired. The system type shows all the available storage.

Data protection has a single option for the maximum snapshots.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 39
This lesson covered the steps to configure virtual pools.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 40
This lab covers how to implement virtual resources such as virtual arrays, virtual networks,
and virtual pools.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 41
This lab covered how to implement virtual resources such as virtual arrays, virtual
networks, and virtual pools.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 42
This module covered virtual resource provisioning. It covered how to plan virtual arrays,
networks, and pools in a ViPR environment.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 43
Copyright 2015 EMC Corporation. All rights reserved. ViPR Virtual Resource Provisioning 44
This module focuses on the ViPR service catalog. In it users will understand how to create
and modify services and configure a service catalog for multi-tenancy.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Service Catalog 1
This lesson covers catalog services: modification, deletion, and customization of services.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Service Catalog 2
The User view provides access to the UI areas required for running services, working with
orders, and reviewing the storage resources that you have created. In addition, if you are
an approver or root user, it provides access to the approvals area.

Access and use of the service catalog within the User view does not require any
administrator privileges; it is accessible to any user who has been configured to access the
tenant. In addition, object storage users who are tenant members can use the UI to
self-generate an object store key to enable access to ViPR object storage.

When accessing ViPR from the User view, the categories of the service catalog, and the
services within each category, that are visible to you can be configured by a Tenant
Administrator. In addition, the storage resources (block volumes and file systems) that you
have access to depend on the project to which you are assigned by the Tenant
Administrator.

If you are a Tenant Administrator, restrictions on access to the service catalog, or to
resources based on project membership, do not apply; a Tenant Administrator has ultimate
authority in the tenant and can access any area of the service catalog, create
resources for any project, and access resources belonging to any project.

There are administrator tasks that must be performed in order to prepare the ViPR virtual
data center for use by provisioning end-users.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Service Catalog 3
The menus within the User view enable provisioning users to access the service catalog and
execute services which result in the creation of orders.

The following table describes the system features provided by the User view menu and
describes how access permissions affect what you can see and do.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Service Catalog 4
The ViPR Service Catalog provides access to a set of predefined services, which includes
high-level operations to carry out common provisioning activities, such as creating a block
storage volume and exporting it to a host or cluster, as well as "building block" services
that perform more granular operations, such as creating a ViPR volume or exporting the
storage to a host in separate operations.

ViPR services are organized in categories. Default categories group block operations, file
operations, and object operations. Additionally, host-specific categories exist. Categories can
be added and modified from the service catalog with the proper access roles. The service
catalog varies depending on the permissions of the user viewing it.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Service Catalog 5
The categories in the catalog enable you to select a pre-configured service appropriate
to the storage operation that you want to perform. The services are grouped
logically into categories to make them easier to find.

The initial service catalog home page for the User view is shown in the slide. When you
open one of the categories you will see the services within that category.

Additionally, System Administrators can create new categories or delete categories within
the service catalog. This enables users to have access to only the actions they want to
perform.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Service Catalog 6
As a provisioning user, you can only create resources and perform operations on resources
belonging to projects that you are assigned to (or are the owner of). If you are a Tenant
Administrator, you can run all services and choose any project to be the owner of the
resource.

Block Storage Services can be divided into five sections.

• In the first section, you can create and remove volumes and/or assign them to a host. If
you create a block volume for a host, it creates a block volume for the specified host. The
host to which you want to attach (export or mount) a block volume must have been
configured as a ViPR host asset so that ViPR knows how to connect to it.

• The second section has the expand volume option. It expands an existing block volume.

• The third section allows you to export a volume (including VPLEX volumes) to a host. It
makes an existing volume available to a host. The host to which you want to attach
(export or mount) a block volume must have been configured as a ViPR host asset so
that ViPR knows how to connect to it.

• The fourth section is used to discover storage arrays and ingest them to be managed by
ViPR. The discover unmanaged volumes finds volumes within a virtual storage pool which
are not under ViPR management. The ingest unmanaged volumes imports unmanaged
block volumes, which have previously been discovered, into ViPR. The unmanaged
volumes must be in virtual storage pools associated with the array from which you want
to ingest.

Finally, the last section allows you to move volumes into a different virtual array (Moves a
VPLEX block volume to a different virtual array), and virtual pool (Moves a VPLEX volume to
a different virtual pool).

Copyright 2015 EMC Corporation. All rights reserved. ViPR Service Catalog 7
The block protection services enable the creation of snapshots, full copies and continuous
copies for a volume, and enables failover of a volume.

• Failover Block Volume - Performs a disaster recovery failover operation using
RecoverPoint. The volume must have been created in a pool that supports RecoverPoint
protection.

• Create a Block Snapshot - Creates a snapshot of a block volume.

• Restore a Block Snapshot - Restore a previous snapshot.

• Remove a Block Snapshot - Deletes a snapshot of a block volume.

• Create a Full Copy of a Block Volume - Creates a full copy of a block volume. The
virtual pool must have been configured to allow native snapshots.

• Create Snapshot Full Copy – Creates a full copy of a snapshot.

• Create a Continuous Copy of a Block Volume - Uses a continuous data protection
mechanism to create a continuous copy (a mirror) of a block volume. The virtual pool
that the volume belongs to must have been configured to allow continuous copies.

• Remove a Continuous Copy – Allows you to remove existing continuous copies of a
volume.

• Export VPLEX Volume - A new feature in ViPR 1.1 is the capability to export VPLEX
volumes to a host or cluster.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Service Catalog 8
The block services for Linux are similar to the traditional block services, with the added
benefit of mounting or unmounting the volume on the Linux host. Other block services
for Linux allow you to extend and mount volumes, or mount existing volumes in Linux.

Windows block services allow you to create and mount volumes, unmount volumes, and
expand volumes.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Service Catalog 9
The block services for VMware enable the creation of block volumes and mounting the
created volumes, or existing volumes, as a datastore on an ESX Host.

• Create a Volume and Datastore - Creates a block volume and attaches it to an ESX
host as a datastore. The vCenter to which you want to attach (export or mount) a block
volume must have been configured as a ViPR host asset so that ViPR knows how to
connect to it.

• Create a Volume for VMware - Creates one or more block volumes for VMware but
does not make them available as a datastore. The vCenter to which you want to attach
(export or mount) a block volume must have been configured as a ViPR host asset so
that ViPR knows how to connect to it.

• Create VMware Datastore Using Existing Volume - Creates a VMware datastore
using a previously created volume. A block volume created for an ESX Host available
within the VMware Datacenter must exist.

• Extend a VMware Datastore with a New Block Volume - Extends a VMware datastore
with a new block volume.

• Extend a Datastore with an Existing Volume - Extends a datastore using an existing
block volume.

• Delete a VMware Datastore and its Block Volume - Deletes a datastore and deletes
the block volume storage that backs the datastore. Any data on the volume will be lost.

• Delete VMware Datastore - Deletes a VMware datastore leaving the storage intact.

• Remove a Volume for VMware - Deletes the volume that was created for use as a
VMware datastore.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Service Catalog 10
The file storage services enable the provisioning of NFS exports or CIFS shares and their export to hosts.
The File Storage Services category has the following items:

• Create a Windows Share - Creates a CIFS share. The share can be mapped as a
Windows drive at the host. In ViPR, Windows share names can contain only alphanumeric
characters.

• Create a Unix Share - Creates an NFS export and sets the permissions on the export.
The export details are provided in the order and you can mount it when required. If you
want to create a file system and mount it on a VMware ESX Host in a single operation,
you should use the service provided for that operation.

• Expand File System - Expands an existing file system.

• Remove File System - Removes a file system.

• Discover Unmanaged File Systems - Finds file systems which are not under ViPR
management. The virtual array and virtual pool into which you want to ingest the storage
pools must exist when the discovery is performed. File systems will only be discovered on
file storage systems that have been added to ViPR as physical assets.

• Ingest Unmanaged File System - Imports unmanaged file systems, which have
previously been discovered, into ViPR.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Service Catalog 11
The file protection services enable file systems to be snapshot and for snapshots to be
restored. The File Protection Services category has the following items:

• Create File Snapshot - Creates a snapshot of a file system.

• Restore File Snapshot - Restores a file system snapshot.

• Remove File Snapshot - Removes the snapshot for a file system.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Service Catalog 12
The file services for VMware enable the creation of file systems and attaching the created
file systems, or existing file systems, as a datastore on an ESX Host.

• Create File System and NFS Datastore - Creates an NFS export and mounts it to an
ESX host as a datastore. The vCenter to which you want to attach an NFS export must
have been configured as a ViPR host asset so that ViPR knows how to connect to it.

• Create VMware NFS Datastore - Creates a VMware datastore from an existing NFS
export. An NFS export must already exist.

• Delete VMware NFS Datastore - Delete a VMware datastore leaving the NFS export
intact.

• Delete NFS Datastore and File System - Deletes a datastore and the associated NFS
export.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Service Catalog 13
VCE Block services allow integration of ViPR into a VCE environment. You can provision or
expand a cluster as well as work with the Bare Metal options available in VCE. Additional
options allow the administrator to decommission hosts and clusters, or update vCenter
systems.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Service Catalog 14
The Resources page of the User UI enables the file and block storage resources associated
with a selected project to be displayed. You can only see resources for projects that you
belong to.

The Project drop-down allows a project to be chosen. Selecting the Block Volume or File
System control displays the volume or file system information shown in the slide.

Selecting an entry in the table displays the details of the block volume or file system
resources.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Service Catalog 15
Services and categories can be added within a catalog. Categories organize services, while
services execute specific operations. Service creation allows an extra degree of granularity
that can be used to control user access to resources. When creating a service you can change
the title, description, image, and the category where it belongs. Additionally, you can
pre-set the parameters that are normally options during ordering. In our example the virtual
array, virtual pool, and project can be customized. Additional parameters can be configured,
such as limiting the maximum size of LUNs. Approvals and execution windows can be
configured for the service. Finally, ACLs can specify which users have access to the service.

The edit process is the same as the creation process, but the options are already pre-
established.
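
Catalog contents can also be inspected programmatically, which is handy when auditing what
each tenant or user can see. The sketch below is a hedged example: the endpoint path and the
response field names are assumptions based on the ViPR catalog REST API and should be
verified against the API reference for your release; the token is a placeholder.

  # Sketch: read the service catalog through the REST API. The endpoint
  # path and response field names here are assumptions based on the ViPR
  # catalog API; verify them against the API reference.
  import requests

  VIPR = "https://vipr.example.com:4443"
  headers = {"X-SDS-AUTH-TOKEN": "<token from GET /login>",
             "Accept": "application/json"}

  r = requests.get(VIPR + "/catalog/categories", headers=headers,
                   verify=False)
  r.raise_for_status()
  # Print each category's name and id so one can drill into its services.
  for cat in r.json().get("catalog_category", []):   # key name assumed
      print(cat.get("name"), cat.get("id"))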

Copyright 2015 EMC Corporation. All rights reserved. ViPR Service Catalog 16
This lesson covered catalog services: modification, deletion, and creation of services.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Service Catalog 17
This lesson covers service catalogs, catalog customization, and security configuration.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Service Catalog 18
When the service catalog section is selected, the currently configured service catalog is
shown on the screen. There are several important factors regarding user permissions in this
view. Since the root user is logged in, the tenant window allows you to select which tenant
to view the service catalog for.

The View Catalog button takes you to the user view of the catalog, from which you can order
services.

The Add Service button allows you to create services, as shown in the previous
screen. Similarly, the Add Category button allows you to add a new category to the service
catalog. Categories can be nested within other categories, so you can select any category
and add a new category within it. Edit Current Category will edit the parameters for the
category you are currently in.

Everything can be returned to the traditional view by selecting the Restore Default option.
Notice all the categories also have individual options for editing or deleting them.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Service Catalog 19
When Create a Category is selected, the following window is displayed. A title and
description are required for the category. Additionally, an icon can be selected from the
existing library or one can be uploaded. Since the new category was created from the Service
Catalog home, the home is selected as the parent category. However, if you want to change
where the category is contained, simply select a different parent category.

Since this is a new category there are no categories or services available to it. When you
choose to edit a category the same window will be displayed and the available categories
and services will be added to the list.

Finally, access control lists can be created where only specific users can see the category.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Service Catalog 20
Services in a category can be created from scratch. This requires a user to add a title,
description, icon, category, and filters. Alternatively, the edit catalog view shows the
Duplicate option on a service. This can be a faster option if a similar service already exists
and you want to make some changes.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Service Catalog 21
There can only be one Service Catalog per tenant. The provider tenant has edit permissions
on all service catalogs. ACLs will not restrict access to a Tenant Administrator. A Tenant
Administrator has ultimate authority in the tenant and access to the service catalog and
projects cannot be restricted using ACLs. ACLs can assign users or groups to projects,
categories and/or services. ACLs allow the assignment of users that exist within an
authentication provider. The group or specific user must have a role assigned in ViPR in
order for the ACL to accept the configuration.

When an ACL is configured on a resource, only the authorized user or group will be able to
see the resource.

If a tenant has multiple users, several categories or services can be configured for the
different users. The users will only see the services which they can access, but the tenant
administrator will see everything. Remember that within the service you have the option of
specifying multiple parameters that can limit its reach.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Service Catalog 22
The top bar shows the edit view of a service catalog. Notice that service catalogs are
specific to a tenant. Only the groups or users that have been assigned to the tenant will
have access to the service catalog. This is the first level of access control on service
catalogs.

The second window shows the bottom portion of any category or service. If the access
control list is enabled, it will ask whether it is being configured for a user or a group, the
type of permission, and the name of the user or group. The name will be validated against
the users or groups configured for the tenant.

Notice multiple ACL parameters can be added for a specific service or category. Once the
service or category is saved it will only be visible to the tenant administrator and the groups
or users specified.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Service Catalog 23
ACLs on the service catalog only have the use option. If no ACLs are used, all users can
see all the catalog categories and services. Notice in the pod1-student1-user view you can
see the category created in the previous slide. This is because the user was granted
permission to see the category. Other users that are not tenant administrators will not be
able to see the category. Additionally, the other VMware categories were assigned to the
pod1-student1-admin user, making them invisible to our current user.

Any service that you add to the new category will only be seen by this user.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Service Catalog 24
This lesson covered service catalogs, catalog customization, and security configuration.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Service Catalog 25
This lab covers how to create a custom catalog and customize services in the catalog.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Service Catalog 26
This lab covered how to create a custom catalog and customize services in the catalog.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Service Catalog 27
This module covered the ViPR service catalog. In it users learned how to create and modify
services and configure a service catalog for multi-tenancy.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Service Catalog 28
This module focuses on using ViPR to provision and manage file storage. We will define file
concepts and how to provision and protect file assets.

Copyright 2015 EMC Corporation. All rights reserved. ViPR File Services 1
This lesson covers the definition of file services and how to create, delete, and modify a file
asset.

Copyright 2015 EMC Corporation. All rights reserved. ViPR File Services 2
File storage has fewer options than block storage. Within file storage, you can create a
share on Linux and/or Windows.

For Windows, it creates a new file system from the selected virtual array based on the
criteria of the selected virtual pool, and creates the CIFS share for the Windows host.

For Linux it does the same, but instead of creating a CIFS share, it creates an NFS share
(also known as an export). The last two options are specific to VMware. You can create an
NFS datastore by using an existing NFS share created previously, or you can create a new
NFS share (or file system) and then mount a datastore off of it.

Copyright 2015 EMC Corporation. All rights reserved. ViPR File Services 3
When it comes to deleting file storage, ViPR does not differentiate between Linux and
Windows. You can remove a file system, which unmounts it if it was mounted to a host
and reclaims the storage on the array. This can be NFS or CIFS. VMware offers the options
to delete an NFS datastore, which only unmounts the datastore but keeps the file system,
or to delete the NFS datastore and the file system.

Copyright 2015 EMC Corporation. All rights reserved. ViPR File Services 4
On the file side, ViPR allows you to create a Windows share. Since this option takes place
from the File Services category, it does not mount the share. To create a Windows share,
select the virtual array. This will display the virtual pools that are available; select the
virtual pool, then the project. Finally set the share name, a comment or description, and
the size in GB.

On the right hand side, we can see the completion of the share creation, with the execution
steps.

Copyright 2015 EMC Corporation. All rights reserved. ViPR File Services 5
The CIFS share created previously was provisioned from an Isilon array. To map the share
on Windows, click Start > Run and point to the address of the share.

To verify the share properties, go to Isilon OneFS and expand the properties.

Copyright 2015 EMC Corporation. All rights reserved. ViPR File Services 6
ViPR treats file storage expansion the same way without differentiating between types of
hosts. There is a single option to expand a file system, which asks the storage to increase
the size of the file system. It also rescans the host once the file system has been expanded.

Copyright 2015 EMC Corporation. All rights reserved. ViPR File Services 7
An order is a record of a request to run a service. The order records the details of the
request, which service was requested, which parameters were specified in the service
request, whether the order is scheduled, and the outcome of the order submission.

The order history table tells you the service that was executed and the date and time at
which it was executed. The time displayed is UTC and your time offset is shown next to the
UTC time.

In addition, the table shows you the status of the order and displays an order ID that
provides a unique identity for the order that you can use if you need to refer to the order,
particularly where you want to ask your Tenant Administrator about some aspect of the
order.

The order details shows the order request information, details of the resource affected by
the order, and a summary of the execution steps.

The order details page has three areas. The Summary area is the top part of the order. It
displays the order identity and status, and displays the parameters passed to the operation.
All of the parameters passed to the service are displayed. If the order is scheduled to run in
an execution window, the status will display “Order Scheduled”. If the order requires
approval, it will display “Pending Approval”.

The second area is Affected Resources. This area shows the details of the resource that was
created as a result of the operation or in which the operation was run.

The final area is Order Details. This area is divided into logs, pre-check, execution steps, and
rollback.
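
Orders can also be tracked outside the UI. Below is a minimal sketch that polls an order's
status, assuming a GET /catalog/orders/{id} endpoint from the ViPR catalog REST API; the
endpoint, the status values, the token, and the order URN are assumptions or placeholders to
verify against the API reference. The order ID is the one shown in the order history table.

  # Sketch: poll an order via GET /catalog/orders/{id}. The endpoint and
  # the status values are assumptions based on the ViPR catalog API and
  # should be verified; token and order URN are placeholders.
  import time
  import requests

  VIPR = "https://vipr.example.com:4443"
  headers = {"X-SDS-AUTH-TOKEN": "<token from GET /login>",
             "Accept": "application/json"}
  order_id = "urn:storageos:Order:example"   # from the order history table

  while True:
      r = requests.get(VIPR + "/catalog/orders/" + order_id,
                       headers=headers, verify=False)
      r.raise_for_status()
      status = r.json().get("order_status")   # field name assumed
      print("order status:", status)
      if status not in ("PENDING", "EXECUTING", "SCHEDULED"):
          break                                # terminal state reached
      time.sleep(10)                           # check again in 10 seconds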

Copyright 2015 EMC Corporation. All rights reserved. ViPR File Services 8
This lesson covered the definition of file services and how to create, delete, and modify a
file asset.

Copyright 2015 EMC Corporation. All rights reserved. ViPR File Services 9
This lab covers file management such as provisioning and extending file systems, and
datastores.

Copyright 2015 EMC Corporation. All rights reserved. ViPR File Services 10
This lab covered file management such as provisioning and extending file systems, and
datastores.

Copyright 2015 EMC Corporation. All rights reserved. ViPR File Services 11
This lesson covers how to protect and assign high availability to file assets with ViPR.

Copyright 2015 EMC Corporation. All rights reserved. ViPR File Services 12
ViPR does not attempt to reinvent the wheel with yet another replication or protection
solution. It leverages proven solutions for protection that are already built into the arrays
natively or offered separately in array neutral environments.

Shown here are the protection technologies currently used by ViPR.

Copyright 2015 EMC Corporation. All rights reserved. ViPR File Services 13
There are three fundamental types of array-based local protection that can be leveraged.
Snapshots can be used for space efficient, point in time copies. Continuous copies leverage
cloning technology to keep a synchronized pairing of volumes. A third option, full copy,
provides a full point-in-time replica volume that is not kept synchronized.

With ViPR, full copy volume is the only one (of these three options) that can be exported to
a host. All three options are available when protecting block storage. However, for file
storage, only snapshots can be configured.

Copyright 2015 EMC Corporation. All rights reserved. ViPR File Services 14
When configuring virtual pools, the maximum native snapshots for the specified virtual pool
must be set. The default value is 0. If this value is left at the default, snapshots will not
be available for the given virtual pool. In our example, we set the maximum native
snapshots to 10. This means the volumes provisioned from this pool can only have a total of
10 snapshots among them. The snapshots can be configured later from the service catalog
under File Protection Services.

Copyright 2015 EMC Corporation. All rights reserved. ViPR File Services 15
ViPR enables resources to be grouped into projects and for related resources to be
managed as part of a consistency group.

Volumes can be assigned to consistency groups to ensure that snapshots of all volumes in
the group are taken at the same point in time.

The Admin > Tenant > Consistency Groups page lists the consistency groups that exist
and enables consistency groups to be added or deleted.

Consistency groups are associated with projects, so provisioning users will only be allowed
to assign volumes to consistency groups that belong to the same project as the volume.

Copyright 2015 EMC Corporation. All rights reserved. ViPR File Services 16
Consistency groups can be created and volumes assigned to them during provisioning
operations. This operation can be performed by a Tenant Administrator for any project or by
a Project Administrator for owned projects.

From the Admin view, go to Tenant and select Consistency Groups. Select the project that
will be used to add the consistency group, then select Add. Give the consistency group a
name and save it.

When deleting a consistency group, simply select the checkbox next to the desired
consistency group and select delete.

If the consistency group has volumes associated with it, it cannot be deleted.
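
The same operation can be scripted. A minimal sketch follows, assuming the
POST /block/consistency-groups endpoint documented for the ViPR/CoprHD block API; the
group name, project URN, and token are hypothetical placeholders.

  # Sketch: create a consistency group via POST /block/consistency-groups.
  # Endpoint per the ViPR/CoprHD API; name, project URN, and token below
  # are placeholders.
  import requests

  VIPR = "https://vipr.example.com:4443"
  headers = {"X-SDS-AUTH-TOKEN": "<token from GET /login>",
             "Accept": "application/json"}

  payload = {"name": "app1_cg",                         # group name
             "project": "urn:storageos:Project:example"}  # owning project
  r = requests.post(VIPR + "/block/consistency-groups", json=payload,
                    headers=headers, verify=False)
  r.raise_for_status()
  print(r.json())   # id of the new consistency group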

Copyright 2015 EMC Corporation. All rights reserved. ViPR File Services 17
All ViPR supported file arrays support snapshots as long as the appropriate licenses are
installed. Notice snapshot restores from ViPR are not supported on EMC Isilon.

The options provided in the File Protection Service category of the service catalog allow you
to create a snapshot, restore to the snapshot, or delete the snapshot.

Copyright 2015 EMC Corporation. All rights reserved. ViPR File Services 18
Creation and deletion of file snapshots is very simple. To create a file snapshot, select
Create File Snapshot from the File Protection Services category. Select the project and file
system, and give a description to the snapshot.

Deleting a snapshot is also simple. From the Protection Services category, select Remove
File Snapshot. Select the project and file system to display all the snapshots allocated to
the file system. Check the one you want to delete and click Order.
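
Both operations have REST equivalents. Below is a minimal sketch of the snapshot creation
call, assuming a POST /file/filesystems/{id}/protection/snapshots endpoint from the ViPR
file API; the file system URN, snapshot name, and token are hypothetical placeholders.

  # Sketch: snapshot a file system via the ViPR file API. The endpoint
  # follows the ViPR/CoprHD documentation but should be verified; the
  # URN, name, and token are placeholders.
  import requests

  VIPR = "https://vipr.example.com:4443"
  headers = {"X-SDS-AUTH-TOKEN": "<token from GET /login>",
             "Accept": "application/json"}

  fs_id = "urn:storageos:FileShare:example"    # file system URN
  payload = {"name": "fs1_snap_2015-05-01"}    # snapshot label
  r = requests.post(VIPR + "/file/filesystems/" + fs_id +
                    "/protection/snapshots",
                    json=payload, headers=headers, verify=False)
  r.raise_for_status()
  print(r.json())   # asynchronous task describing the snapshot creation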

Copyright 2015 EMC Corporation. All rights reserved. ViPR File Services 19
File snapshots are used to keep track of a point-in-time state of a file system export.
Changes to the export are tracked in the snapshot. When a snapshot is restored, it reverts
the changes and returns to the original point in time when the snapshot was created. This
deletes all the changes that happened after the snapshot was created.

For this reason, the restore file snapshot window warns you that the operation can result in
data loss. To restore a file snapshot, select the project, file system, and snapshot that
should be restored.

Copyright 2015 EMC Corporation. All rights reserved. ViPR File Services 20
This lesson covered how to protect and assign high availability to file assets with ViPR.

Copyright 2015 EMC Corporation. All rights reserved. ViPR File Services 21
This lab covers file protection such as snapshot services.

Copyright 2015 EMC Corporation. All rights reserved. ViPR File Services 22
This lab covered file protection such as snapshot services.

Copyright 2015 EMC Corporation. All rights reserved. ViPR File Services 23
This module covered using ViPR to provision and manage file storage. In it, we defined file
concepts and how to provision and protect file assets.

Copyright 2015 EMC Corporation. All rights reserved. ViPR File Services 24
This module focuses on using ViPR to provision and manage block storage. In it we will
define block concepts and how to provision and protect block assets.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Block Services 1
This lesson covers the definition of block services and how to create, delete, and modify a
block asset.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Block Services 2
Create Block Volume creates one or more volumes from the selected virtual array and
virtual pool without exporting them to a host or cluster. ViPR is in charge of orchestrating
and organizing workflows so hosts, storage, and networking know exactly what to do. In this
case the hosts don’t do anything because the volumes are not provisioned to a host, simply
created.

When you choose to export a volume to a host, ViPR creates the exports from the volume to
the host or cluster. An export makes the host aware of a volume, but the volume cannot be
written to or read since it hasn’t been mounted. Notice this service requires an existing LUN;
it only exports the volume, it does not create it.

The next service, Create Block Volume for a Host, combines the creation and exporting of a
volume.

In the red box, the create, export, and mount volume options are not found in the Block
Services category but in specific categories for Windows, Linux, or VMware hosts. These
services go through the process of creating, exporting, and also mounting the volume. This
means the volume will be visible to the Linux file system, a drive letter will be assigned and
mounted on Windows, or a datastore will be created in VMware.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Block Services 3
Everything that was created on the block volumes can be reverted. The first option is only
available for volumes that are not exported to a host. Here you can destroy an existing
volume. Notice the volume must have been created through ViPR or ingested; otherwise
ViPR will not be aware of the volume. You can also unexport a volume. This does not delete
the volume; it simply makes it invisible to the hosts. The volume will continue to exist.

The Remove Volume for Host service unexports the volume and deletes the LUN. These first
three options are under the Block Services category. The next two are specific to hosts. You
can unmount a volume, which doesn’t destroy it, or unmount and delete a volume.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Block Services 4
In the slide we demonstrate how to create and mount a block volume in Linux. As a
provisioning user, you can only create resources and perform operations on resources
belonging to projects that you are assigned to (or are the owner of). If you are a Tenant
Administrator you can run all services and choose any project to be the owner of the
resource.

The host to which you want to attach (export or mount) a block volume must have been
configured as a ViPR host asset so that ViPR knows how to connect to it.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Block Services 5
To verify the volume creation, log in to the Linux host and use the PowerPath command
displayed above (typically powermt display dev=all) to show the devices. From the command
output we can see the device is a Symmetrix LUN, which is accessible through two ports.

The inq utility is also useful for seeing all the devices available to the host. Notice in the
output of the command we see sdc and sdd; these are in fact the same device seen by the
host from two different paths. The last device, emcpowera, is the PowerPath pseudo-device
made up of sdc and sdd.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Block Services 6
In order to create a volume and a datastore, you need to set the datastore name and the
protocol you will use to attach to the volume. The protocol options are FC and iSCSI.
Next select the vCenter from the drop-down list. This will populate the datacenters
managed by that vCenter Server. When a datacenter is selected, a list of ESXi hosts will
become available.

Once the ESXi host is selected, choose the virtual array, virtual pool, and project where the
volume will be provisioned. Finally give the volume a name, a consistency group (optional),
and a size.

On the right side of the slide we can see the order was successful, the volume provisioned
and the Datastore created.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Block Services 7
To verify the datastore was created in vCenter, navigate to Inventory > Datastores and
Datastore Clusters. Expand the desired data center and the datastore should be available.
Click Datastore to see more information.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Block Services 8
With added support for provisioning to an entire host cluster in a single order execution, it
has become necessary to add a new field to the order form for certain block and file
services. This is the “Storage Type” field, which can be set to either “Shared” or
“Exclusive”. Shared should be used when provisioning to all hosts within a cluster.

Specifying Exclusive will ensure that the provisioned volume is accessible only by the
particular host to which it is provisioned. Listed here are all the services that have this
additional field; note that this is limited to Windows and ESX clusters only.

ViPR does not do any independent checking or verification for Linux hosts specified as
clustered hosts. The variety of clustering and file system options for Linux prevents ViPR
from doing this at this time. If shared storage is provisioned for a Linux host, it must be
mounted manually using whatever cluster technology you have.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Block Services 9
Here is an example of provisioning shared storage service to a cluster. If you select ‘Shared’
from Storage Type, it will provision storage to all the hosts within that cluster. If you
choose ‘Exclusive’ you will have to specify the host for which you want to provision the
volume.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Block Services 10
We can use the ability to defer ViPR “discovery” to provision a Boot LUN to a host.

First, add a new host:
• Leave the ‘Discoverable’ check box unchecked
• Configure initiators for the new host manually

Use “Provision Block Volume for Host” service to provision a boot LUN with an explicitly
specified HLU for the new host:
• Configure the HLU for the volume. A value of ‘-1’ means automatically assign. Other
values (such as 0) can be assigned and then used to configure the host with the boot
LUN ID
• The results from this service will provide all the parameter values needed to configure
the “boot from LUN” feature on the HBA

From the Bare Metal Host:
• Configure HBA BIOS settings for the boot LUN

On the Provisioned Volume:
• Install supported OS and configure settings (WinRM for Windows)

After booting, edit the ViPR Host entry and make the host ‘Discoverable’:
• When ViPR automatically discovers the host it will gather host information and
overwrite any existing information that must change

Provision data volumes via ViPR GUI as usual.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Block Services 11
Block volumes that are not bound to a host can be expanded by using the Expand Block
Volume service from the Block Services category. Here a volume is simply expanded
to the desired size. If you access the Windows- or Linux-specific categories, options are also
available to expand the LUN and rescan the host so it can identify the new size.

On the VMware side, datastores can also be extended. The first option is to extend a
datastore with a new volume. This creates a block volume and then adds it to the
existing datastore. Alternatively, if you have a volume that has already been created, you
can extend the datastore with the existing volume.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Block Services 12
Another capability in ViPR is to expand a block volume. The expansion process is very
simple: select a project, select a volume, and state the new size of the volume. It is
important not to mistake the “New Size” input for the amount you want to expand the volume
by. Here you actually need to set the total size the volume will have after it is expanded.

We see the expansion process take place; once this is complete the volume will be
expanded.
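
The total-size semantics carry over to the REST form of the operation. A minimal sketch
follows, assuming a POST /block/volumes/{id}/expand endpoint from the ViPR block API; the
volume URN and token are placeholders, and the size-string format should be verified against
the API reference.

  # Sketch: expand a volume via POST /block/volumes/{id}/expand. Endpoint
  # per the ViPR/CoprHD API; URN and token are placeholders. "new_size"
  # is the total size after expansion, not the amount to grow by.
  import requests

  VIPR = "https://vipr.example.com:4443"
  headers = {"X-SDS-AUTH-TOKEN": "<token from GET /login>",
             "Accept": "application/json"}

  vol_id = "urn:storageos:Volume:example"
  current_gb, grow_by_gb = 100, 50
  # Size string format assumed to follow the volume-create convention.
  payload = {"new_size": str(current_gb + grow_by_gb) + "GB"}  # 150GB total
  r = requests.post(VIPR + "/block/volumes/" + vol_id + "/expand",
                    json=payload, headers=headers, verify=False)
  r.raise_for_status()
  print(r.json())   # asynchronous task for the expansion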

Copyright 2015 EMC Corporation. All rights reserved. ViPR Block Services 13
This lesson covered the definition of a ViPR LUN and how to create, delete, and modify a
LUN using the ViPR UI.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Block Services 14
This lab covers block management such as provisioning and extending LUNs.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Block Services 15
This lab covered block management such as provisioning and extending LUNs.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Block Services 16
This lesson covers protecting block data with array-based protection. It also covers
appliance-based protection and HA through RecoverPoint and VPLEX.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Block Services 17
ViPR does not attempt to reinvent the wheel with yet another replication or protection
solution. It leverages proven solutions for protection that are already built into the arrays
natively or offered separately in array neutral environments.

Shown here are the protection technologies currently used by ViPR.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Block Services 18
The protection services for ViPR combine the capabilities of RecoverPoint and native array
protection. The failover of block volumes can only be performed on RecoverPoint protected
virtual arrays. Snapshots, Full Copies and Continuous Copies can be performed by VNX and
VMAX native capabilities.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Block Services 19
In order to use these array-based protections the volume being protected MUST be
managed by ViPR. In addition, snapshots, continuous copies, and full copies can only be
created for volumes from Virtual Pools that have Maximum Native Snapshots or Continuous
Copies greater than zero.

The Maximum Native snapshots/continuous copies refers to the maximum number of copies
you can allow out of this pool.

The snapshots themselves are not created automatically. A user must select the “Snapshot
Service” from the Service Catalog to initiate any snapshot or continuous copy for any
volume created from this Virtual Pool.

The maximum snapshots and copies setting is a limit on all snapshots and copies made from
all volumes provisioned from this pool. It does not apply to each volume individually.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Block Services 20
The snapshot creation process is very similar to the one for file arrays. The
requirements for a snapshot are a project, a block volume, the type of snapshot that will be
created, and the name of the new snapshot. The name is only used as a description within
ViPR. Snapshots can be restored and removed on both VMAX and VNX arrays.

Full and continuous copies require the same parameters: the project, the volume
that will be copied, and the name of the copy. The number of copies can also be specified.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Block Services 21
Continuous Availability

Many customers have sites that are distributed across metro distances. To maintain
continuous availability of their applications in case of a site failure, they use VPLEX Metro to
continuously mirror the data across the sites and make the data available for active/active
use across both sites. This enables applications like VMware vMotion to migrate the
workload from one site to another in case of a disaster, and also allows applications like
Oracle RAC to run in a continuous availability mode across sites.

Clones

A customer is introducing a new processing system and wants to test the system with the
production application data in a test environment. The production application is running in a
VPLEX environment.

Change Service Levels

A customer is currently hosting an application on VNX-based storage. This application is
enabled to be highly available through VPLEX Metro connected to a VNX in another
datacenter. As more and more departments have started leveraging this application, it has
risen in importance and requires more resources to support its needs. The customer decides
to migrate the application to a VMAX-based array non-disruptively.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Block Services 22
Tech Refresh

Customer wants to replace their old VMAX system with the new VMAX system they just
received. Both systems are visible to VPLEX.

The slide also shows additional use cases automated by ViPR in a VPLEX environment.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Block Services 23
The two sides of a VPLEX Metro have to be in different virtual arrays. Logically, a VPLEX
Metro links two virtual arrays with an HA connection.

In ViPR, the two ends also have to be attached to different fabrics/VSANs; that is, ViPR
doesn't support a single fabric spanning sites with both Metro ends connected to that fabric.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Block Services 24
With VPLEX Distributed, you must specify a destination Virtual Array at the VPLEX remote
site. If you have hosts to be provisioned with distributed volumes at both sites then you
must define a separate destination virtual pool at each site.

For VPLEX Local protection, ViPR will just encapsulate the array volume one-for-one and
present it to the host as a VPLEX volume. No further qualifiers are needed because this can
all be done out of one Virtual Pool.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Block Services 25
In order to have a VPLEX Distributed Volume users must order a block volume by selecting
a Virtual Pool that has the VPLEX Distributed remote protection option enabled.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Block Services 26
ViPR 1.1 has added SRDF to the list of supported remote data protection protocols. ViPR will
discover the RDF configuration info from the VMAX systems it discovers. Additionally, it will
automatically select the source and target VMAX based on the Virtual Array and Virtual Pool
configuration, create the source and target volumes, and establish the SRDF connection with
the respective policy.

ViPR 1.1 supports R1 to R2 and R1 to many R2s in both SRDF Sync and Async modes. ViPR
supports establishing the SRDF link, fracture, detach, pause, resume, failover and failback,
and snapshots on R1 and R2 volumes in SRDF Sync mode. ViPR does not support consistency
groups in Sync mode.

In Async mode, ViPR supports establishing SRDF links with empty RA groups and adding
volume pairs to RA groups having existing volumes. ViPR doesn't have the intelligence to
reuse existing RA groups in Async mode; hence the user needs to create an empty Async RA
group [Format <V-project>..].

Fracture, detach, start, pause, resume, failover, and failback can be done on the whole
consistency group, but not on individual volume pairs. There is no separate API for failback,
as ViPR includes the intelligence to decide between executing a failover or a failback based
on the state. Deleting a volume in Async mode requires fracturing the CGs, detaching the
CGs, removing the volume from the CG, and re-establishing the Async relationship.
Snapshots, however, are NOT supported in Async mode.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Block Services 27
You must create a project to which you can add SRDF-protected volumes.
• In User mode, select Tenant Settings > Projects.
• Click Add, then enter a project name:
– Must be the same as the RDF group name on the VMAX.
– Must be 10 characters or less, per Symmetrix RDF-naming restrictions.
Verify full RDF requirements at http://www.emc.com/techpubs/vipr/srdf-3.htm

Copyright 2015 EMC Corporation. All rights reserved. ViPR Block Services 28
EMC RecoverPoint provides continuous data protection with multiple recovery points to
restore applications instantly to a specific point in time. You can set up virtual pools and
arrays in ViPR so users can take advantage of RecoverPoint in block storage requests.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Block Services 29
When adding physical assets to ViPR you need to add the RecoverPoint system as a physical
asset. You also need to add the storage systems that host the source volumes that will be
protected, the target volumes, and the journal volumes for both.

The fabric managers need to see the storage systems and the RecoverPoint sites.

When creating virtual assets, select RecoverPoint as the data protection type when creating
a virtual pool. Add one or two RecoverPoint copies, specifying the destination virtual array,
and optionally a virtual pool. The virtual pool specifies the characteristics of the
RecoverPoint target and journal volumes. Set the journal size as needed. You can accept
the RecoverPoint default (2.5 times protected storage), or you can specify a fixed number,
or a multiplier of the protected storage other than the default.
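
As a quick worked example of the sizing rule, here is a trivial Python helper (the function
name is ours, not part of ViPR) that computes a journal size from the protected capacity
and a multiplier:

    # RecoverPoint journal size for a given protected capacity, in GB
    def journal_size_gb(protected_gb, multiplier=2.5):
        return protected_gb * multiplier

    print(journal_size_gb(100))        # 250.0 GB with the 2.5x default
    print(journal_size_gb(100, 1.5))   # 150.0 GB with a custom multiplier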

Services that leverage RecoverPoint are in the catalog under Block Protection Services.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Block Services 30
When creating a virtual pool for RecoverPoint, the configuration takes place in the Data
Protection section of the virtual pool. Notice the remote replication/availability section is a
drop-down that only allows a user to select EMC RecoverPoint, EMC VPLEX, or VMAX SRDF.
This means these protection mechanisms cannot be combined in the same virtual pool.

When RecoverPoint is selected, a section with RecoverPoint copies appears. This section
determines the type of protection RecoverPoint will provide. Note that even though the title
is “Remote Replication”, RecoverPoint can be used for local replication.

The journal size for the RecoverPoint source must also be specified here; the default
minimum is 2.5x the protected storage. The method of selecting where the RecoverPoint
copies are stored is described in the next slide.

Remote protection parameters cannot be modified in an existing virtual pool once volumes
have been provisioned from it.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Block Services 31
In the Virtual Pool creation window, RecoverPoint copies must be created. When Add is
clicked to set a new destination for the copies, the window displayed above opens. Under
RecoverPoint Virtual Array, a user can select the default virtual array (the one where the
source is located), a local virtual array, or a remote virtual array. The pool within the
virtual array is also selected, together with the destination journal size.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Block Services 32
In order to have RecoverPoint protection you must create a volume that is served by a
Virtual Pool that is defined for RecoverPoint remote protection.

Continuous Data Protection (CDP) uses a local array, therefore providing local protection.
Continuous Remote Replication (CRR), as its name implies, uses a remote virtual array and
pool to provide remote protection. The last option, Continuous Local and Remote (CLR),
provides both local and remote replication with RecoverPoint.

Notice ViPR decides the type of RecoverPoint protection to use based on the RecoverPoint
copies configured in the virtual pool. If only a local copy is configured and selected,
RecoverPoint will perform CDP. If only a remote copy is configured and selected,
RecoverPoint will perform CRR. However, if both local and remote copies are added, CLR
will be performed.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Block Services 33
For RecoverPoint, ViPR provides a service to fail over to the RecoverPoint target copy.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Block Services 34
This lesson covered protecting block data with array-based protection. It also covered
appliance-based protection and HA through RecoverPoint and VPLEX.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Block Services 35
This lab covers configuring VPLEX and RecoverPoint with ViPR.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Block Services 36
This lab covered configuring VPLEX and RecoverPoint with ViPR.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Block Services 37
This module covered using ViPR to provision and manage block storage. In it we defined
block concepts and how to provision and protect block assets.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Block Services 38
Throughout the course we have focused on a ViPR implementation in a green field, where all
the equipment is dedicated to ViPR and has no previous configuration. This module focuses
on what to do when presented with storage that has been previously configured. The topics
of ingesting LUNs and/or onboarding LUNs will be covered. ViPR migration utilities will be
introduced.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Ingestion and Onboarding 1
This lesson covers ingestion in ViPR. The requirements for ingestion are covered as well as
a demonstration on how to perform ingestion.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Ingestion and Onboarding 2
We see two different scenarios in a traditional ViPR environment. In green field
implementations, ViPR is deployed with storage systems that have no volumes or file
systems on them. In this environment, ViPR will create and manage all volumes and file
systems. Any existing volumes or file systems created outside of ViPR will also be managed
outside of ViPR.

However, it is more common for ViPR to be deployed in an existing environment, where
there is a high probability that storage pools, volumes, and file systems have already been
created. Traditionally ViPR would ignore existing volumes. During initial discovery ViPR
registers all the storage pools that are empty. The ViPR administrator can discover other
storage pools manually. If such pools have volumes or file systems created on them, these
will be presented as unmanaged under ViPR.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Ingestion and Onboarding 3
It is important to identify when and why ingestion is necessary. In a brown field
environment the only way to make these existing resources manageable is to discover them
and ingest them.

Another brown field situation that must be addressed is production volumes that have
already been provisioned to hosts. This is not ingestion; it is a different process called
onboarding, which is addressed with a host-based utility called ViPR Migration Services.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Ingestion and Onboarding 4
ViPR Resource Ingestion is a way of bringing existing volumes and/or file systems into a
ViPR instance. This is different from a green field implementation, in which all volumes
and/or file systems are created by ViPR.

Existing block volumes and file systems can be brought under ViPR management by
ingesting them using services provided in the Service Catalog. These services are for use by
System Administrators only and are not visible to normal catalog users.

Once the process of ingestion is complete, the ingested resource is equivalent to a ViPR
created resource, and can leverage all of the ViPR provided benefits.

In order to ingest volumes and/or file systems the storage systems must be re-discovered.
Once the storage systems are re-discovered the volumes and/or file systems that were
found can be ingested.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Ingestion and Onboarding 5
A set of requirements must be met in order to achieve successful ingestion of resources.
Discovery on the storage system and ingestion are two services that are available within the
service catalog. The discovery and ingestion services are visible only to users with the
System Administrator role.

Before ingestion occurs, discovery must take place on the storage system. This process
queries the storage systems for volumes or file systems, which once discovered are tagged
as unmanaged.

To be ingested, the unmanaged volumes or file systems must be in physical pools which are
already associated with a ViPR Virtual Pool.

During the ingestion service a project must be selected. You must belong to the selected
project and have write permission on the project.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Ingestion and Onboarding 6
In order to discover or ingest a resource (file system or volume), log in as a user with the
System Administrator role. From the service catalog, discovery and ingestion can be found
in the File Storage Services category (shown) or the Block Storage Services category.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Ingestion and Onboarding 7
Volumes must be discovered before they are ingested. From the Block Storage Services
category you can select the Discover Unmanaged Volume item. The discovery window is
very simple; it lists all the physical block storage systems available in ViPR. In our example
we have VNX and VMAX systems. Notice we can select specific storage systems for the
discovery or discover volumes on all the storage systems. Once Order is clicked, ViPR
processes the order. In the next screen, the discovery is executed. However, ViPR does not
display the volumes discovered; it only presents a success or error condition.
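
The same discovery can also be triggered through the ViPR REST API instead of the catalog.
A minimal Python sketch, assuming the discover call accepts a namespace query parameter
of UNMANAGED_VOLUMES; the controller address, credentials, and storage-system URN are
placeholders to verify against the REST reference:

    import requests

    VIPR = "https://192.168.1.10:4443"   # placeholder ViPR Controller virtual IP

    login = requests.get(VIPR + "/login", auth=("root", "password"), verify=False)
    token = login.headers["X-SDS-AUTH-TOKEN"]

    # Ask ViPR to scan one storage system for volumes it does not yet manage
    resp = requests.post(
        VIPR + "/vdc/storage-systems/<system_urn>/discover",
        params={"namespace": "UNMANAGED_VOLUMES"},   # assumed namespace value
        headers={"X-SDS-AUTH-TOKEN": token},
        verify=False,
    )
    print(resp.status_code)   # as in the UI, only success or failure is reported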

Copyright 2015 EMC Corporation. All rights reserved. ViPR Ingestion and Onboarding 8
After discovering unmanaged volumes from one or multiple physical arrays, they must be
ingested. Ingestion is a separate service, which can be found in the same category as
discovery (Block Storage Services). Notice that during ingestion you must select one
storage system. When a storage system is selected, the virtual array section is
automatically populated with the virtual arrays where the storage system exists. Likewise,
when a virtual array is selected, the virtual pools drop-down is populated with the virtual
pools that exist within that virtual array.

ViPR automatically ingests unmanaged volumes that are not provisioned to any host. If a
volume is provisioned to a host, ViPR will skip its ingestion.

Using the provided ingestion service you can only ingest into one virtual pool from one
physical array. You will need to run this service multiple times if you must ingest from
multiple arrays. The sketch below shows the equivalent flow at the API level.
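
A sketch of that flow in Python, under the assumption that ViPR exposes a GET
/vdc/storage-systems/{id}/unmanaged/volumes listing and a POST
/vdc/unmanaged/volumes/ingest call; all URNs and credentials are placeholders, and the
endpoint and field names are assumptions to check against the REST reference:

    import requests

    VIPR = "https://192.168.1.10:4443"   # placeholder ViPR Controller virtual IP
    token = requests.get(VIPR + "/login", auth=("root", "password"),
                         verify=False).headers["X-SDS-AUTH-TOKEN"]
    hdrs = {"X-SDS-AUTH-TOKEN": token}

    # List the unmanaged volumes found on one storage system (assumed endpoint)
    vols = requests.get(VIPR + "/vdc/storage-systems/<system_urn>/unmanaged/volumes",
                        headers=hdrs, verify=False).json()
    ids = [v["id"] for v in vols.get("unmanaged_volume", [])]   # assumed field name

    # Ingest them into one virtual array / virtual pool / project (assumed payload)
    body = {"varray": "<varray_urn>", "vpool": "<vpool_urn>",
            "project": "<project_urn>", "unmanaged_volume_list": ids}
    requests.post(VIPR + "/vdc/unmanaged/volumes/ingest",
                  headers=hdrs, json=body, verify=False)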

Copyright 2015 EMC Corporation. All rights reserved. ViPR Ingestion and Onboarding 9
File system discovery is very similar to volume discovery. This service can be reached from
the File Storage Services category. From within the category, select Discover Unmanaged
File Systems. A list of all the file storage systems that ViPR manages is displayed. You can
choose which storage systems to use for discovery. In our example we select all the storage
systems for discovery. When you click Order, the process re-scans each of the storage
systems. When you expand the execution steps after the order succeeds, notice there is one
step per storage system.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Ingestion and Onboarding 10
File system ingestion allows you to add unmanaged file systems into a ViPR environment.
The file systems that are added must not be mounted on any hosts. In order to ingest file
systems, select the storage system which contains the desired file systems. The virtual
arrays available for that storage system show up next; select the appropriate virtual array
and virtual pool. Remember, only the file systems accessible by the virtual pool will be
added. You must also have write or administrative privileges on the selected project.

Once a file system is ingested it can benefit from all the functionality that ViPR provides.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Ingestion and Onboarding 11
The discovery and ingestion process for volumes and file systems doesn't provide
information on which resources were discovered and which were ingested. In order to verify
the new resources available, click on the Resources tab. Notice the resources are grouped
by project. It's important to make sure that you select the appropriate project that you
used when ingesting the resources.

On the left-hand side we have the option of selecting the Block Volumes or the File Systems
that are available in ViPR.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Ingestion and Onboarding 12
The table above shows limitations of block volume ingestion.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Ingestion and Onboarding 13
The table above shows limitations of file system ingestion.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Ingestion and Onboarding 14
This lesson covered ingestion in ViPR. The requirements for ingestion were covered as well
as a demonstration on how to perform ingestion.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Ingestion and Onboarding 15
This lesson covers the process of onboarding LUNs in ViPR. The requirements for
onboarding will be covered as well as an introduction to ViPR Migration Utilities (ViPRmig).

Copyright 2015 EMC Corporation. All rights reserved. ViPR Ingestion and Onboarding 16
The ViPR host migration utility (ViPRmig) is a host-based utility. It can be used to bring a
PowerPath-managed block volume under ViPR control. This can be particularly useful in
brown field implementations where volumes have already been provisioned to hosts and are
in production. Unlike ingestion, ViPRmig does not require that the source array be supported
by ViPR.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Ingestion and Onboarding 17
An alternate use for ViPRmig is moving a ViPR volume from one virtual pool to another. In
this case, ViPRmig leverages ViPR intelligence to exploit VPLEX to perform the migration
when possible. If this is not possible, it reverts to PowerPath Migration Enabler (PPME)
HostCopy, as with onboarding.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Ingestion and Onboarding 18
This lesson covered the process of onboarding LUNs in ViPR. The requirements for
onboarding were also covered as well as an introduction to ViPR Migration Utilities
(ViPRmig).

Copyright 2015 EMC Corporation. All rights reserved. ViPR Ingestion and Onboarding 19
The purpose of this lab is to ingest storage resource from block and file storage systems
into ViPR.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Ingestion and Onboarding 20
This lab covered ingestion of storage resource from block and file storage systems into
ViPR.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Ingestion and Onboarding 21
This module covered what to do when presented with storage that had been previously
configured. The topics of ingesting LUNs and/or onboarding LUNs were covered. ViPR
migration utilities were also introduced.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Ingestion and Onboarding 22
This module focuses on the use cases available for ViPR integration, as well as different
components that can integrate with ViPR.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration Overview 1
This lesson covers an overview of VMware, Microsoft SCVMM, and OpenStack components
integrated with ViPR.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration Overview 2
As we can see in this diagram, the ViPR Controller stands between storage arrays and
different cloud platforms. We can think of ViPR as a translator which enables the cloud
platforms to communicate with different storage arrays through a single interface.
Southbound integration refers to the integration of storage arrays, hosts, connectivity
components such as SAN and IP switches, and various data protection mechanisms.

Northbound integration relates to the communication with cloud platforms such as VMware,
OpenStack, and Microsoft SCVMM. In this class we focus on Northbound integration. We
will analyze the different components and the level of integration they provide.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration Overview 3
In this diagram we can find all the VMware integration components that interact with ViPR.
As we move up, the solutions are more oriented towards enterprise clouds. There are three
basic categories where ViPR integrates with VMware: to provision storage, to categorize
storage, and to monitor and report.

VSI uses vCenter Server. When configured, it allows ViPR to provision storage to individual
ESXi hosts through the vSphere Web Client. The plug-in gives insight into the virtual arrays
and virtual pools used.

The VASA plug-in is not used to provision storage, but to make VMware aware of the
different storage capabilities. Knowing the capabilities allows VMware to create different
profiles which can be used to satisfy the storage needs of virtual machines.

While the VSI plug-in allows a user to provision storage to ESXi hosts, vCenter Orchestrator
creates multiple workflows. These workflows provide more capabilities when provisioning
block and file storage, and perform datastore- and RDM-specific operations.

vCenter Operations Management, commonly referred to as VMware vRealize Operations, is
a monitoring tool. With it we can see a score for all the different components of our
environment. We can identify where performance can be improved and where problems
exist.

vCloud Automation Center, known as vCAC, leverages the workflow capabilities of vCO but
enhances the functionality by allowing massive provisioning of resources.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration Overview 4
VMware vSphere API for Storage Awareness is commonly referred to as VASA. It enables
storage partners to create VMware vCenter providers that allow an administrator to monitor
physical storage topology, capabilities, and state. This makes storage usage for daily
operations more transparent for customers.

With VASA, vCenter communicates with a VASA Provider, which in turn communicates with
a storage array to request the capabilities of a LUN or file system. VASA Providers are
specific to a storage array and must therefore be developed by the storage vendors,
because they use the proprietary API calls of the storage arrays. The storage array replies
to the VASA Provider, which in turn provides the capabilities to vCenter Server.

With the use of storage profiles, VASA enables vCenter Server administrators to specify the
capabilities that they want from specific storage arrays. When virtual machines are
deployed, these profiles can be selected. When this is done, a list of compliant datastores is
displayed, allowing a user to easily identify which datastores meet their criteria.

The VASA Provider can exist as a vCenter plug-in or as a standalone server application.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration Overview 5
Simply put, VASA is a set of APIs that enables storage companies to create vCenter
providers that allow administrators to monitor storage topology, capability, and state, so
storage utilization is more transparent to the customer.

Array integration allows the offloading of processes from the host to the storage array.
Some of these processes include block copy and block zeroing, dead space reclamation and
out-of-space warnings with thin provisioning, and support for NAS and block-based storage.

Storage awareness is provided through information such as storage virtualization, health
status, configuration, capacity, and provisioning. Array features such as snapshots,
deduplication, and RAID type can also be gathered. This information can be used to
integrate with features such as vSphere Storage DRS and Profile-Driven Storage.

Multipathing capabilities are useful since they enable vCenter to integrate with third-party
storage vendor multipathing software. Multipathing can be incorporated as a plug-in that
communicates directly with the array.

Data Protection allows backup capabilities without impacting VM performance, by allowing
backups to take place outside of each specific VM. Full, differential, and incremental
backups and restores are available with this feature. It is important to note that vSphere
Storage APIs for Data Protection is included with all vSphere editions.

The rest of the functionality available with VASA is included with the vSphere Enterprise
Plus edition.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration Overview 6
EMC Virtual Storage Integrator (VSI) allows you to easily provision and manage EMC ViPR
software-defined storage for VMware ESX/ESXi hosts.

VSI is an architecture that enables the features and functionality of EMC ViPR software-
defined storage.

While VASA allows VMware to understand the characteristics of LUNs, and therefore allows
the VMware administrator to create storage profiles, it doesn't provide provisioning. VSI, on
the other hand, is a plug-in that allows a user to provision storage from vCenter by
communicating with the ViPR Controller.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration Overview 7
vCenter Orchestrator is an automation tool designed for users who must streamline tasks
and remediation actions and integrate these functions with third-party IT operations
software. vCO is made up of the following components:

• Workflow designer – Allows a user to drag-and-drop workflow elements to produce
simple and complex workflows.

• Workflow engine – Executes the workflows.

• Built-in workflow library – Contains pre-configured workflows, such as taking snapshots
of a large number of VMs from a particular resource pool, viewing resource utilization,
using building blocks to create new resources, and linking actions to produce specific
workflows.

• Scripting engine – Allows a user to create new building blocks for vCO. It can be used to
build blocks for actions, workflows, and policies.

• Versioning – vCO workflows have version history, packaging, and rollback capabilities.

• Checkpointing content database – Every step of a workflow is saved in a content
database. This allows servers to restart without losing state and context.

• Central management – vCO provides centralized management of processes. Since all the
scripts are centralized, versioning can be enforced.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration Overview 8
VMware vCenter Operations Management Suite automates operations management using
patented analytics and an integrated approach to performance, capacity, and configuration
management. vCenter Operations Management Suite enables IT organizations to get
greater visibility and actionable intelligence to proactively ensure service levels, optimum
resource usage, and configuration compliance in dynamic virtual and cloud environments.

The role of operations management is to ensure and restore service levels while
continuously optimizing operations for efficiency and cost. Dynamic virtual and cloud
infrastructure present new challenges to infrastructure and operations teams, including a
larger number of VMs to manage, unpredictable workload demands due to resources
provisioned through self-service portals and converged architectures with different metric
requirements.

VMware vRealize Operations provides self-learning analytics that enable a higher degree of
automation. The integrated approach to performance, capacity, and configuration
management combines different disciplines to truly identify performance issues. VMware
vRealize Operations delivers a comprehensive set of management capabilities, which include
performance, capacity, change, configuration, and compliance management; application
discovery and monitoring; and cost metering.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration Overview 9
vCenter Operations Management Suite includes five components.

vCenter Operations Manager is the foundation of the suite. It provides the operations
dashboards, performance analytics, and capacity optimization capabilities needed to gain
comprehensive visibility, proactively ensure service levels, and manage capacity in dynamic
virtual and cloud environments.

vCenter Configuration Manager automates configuration management across virtual and
physical servers, increasing efficiency by eliminating manual, error-prone, and time-
consuming work. This automation enables enterprises to maintain continuous compliance
by detecting changes and enforcing IT policies, regulatory requirements, and security-
hardening guidelines.

vCenter Hyperic monitors physical hardware resources, operating systems, middleware, and
applications. Because vCenter Hyperic is tightly integrated with vCenter Operations
Manager, you can manage virtual and physical infrastructure and business-critical
applications with a single suite.

vCenter Infrastructure Navigator automatically discovers and visualizes application and
infrastructure dependencies. It provides visibility into the application services running over
the virtual-machine infrastructure and their interrelationships for day-to-day operational
management.

vCenter Chargeback Manager enables accurate cost measurement, analysis, and reporting
of virtual machines, providing visibility into the actual cost of the virtual infrastructure
required to support business services.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration Overview 10
VMware vCloud Automation Center provides the agility your business needs by automating
the delivery of personalized IT services. It provides a single solution for application release
automation, and support for various DevOps automation tools, abstracted from diverse
infrastructure services. Through a self-service catalog, users request and manage a wide
range of multi-vendor, multi-cloud applications, infrastructure, and custom services. Policy-
based governance assures that users receive the right size resources for the task that
needs to be performed across the service lifecycle.

A flexible automation approach provides agility in deploying new IT services while
leveraging existing investments by mapping into the current infrastructure, processes, and
environments.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration Overview 11
vCAC has several capabilities that demonstrate the value of deploying an automated, on-
demand cloud infrastructure.

vCloud Automation Center is a purpose-built, enterprise-proven solution for the delivery
and ongoing management of private and hybrid cloud services, based on a broad range of
deployment use cases from the world's most demanding environments.

It also enables users to apply their own way of doing business to the cloud without
changing organizational processes or policies. Enterprises gain the flexibility needed for
business units to have different service levels, policies, and automation processes, as
appropriate for their needs.

vCAC accelerates application deployment by streamlining the deployment process and
eliminating duplication of work using reusable components and blueprints.

It automates the end-to-end deployment of multi-vendor infrastructure, breaking down
internal organizational silos that slow down IT service delivery.

vCloud Automation Center provides a full spectrum of extensibility options that empower IT
personnel to enable, adapt, and extend their cloud to work within their existing IT
infrastructure and processes, thereby eliminating expensive service engagements while
reducing risk.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration Overview 12
ViPR provides integration with third-party components from Microsoft and OpenStack. The
Microsoft System Center suite includes Virtual Machine Manager, a management solution
through which hypervisors and virtual machines can be managed.

Additionally, ViPR has a Cinder driver for OpenStack. OpenStack is an open source cloud
solution originally developed by NASA and Rackspace. Cinder is the block storage service
for OpenStack; by default it is designed to use an LVM to present block storage resources to
end users. This storage can be consumed by the OpenStack Compute project, known as
Nova.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration Overview 13
Microsoft System Center cloud and datacenter management solutions provide a common
management toolset for private and public cloud applications and services. System Center
provides automation of repeatable tasks, cross-platform interoperability, and the creation and
deployment of modern, self-service, and highly available applications that can span datacenters.

System Center delivers capabilities such as management of Windows Server infrastructure
and first-party Microsoft workloads. SAN-based storage management technologies allow the
virtualization of the most demanding workloads.

Web based interfaces enable easy integration with existing management investments such
as portals. System Center also focuses on optimizing applications and workloads through
their lifecycle. Rich diagnostics and insight facilitate predictable application SLAs, including
the ability to elastically scale applications.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration Overview 14
Virtual Machine Manager (VMM) is a management solution for the virtualized data center. It
can be used to configure and manage your virtualization hosts, networking, and storage
resources in order to create and deploy virtual machines and services to private clouds.

A Virtual Machine Manager installation consists of the following components:

• VMM management server – Host where the VMM service runs; it processes commands and
controls communication between the database, the library server, and the VM hosts.

• VMM database – Microsoft SQL Server database that stores VMM configuration information.

• VMM console – Program used to connect to the VMM management server to centrally view
and manage resources.

• VMM command shell – PowerShell-based command shell that makes available cmdlets that
perform VMM functions.

The following resources in VMM are referred to as fabric, and you must configure them
before you can deploy virtual machines and services. You can use VMM to configure and
manage the following fabric resources.

Virtual machine hosts are where VMs will be deployed. They can be Microsoft Hyper-V,
Citrix XenServer, and VMware ESXi hosts or clusters. Host groups can be created to
organize hosts.

Networking resources include logical networks, IP address pools and load balancers.

Storage resources such as storage classifications, logical units, and storage pools are
required for hosts or clusters.

Library servers and shares provide a catalog of resources used to deploy VMs and services.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration Overview 15
OpenStack is a cloud computing platform intended to meet the needs of public and private
clouds regardless of size, by being simple to implement and massively scalable.

OpenStack provides an Infrastructure-as-a-Service (IaaS) solution through a set of
interrelated services. Each service offers an application programming interface (API) that
facilitates this integration. Depending on your needs, you can install some or all services.

OpenStack can be divided into three areas: OpenStack Compute, called Nova; OpenStack
Networking, known as Neutron; and, within storage, OpenStack Object Storage, called
Swift, and OpenStack Block Storage, called Cinder.

The OpenStack dashboard (known as Horizon) provides a web-based self-service portal to
interact with underlying OpenStack services, such as launching an instance, assigning IP
addresses, and configuring access controls.

Nova manages the lifecycle of compute instances in an OpenStack environment.
Responsibilities include spawning, scheduling, and decommissioning of virtual machines on
demand.

Neutron (networking) enables network connectivity as a service for other OpenStack
services, such as OpenStack Compute. It provides an API for users to define networks and
the attachments into them. Additionally, it has a pluggable architecture that supports many
popular networking vendors and technologies.

On the storage side, Swift stores and retrieves arbitrary unstructured data objects via a
RESTful, HTTP-based API. It is highly fault tolerant with its data replication and scale-out
architecture. Its implementation is not like a file server with mountable directories.

Cinder provides persistent block storage to running instances. Its pluggable driver
architecture facilitates the creation and management of block storage devices.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration Overview 16
In 2010, Rackspace and NASA founded the OpenStack Project by combining open source object
based storage software (Swift) with open source compute software (Nova). Their goal was to
create an open source cloud operating system that was an alternative to proprietary cloud
platforms and supported by a community.

In 2012, NASA reduced its contribution to the OpenStack Project. Later that year, the
OpenStack Foundation was formed. The OpenStack Foundation is an independent body that
provides shared resources to help achieve the OpenStack mission by protecting,
empowering, and promoting OpenStack software and the community around it, including
users, developers, and an entire ecosystem. The OpenStack Foundation has formal bylaws
and is governed by a board of directors, technical committee, and user committee.

Today, Rackspace is still a platinum-level sponsor of the OpenStack Foundation. EMC,
VMware, and VCE have joined the OpenStack Foundation as sponsors and actively
contribute to development. The community is supported by thousands of individuals and
hundreds of companies from around the world. The OpenStack community has increased to
over 9,000 members and over 200 supporting companies. Recently, major companies such
as Dell, HP, and IBM have embraced OpenStack.

Since OpenStack is a modular framework, it offers a scalable and flexible method to
implement a cloud. Enterprises or service providers have the option to deploy the entire
OpenStack release or just specific components. Open APIs make it possible to customize or
replace certain components and still provide a fully functional IaaS delivery solution. The
pluggable architecture of OpenStack supports multiple hypervisors, bare-metal servers,
network infrastructure, and storage components, thus adding to its flexibility and
scalability. Since OpenStack is open source and governed under Apache 2.0 licensing rules,
it is available at no cost. This helps to reduce the cost of ownership of a cloud solution.
Since OpenStack works with many infrastructure options, it allows enterprises or service
providers to obtain the most cost-effective solution for their cloud.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration Overview 17
Although it is hard to predict what will happen to OpenStack in the future, this is what we know.
The OpenStack Foundation operating budget is currently between $4 and $5 million. Since it
began in 2010, the OpenStack community has increased to over 9,000 members and over 200
supporting companies. Recently, major companies such as Dell, HP, and IBM have embraced
OpenStack. Linux vendors, such as Red Hat, SUSE, and Canonical (Ubuntu), are delivering
OpenStack distributions for their operating systems. VMware has joined the OpenStack
Foundation as a gold-level sponsor. Storage vendors, such as EMC and NetApp, are on board
and sponsors. Big name companies, such as AT&T, Cisco WebEx, PayPal, Sony PlayStation
Network, and the US Department of Energy, use OpenStack.

The list of contributors and users is growing. Development and adoption are happening faster
than the adoption of the open source Linux operating system. This type of momentum is hard
to ignore and implies that OpenStack will continue to be a viable cloud platform in the future.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration Overview 18
The OpenStack Compute Project is a collection of APIs, operating system layer services, and
plug-ins that provision and manage virtual machine instances and control the underlying
compute resources. The operating system layer services are distributed throughout the cloud
environment and perform various functions, such as determining virtual machine instance
placement, tracking instance state, and enforcing quotas. OpenStack Compute is not a
virtualization application. Through plug-ins, it supports deploying virtual machine instances on
multiple hypervisors, such as XenServer, KVM, ESXi, and directly on bare-metal hardware.
OpenStack Compute provides APIs to both users and administrators for management functions.
In previous versions, Compute included services for network and volume management. These
services have been separated into different projects dedicated to the management of these
resources. Compute uses API calls to other OpenStack services for network connectivity,
storage, authentication, and other requirements.

The goal of OpenStack Compute is to provide a standard interface to deliver virtual machine
instances, regardless of the underlying hypervisor or physical server technology.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration Overview 19
When accessing the OpenStack Block Storage service, end users use the Dashboard service or
a CLI to send a command to the Cinder API; for example, Create a Volume. The Cinder API
then sends the command to an OpenStack block storage driver, which is developed by a
storage vendor. The volume is then created on the back-end storage.

Block storage volumes are fully integrated into OpenStack Compute and the Dashboard,
allowing cloud users to manage their own storage needs.
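
As a concrete example of this request path, creating a volume through the Cinder v2 REST
API is a single authenticated POST. A minimal Python sketch; the endpoint, tenant ID, and
Keystone token are placeholders:

    import requests

    CINDER = "http://controller:8776/v2/<tenant_id>"   # placeholder Cinder endpoint
    TOKEN = "<keystone_token>"                         # placeholder auth token

    # Create a 1 GB volume; Cinder routes the request to a back-end driver,
    # which in a ViPR deployment would be the ViPR Cinder driver
    resp = requests.post(
        CINDER + "/volumes",
        headers={"X-Auth-Token": TOKEN},
        json={"volume": {"size": 1, "name": "demo-vol"}},
    )
    vol = resp.json()["volume"]
    print(vol["id"], vol["status"])   # status starts as "creating"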

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration Overview 20
The OpenStack Block Storage service works through the interaction of a series of daemon
processes named cinder-* that reside persistently on the host machine or machines. The
binaries can all be run from a single node, or spread across multiple nodes. They can also
be run on the same node as other OpenStack services.

OpenStack Block Storage enables you to add extra block-level storage to your OpenStack
Compute instances. This service is similar to the Amazon EC2 Elastic Block Storage (EBS)
offering.

The default OpenStack Block Storage service implementation is an iSCSI solution that uses
the Logical Volume Manager (LVM) for Linux. The OpenStack Block Storage service is not a
shared storage solution like a Storage Area Network (SAN) or NFS volumes, where you can
attach a volume to multiple servers. With the OpenStack Block Storage service, you can
attach a volume to only one instance at a time.

The OpenStack Block Storage service also provides drivers that enable you to use several
vendors' back-end storage devices, in addition to or instead of the base LVM
implementation. The ViPR Cinder driver allows users to provision storage through
OpenStack on ViPR systems.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration Overview 21
In the example of creating a volume, the request is sent to the Cinder API. The request is then
routed through the Cinder Message Bus to the Cinder-Scheduler service. Using a set of filters,
the scheduler determines which Cinder-Volume to use for the request. The Cinder-Volume will
then pass the request to the back-end storage through its native API. Other volume commands
may be sent directly to the Cinder-Volume service.

Cinder runs on one or multiple storage nodes (servers with connectivity to back-end storage)
and manages the storage space. The scheduler service will distribute requests across all nodes
that are running the volume service.

Additionally, the Cinder-Backup service provides a means to back up a Cinder-Volume to an
OpenStack Object Store.

A Cinder-Volume service is broken down into a volume manager and storage driver, and is
controlled by a single executable called Cinder-Volume. Each array will have at least one
dedicated Cinder-Volume. A single array may have multiple Cinder-Volumes, if separate
pools exist on the array.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration Overview 22
OpenStack Object Storage (code-named swift) is open source software for creating redundant,
scalable data storage using clusters of standardized servers to store petabytes of accessible
data. It is a long-term storage system for large amounts of static data that can be retrieved,
leveraged, and updated. Object Storage uses a distributed architecture with no central point of
control, providing greater scalability, redundancy, and permanence. Objects are written to
multiple hardware devices, with the OpenStack software responsible for ensuring data
replication and integrity across the cluster. Storage clusters scale horizontally by adding new
nodes. Should a node fail, OpenStack works to replicate its content from other active nodes.
Because OpenStack uses software logic to ensure data replication and distribution across
different devices, inexpensive commodity hard drives and servers can be used in lieu of more
expensive equipment.

Object Storage is ideal for cost effective, scale-out storage. It provides a fully distributed, API-
accessible storage platform that can be integrated directly into applications or used for backup,
archiving, and data retention. Characteristics of object storage are listed above.

Developers can either write directly to the Swift API or use one of the many client libraries
that exist for all of the popular programming languages, such as Java, Python, Ruby, and
C#. ViPR leverages existing API commands from Swift to manage ViPR Object Data
Services.
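
Because the Swift API is plain HTTP, writing an object needs only a single PUT. A minimal
Python sketch against a Swift-compatible endpoint such as the ViPR Object Data Service;
the endpoint URL, account, container, and token are placeholders:

    import requests

    SWIFT = "http://object.example.com:9024/v1/AUTH_demo"   # placeholder endpoint
    TOKEN = "<auth_token>"                                  # placeholder token

    # Create (or replace) an object in a container with one HTTP PUT
    resp = requests.put(
        SWIFT + "/mycontainer/hello.txt",
        headers={"X-Auth-Token": TOKEN},
        data=b"hello object storage",
    )
    print(resp.status_code)   # 201 Created on success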

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration Overview 23
This lesson covered VMware, Microsoft, and OpenStack components that will be integrated
with ViPR.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration Overview 24
This lesson covers use cases for the ViPR integration component.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration Overview 25
ViPR enables storage-as-a-service capabilities for organizations. Today we begin to
understand the advantage and the big picture of the ViPR solution and realize the
magnitude of its capabilities. While many organizations are trying to move to a cloud
infrastructure by leveraging VMware as their transport, they have run into several problems
on the storage side.

With ViPR this problem is addressed by allowing homogeneous provisioning of
heterogeneous storage resources. All the storage arrays of an organization can be
discovered under ViPR, providing a single point of provisioning. Additionally, since
provisioning takes place from a centralized location, it enables a multi-tenant, metered,
as-a-service model.

While the previous benefit alone satisfies storage-as-a-service and simplification of the
storage layer, ViPR takes it a step further with the implementation of data services. With
data services ViPR touches the data plane, allowing storage to morph the presentation of
data to meet the many needs of its users.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration Overview 26
In the previous slide we identified the benefits of ViPR and how it enables the journey to
the cloud. While the two benefits described are impressive, we have focused thus far on the
storage portion of the equation. However, cloud management should ease end-to-end data
center management.

Here is where the third ViPR component, and the focus of our training, becomes important.
ViPR Integration and Extension manages what happens above the ViPR Controller. With
ViPR we combine the ease of provisioning through a single interface with the capability of
doing so from VMware vCenter, Microsoft SCVMM, or OpenStack.

ViPR enables users to control the entire data center from a single location. Additionally, with
API capabilities ViPR presents a framework for doing application-specific storage service
provisioning in an advanced cloud model: one storage framework, many services.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration Overview 27
Initial target customers for ViPR will fall into one of two categories: Enterprise IT
departments and Managed Service Providers (MSP).

ViPR is well suited to meet two broad use cases for Enterprise IT departments, automating
the storage provisioning process for storage administrators and enabling Enterprise IT to
deliver a self-service solution, via IT-as-a-Service (ITaaS).

For Managed Service Providers who already provide automation and IT-as-a-Service, ViPR is
multi-tenant ready for self-service, policy-based provisioning of storage. It enables the
implementation of policies that meet individual tenant needs for physical isolation of their
data.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration Overview 28
In our first use case we have an organization with a traditional data center on their journey
to the private cloud. We can see they have started their journey with ESXi hosts managed
by vCenter Server. Moving away from physical hosts has allowed them to maximize
resource utilization while easing scalability requirements.

The organization has bought different types of storage arrays throughout the years to
satisfy their demand for storage. Today storage requirements are not satisfied by a one-
size-fits-all solution, therefore working towards homogenizing storage arrays is not a viable
option. Additionally, the storage served is replicated and distributed to a different
datacenter with VPLEX and RecoverPoint.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration Overview 29
As our data center expands, it becomes harder to manage the different heterogeneous
connectivity devices, storage arrays, and data protection appliances with their own native
tools. Reporting only shows the storage perspective, making it harder to look at the big
picture to troubleshoot and improve data center performance.

As the data center is run by different teams, problems in communication arise: storage
administrators want specifics on storage requirements while the server and application
teams focus on other priorities.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration Overview 30
With ViPR, we can focus the storage team on optimizing resources and carefully configuring
virtual arrays and virtual pools based on the different classes of service required by the
organization. Their roles will move away from repetitive, exhaustive configurations to more
planning and monitoring roles.

Errors due to manual intervention will be significantly reduced in the connectivity devices
and storage arrays thanks to ViPR automation processes.

The integration power of ViPR can be seen when we leverage in ViPR the storage vendors'
proprietary APIs that have been created for OpenStack. This allows us to manage not only
storage arrays that can be natively discovered by ViPR but also those that can be managed
through OpenStack. This allows management of legacy arrays for which native ViPR
integration would not otherwise be possible.

We also transfer the capability of identifying the best storage for a user's needs through
storage profiles created with VASA. The ViPR plug-in for VASA allows vCenter to focus on
the virtual arrays and virtual pools without having to consider the underlying infrastructure.

Additionally, we can provision from vCenter Server with VSI, vCO, and vCAC, depending on
the requirements of the datacenter. With these integration components we can easily
manage all the components of a data center from a single location without sacrificing
compatibility and functionality.

Lastly, we can use VMware vRealize Operations to get real-time performance information
for all the components in the data center, including ViPR.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration Overview 31
In our second use case, an organization has invested heavily in automating many tasks in
their data center through vendor-specific APIs. They managed to automate hosts with
vCenter APIs, which enables them to orchestrate and automate common tasks.

Additionally, they invested in automating the switches and storage arrays in the
environment. This automation is accomplished by working with the switch and storage
vendor APIs.

The automation worked very well for some years; however, as time passes it is time for
technology refreshes, and the organization now finds itself in a vendor and model lock-in,
since the APIs used for automation are proprietary. Changes such as upgrading Cisco code
from SAN-OS to NX-OS also cause problems with the integration.

This model is difficult to maintain, as future technology refreshes would require changes in
the automation API.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration Overview 32
The ViPR architecture provides a new set of integration capabilities using APIs. With ViPR,
we have one storage framework for many services. This means the organization would only
have to develop using a single set of APIs that would enable it to provision storage through
the ViPR Controller.
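
As an illustration of that single API surface, here is a hedged Python sketch that
authenticates to the ViPR Controller and requests a block volume over REST. The call shape
follows the ViPR REST pattern (GET /login returning an X-SDS-AUTH-TOKEN header, POST
/block/volumes); the address, credentials, and URNs are placeholders, and the payload
fields should be checked against the REST reference:

    import requests

    VIPR = "https://192.168.1.10:4443"   # placeholder ViPR Controller virtual IP

    # Authenticate; ViPR hands back a token in the X-SDS-AUTH-TOKEN header
    token = requests.get(VIPR + "/login", auth=("root", "password"),
                         verify=False).headers["X-SDS-AUTH-TOKEN"]

    # Order one 10 GB volume from a virtual array / virtual pool / project
    body = {"name": "app_vol_01", "size": "10GB", "count": 1,
            "varray": "<varray_urn>", "vpool": "<vpool_urn>",
            "project": "<project_urn>"}
    resp = requests.post(VIPR + "/block/volumes",
                         headers={"X-SDS-AUTH-TOKEN": token},
                         json=body, verify=False)
    print(resp.status_code)   # ViPR returns tasks that can be polled to completion

The same call works unchanged whichever physical array ViPR selects behind the virtual
pool, which is the point of the single framework.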

When it comes time for technology refreshes, new heterogeneous storage arrays can be
discovered with ViPR, allowing them to migrate to new storage arrays without having to
change the integration they have programmed.

As for the connectivity layer, ViPR automates zone creation that eliminates the need to
create switch-specific integration.

The ESXi hosts can be automated through VMware solutions such as vCO and vCAC which
all have ViPR plug-ins.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration Overview 33
This lesson covered use cases for the ViPR integration component.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration Overview 34
This module covered use cases available for ViPR integration, as well as different
components that can integrate with ViPR.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration Overview 35
Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration Overview 36
This module focuses on ViPR integration with VMware products. We will cover VSI for
vSphere, ViPR plug-in for vRealize, and ViPR Enablement Kit for vCAC.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 1
In this diagram, we see how ViPR interacts with different VMware components. Notice the
ViPR UI is used to communicate with the ViPR Controller for provisioning operations.
Additionally, the ViPR APIs or the CLI can also communicate with the ViPR Controller.

The ViPR Controller includes integration with VASA. We can see how VASA is used to
provide storage capabilities to vCenter.

Within vCenter we have VSI and vRealize. VSI allows a user to provision storage for VMs
within vCenter. vRealize automates VM management and integration processes by creating
workflows. With the ViPR plug-in, these workflows can be used to provision storage through
ViPR.

On top of vCenter there is vCloud Automation Center, which extends vRealize and provides
service-based provisioning.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 2
ViPR comes with a number of components that allow for the provisioning and control of
ViPR storage natively from VMware management interfaces. Users of vCenter Server,
vCenter Operations Manager, and vRealize can work with ViPR storage through their native
user interface, without having to use the ViPR UI. The integration components, which are
optional and cannot function without the ViPR Controller, include:

• ViPR Storage Provider for VMware vCenter (VASA Provider)

• Virtual Storage Integrator for vSphere Web Client (VSI Plug-in)

• ViPR plug-in for VMware vRealize (vRealize Plug-in)

• ViPR Enablement Kit for vCloud Automation Center (vCAC)

• ViPR Analytics Pack for VMware vCenter Operations Management Suite (VMware vRealize
Operations Analytics Pack)

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 3
The documentation listed here is available on the support.emc.com website. It contains
installation and configuration guides for ViPR integration with VMware.

For specific instructions on VMware products, refer to the VMware support pages.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 4
This lesson covers storage provider preparation, registration, and provisioning.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 5
The ViPR Storage Provider integrates ViPR with vCenter Server; it sits between the storage
arrays and vCenter, providing the latter with storage information such as class of service,
size, and volume World Wide Name. The Storage Provider enables vCenter administrators to
perform policy-based management of ViPR storage, such as mapping class of service
against storage capabilities and associating VM storage profiles with those capabilities. It
also lets vCenter administrators view events and alarms originating in ViPR.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 6
The ViPR Storage Provider conforms to the VMware VASA Provider 1.0 specification. Many
other EMC products have VASA providers such as VNX, VMAX, VPLEX and Isilon. The
Storage Provider works in a multi-node ViPR configuration.

The Storage Provider is started as a service in the Controller when ViPR is deployed in the
datacenter. The vCenter Server communicates with the Storage Provider, which
communicates with underlying storage arrays (in this case, ViPR). In the eyes of vCenter
Server ViPR is a huge storage array. There is no visibility into the physical arrays behind
ViPR.

The table shows the information provided to VASA. The storage array and processor are the
ViPR VDC. The alarms are any alerts available on ViPR.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 7
In order to prepare ViPR for the Storage Provider it is important to verify the service is
running on the ViPR Controller. Additionally, the Service End Point Reference (EPR) must be
copied, to be used when configuring vCenter for the Storage Provider.

To confirm the Storage Provider service is running on the ViPR Controller you must have a
ViPR System Role, such as the root user, and you must know the virtual IP address of the
ViPR Controller. Log in to ViPR and make sure you are on the Admin tab. Select System
and then System Health. Verify the vasasvc (VASA service) is listed as running. The check
mark next to the status confirms this.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 8
The Storage Provider Service EPR is the link used for communication between vCenter
Server and the ViPR Controller. To determine the EPR, enter the following address in a browser:
https://<vipr_controller_virtual_ip_address>:9083/storageos-vasasvc/services/listServices
An Apache services page will display information about the services available through the
provider. More importantly, a Service EPR will be displayed. That address must be copied,
as it will be required for further configuration. In our example the Service EPR is
https://10.127.55.205/storageos-vasasvc/services/vasaService
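
This lookup can also be scripted. A minimal Python sketch that fetches the listServices page
from the example above and prints the line(s) naming the Service EPR; certificate
verification is disabled on the assumption that the controller presents a self-signed
certificate:

    import requests

    VIP = "10.127.55.205"   # ViPR Controller virtual IP from the example
    url = "https://" + VIP + ":9083/storageos-vasasvc/services/listServices"

    # The page is plain HTML; scan it for the line that names the vasaService EPR
    page = requests.get(url, verify=False).text
    for line in page.splitlines():
        if "vasaService" in line:
            print(line.strip())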

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 9
Once we have the information needed to register the storage provider, we can do so from
the vSphere Client or the vSphere Web Client. In this screen we see the vSphere Web
Client. Log in to the vSphere Web Client with root credentials and, from the navigator,
select vCenter > vCenter Servers and click on your local vCenter Server. From the top tabs
select Manage, and in the tabs underneath it select Storage Providers. Initially there should
be no storage providers listed in your vCenter Server; click the plus icon to add a new
storage provider.

The storage provider requires a descriptive name, a URL (which in our case is the EPR from
the previous screen), and a username and password for a user that has the monitor role on
vCenter Server. In our example we are selecting the admin user. When all the information
is provided, click OK and wait for the Storage Provider to load.

Once the Storage Provider appears in the list, click Refresh and verify the status of the
storage provider is Online. The storage provider is now configured with vCenter Server.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 10
Once the storage provider is listed as online, you can click it and verify its configuration.
Make sure the provider is online. The provider version is listed, as well as the VASA API
version used with the provider.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 11
With the release of vCenter Server 5.5, Storage Profiles were renamed Storage Policies.
To access the Storage Policy section, go to the vSphere Client's home and, from monitoring,
select Storage Policies. The Storage Policies window is very simple: it contains only a list of
the storage policies that have been created. Before creating storage policies it is necessary
to enable VM Storage Policies on the resources (the ESXi hosts) that will be using them. You
can select a vCenter server from the drop-down list and its ESXi hosts will be displayed; in
our example we selected all the hosts and clicked Enable. Once enabled, the VM Storage
Policy status is shown as active. Once these have been activated, click Close to close the
window.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 12
To view the storage capabilities available in ViPR from vCenter Server, you must create new
storage policies, associate virtual pools created in ViPR with these storage policies, and
create virtual machines (VMs) with these storage policies. You can create a new VM storage
policy that will be enforced on the hosts that were enabled. When the create new VM
storage policy icon is selected, the policy asks for a name and a description. Once this is
entered, the wizard moves on to rule sets. You can add multiple rule sets with different
characteristics that will be used to validate storage resources.

In our example, we are adding a rule that states the storage used should be provisioned
from the File virtual array and pool created on ViPR. The next window, although not visible
here, shows all the datastores that are compatible with the storage selected.

Clicking Next displays an overview of your storage policy; click Finish to complete it. This
storage policy can now be enforced when creating VMs.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 13
During the traditional VM creation process, the "Select storage" step now displays an entry
for VM Storage Policies. This drop-down list contains all the policies that exist; once a policy
is selected, it filters the available datastores. Notice that datastores are differentiated
between compatible and incompatible. In our example we have one datastore that is
compatible with the storage policy we created.

The other datastores are available to the ESXi server where the VM will be deployed, but
are not compatible with our ViPR File storage requirement. You can still deploy the VM to
storage that is not compatible, but you will be warned when doing so.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 14
This lesson covered storage provider preparation, registration, and provisioning.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 15
The purpose of this lab is to demonstrate how to install and configure the ViPR Storage
Provider for vCenter Server.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 16
This lab demonstrated how to install and configure the ViPR Storage Provider for vCenter
Server.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 17
This lesson covers Virtual Storage Integrator. It shows us how to install and configure VSI
and register and manage the ViPR storage system with VSI.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 18
The Virtual Storage Integrator (VSI) is an EMC developed plug-in for vCenter Server that
allows for the provisioning, monitoring and management of VMware vSphere Datastores on
EMC storage arrays directly from vCenter, greatly simplifying management of the virtualized
environment.

Version 6.0, known as the EMC VSI Plug-in for vSphere Web Client 6.0, introduced an
architecture that makes the features and functionality of ViPR storage available in vCenter;
it enables you to view the properties of ViPR storage systems, NFS datastores, VMFS
datastores, and RDM volumes. It also allows for the provisioning of NFS and VMFS
datastores on ViPR storage, and for the creation of a ViPR-hosted RDM attached to a VM.

With the release of ViPR 2.0, a new VSI plug-in was released to support it: VSI for VMware
vSphere Web Client 6.2.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 19
In this diagram we can see the architecture of a typical deployment of VSI for VMware
vSphere Web Client.

The VSI 6.1 plug-in has a Storage Access Control feature. With it, a storage administrator
can enable virtual machine administrators to perform management tasks on a set of
storage pools. The current version of VSI supports a wide range of EMC storage systems
and features.

For EMC ViPR, it supports viewing the properties of NFS and VMFS datastores and RDM
volumes. It also supports provisioning NFS and VMFS datastores and RDMs.

With the VNX series for ESX/ESXi hosts, VSI supports viewing the properties of NFS and
VMFS datastores and RDM volumes; provisioning NFS and VMFS datastores and RDM
volumes; compressing and decompressing storage system objects on NFS and VMFS
datastores; enabling and disabling block deduplication on VMFS datastores; and creating
fast clones and full clones of virtual machines on NFS datastores.

For VMAX storage arrays, VSI allows administrators to view the properties of VMFS
datastores and RDM volumes and to provision VMFS datastores and RDM volumes.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 20
These are the requirements for VSI.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 21
This is a short summary of what you need to do to prepare your VSI Web Client
environment. ViPR support for VSI is provided by deploying the Solutions Integration
Service virtual appliance. The Solutions Integration Service enables communication
between the vSphere Web Client and the ViPR appliance.

The Solutions Integration Service is a no-charge item and is downloaded from
support.emc.com.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 22
The EMC Solutions Integration Service is a vApp that must be deployed and registered with
the vCenter server.

The vSphere Web Client IP address and credentials, and the location of the OVA/OVF files,
are required.

The IP address, gateway, DNS server, netmask, and host or resource pool are required
when deploying the vApp.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 23
The Solutions Integration Service can be downloaded free of charge from
http://support.emc.com. The download can be found under VSI for VMware vSphere Web
Client.

To install the Solutions Integration Service, go to the vSphere Web Client, select vCenter >
VMs and Templates (or Hosts and Clusters), and from here select Actions and Deploy OVF
Template.

If you are prompted to allow the VMware Client Integration plug-in, allow it.

As the source, select the Solutions Integration OVA and click Next. The details and End User
License Agreement will be displayed; click Next again. Select the ESXi host where the
Solutions Integration Service will be installed, as well as the storage and network.

In the Customize Template step you will be prompted for the public network IP, gateway,
DNS server, and netmask to be assigned to the Solutions Integration Service. This is the
image we can see in our slide. Verify the installation details and complete the wizard; make
sure to have the vApp power on once the deployment is complete.

Once the Solutions Integration Service has started, verify the REST web service is available
by checking the following link:

https://<Solutions Integration Service IP>:8443/vsi_usm

If prompted with certificate warnings and exceptions, accept them; you should see the VSI
Solutions Integration Service loaded.
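
The same verification can be scripted. This is a minimal sketch, assuming the Python requests library and the appliance's self-signed certificate; the IP address is a placeholder.

import requests
import urllib3

urllib3.disable_warnings()  # self-signed certificate on the appliance

sis_ip = "192.0.2.10"  # placeholder; use your Solutions Integration Service IP
url = "https://" + sis_ip + ":8443/vsi_usm"

try:
    # A 200 response indicates the VSI Solutions Integration Service is loaded.
    response = requests.get(url, verify=False, timeout=10)
    print("Reachable, HTTP status:", response.status_code)
except requests.exceptions.ConnectionError as error:
    print("Service not reachable:", error)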

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 24
Once the Solutions Integration Service is deployed, the VSI plug-in must be registered with
a vCenter Server. To manage the Solutions Integration Service, point a browser to
https://<Solutions Integration Service IP>:8443/vsi_usm/

The login page is displayed; to access it, use admin as the username and the default
password, ChangeMe.

Once logged in, select Register VSI Plug-in, provide the requested vCenter Server
information, and click Register.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 25
Now that we have matched the Solutions Integration Service with a vCenter Server, we
need to register the Solutions Integration Service within vCenter Server through the
vSphere Web Client. Log in to the vSphere Web Client and navigate to Home > vCenter. At
the bottom of the vCenter Server pane, notice the EMC VSI tab; within it you can click the
Solutions Integration Service.

Select Actions and click Register EMC Solutions Integration Service. Type the
credentials of the Solutions Integration Service and click OK to register.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 26
VSI allows you to register VNX, VMAX, or ViPR systems as storage systems. To do so, from
within the VSI menu select Storage Systems. On the top right there is a drop-down menu
called Actions; in it, select Register Storage Systems. A dialog opens asking for the storage
system type, IP address, and the credentials used to connect to it.

When the registration is complete, vCenter Server will have a greater understanding of what
storage resources are provisioned through ViPR and can give an administrator better insight
into their ViPR environment.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 27
When ViPR is registered as a storage system it appears on the top right. When you click it,
a summary of information is provided, including the name, type of storage, and
management IP address. Additionally, the Actions menu allows you to register, unregister,
or refresh the registered storage system.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 28
Now that vCenter Server has more insight into storage provided by ViPR, we can check a
datastore that was provisioned through the ViPR UI.

In the vSphere Web Client, go home, select vCenter, and select Storage. The datastores
within the datacenter are listed; when a datastore provisioned through ViPR is selected,
notice the Storage Device and Storage System sections.

The Storage Device section contains all the information used when provisioning the
datastore. From there you can see the virtual array and project it belongs to, its capacity,
and its virtual pool. The description provided for the virtual pool is also displayed.

The Storage System section shows the IP address of the ViPR Controller and lists the
storage system model as ViPR. This information is very valuable when connecting the dots
between storage provisioned by ViPR and the servers that are using it.

Additionally, when volumes are provisioned as RDMs to virtual machines, you can select the
virtual machine and, under Monitor, click EMC Storage Viewer. This tab shows
information about the storage system and storage devices too.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 29
VSI also allows us to provision ViPR storage directly through the vSphere Web Client. In this
example we will provision a datastore by going to vCenter > Hosts. Here we right-click a
host and find a new menu called All EMC VSI Plug-in Actions. The menu allows a user to
deploy a New EMC Datastore. The new datastore can be either an NFS datastore or a VMFS
datastore.

Alternatively, RDMs can also be provisioned directly to virtual machines. To provision to a
VM, simply go to the inventory and select Virtual Machines. Right-click any virtual machine
and again select All EMC VSI Plug-in Actions. The first option is New EMC RDM Disk.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 30
The process of deploying a new VMFS datastore or an NFS datastore is the same: the
wizard guides you through the same parameters that are required when provisioning
datastores through the ViPR UI.

The first step requires a name for the datastore and its location. If you right-clicked on a
host, the location will be pre-populated with that host. The second step requires the type of
datastore that will be deployed. The datastore can be deployed as NFS from file storage or
as VMFS through block storage.

ViPR displays a list of available storage systems that have been configured. The project
and virtual array that will be used are selected in the next step. If the project changes, all
virtual arrays available to that project will be listed. Since datastores can be shared
between hosts, this step also allows you to select more hosts to share the new datastore.

Once the virtual array is selected, the next step displays a list of available virtual pools,
their descriptions, and their available capacity in GB.

In the next step, select the capacity of your datastore.

All the parameters are validated in the final step; make sure they match everything you
selected and click Finish. VSI then communicates with ViPR and provisions the datastore.
The VSI plug-in allows you to perform ViPR user actions from a central location, the
vSphere Web Client.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 31
This lesson covered Virtual Storage Integrator. It showed how to install and configure VSI
and how to register and manage the ViPR storage system with VSI.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 32
The purpose of this lab is to deploy the EMC VSI service in vCenter server, configure the
Solutions Integration Service and use the tool to provision ViPR storage through vSphere
Web Client.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 33
The purpose of this lab was to deploy the EMC VSI service in vCenter server, configure the
Solutions Integration Service, and use the tool to provision ViPR storage through vSphere
Web Client.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 34
This lesson covers the ViPR plug-in for vRealize. Configuration requirements, plug-in
installation and configuration and provisioning through ViPR workflows are also covered.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 35
The EMC ViPR plug-in for vRealize provides an orchestration interface to the EMC ViPR
software platform. The EMC ViPR plug-in has prepackaged workflows that can be used
through the different clients supported by vRealize. The EMC ViPR plug-in workflows include
high-level workflows that carry out common activities, such as provisioning storage for an
entire cluster, as well as building-block workflows that provide more granular operations,
such as creating a ViPR volume.

The ViPR plug-in for vRealize supports multiple clients and allows for either interactive or
programmatic access. When used interactively, some workflow values are populated with
values corresponding to the context from which they were launched. When used
programmatically, the EMC ViPR plug-in provides the capability to pass new workflow
parameters through an external application.

The functionality provided with the EMC ViPR plug-in for vRealize is supported by the
following clients when vRealize is integrated with the application: vRealize REST API,
vSphere Web Client, and vCloud Automation Center.

EMC also provides documentation within an enablement kit to support integration of ViPR
and vCloud Automation Center (vCAC). No specific plug-in or other developed code is
provided to support vCAC.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 36
Before configuring the vRealize ViPR 1.1 plug-in, some configuration requirements must be
met in ViPR. Only a ViPR Tenant Administrator, or a Project Administrator for the project
being configured, can run the workflows. These ViPR roles must be configured by the
security administrators from the ViPR Administration and Self-Service UI.

A ViPR System Administrator must configure the ViPR virtual storage arrays and virtual
storage pools; these cannot be configured with the plug-in. It is important to note that
multi-volume consistency groups are not supported by the EMC ViPR plug-in for vRealize.
When used interactively, the ViPR plug-in does not allow users to select virtual pools in
which multi-volume consistency is enabled. However, if the workflows are invoked
programmatically, or through an external application, provisioning fails if a virtual pool with
multi-volume consistency enabled is selected.

The hosts must be registered and have physical access to the required virtual arrays and
networks. The vCenter Server must also be added to ViPR.

Finally, each instance of the ViPR plug-in can be used to manage one instance of ViPR. You
cannot manage multiple instances of ViPR with a single instance of the ViPR plug-in for
vRealize.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 37
vRealize must meet specific configuration requirements to support the EMC ViPR plug-in for
vRealize. The first requirement is vRealize version 5.x or above.

The EMC ViPR plug-in requires that the VMware vCenter Server plug-in for vRealize version
5.1.x is installed on the same vRealize instance running the EMC ViPR plug-in.

Enable the Share a unique session mode for the vCenter Server plug-in for vRealize before
invoking the EMC ViPR workflows. Enable this mode through the vRealize Configuration
interface, under vCenter Server, on the New vCenter Server Host tab.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 38
The EMC ViPR plug-in is called EMC-ViPR-vRealize-Plugin.dar. It is available for download
from the EMC Support website, http://support.emc.com.

Once the plug-in is downloaded, it must be installed from the Orchestrator Configuration
interface. To do so, enter the VMware vRealize server host name in a supported browser to
access the vRealize Web view. Click the vRealize Configuration link. Once you log in to the
vRealize administration interface, click Plug-ins and upload the plug-in to vRealize.

Once the plug-in is installed, restart the vRealize server host. Also restart both the vRealize
and the Orchestrator Configuration servers. Make sure to restart the vRealize server host
before restarting the services.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 39
Set the EMC ViPR settings and workflow timeout period from the EMC ViPR Configuration
page of the vRealize Configuration interface.

vRealize requires users to be part of the vRealize administrator group to use the
Orchestrator Configuration interface. The ViPR user entered in the vRealize Configuration
settings must have been assigned a ViPR Tenant Administrator role, or a Project
Administrator role for the project being configured.

While working with EMC ViPR workflows interactively, through vRealize or vSphere, the ViPR
configuration settings behave as follows: the EMC ViPR hostname/IP address, username,
password, port, and project are not presented in the ViPR workflows, and can only be
changed through the vRealize Configuration interface.

Also, the ViPR virtual array is a default setting, which may be set automatically or manually.
While executing the workflow, the target virtual array is computed when possible. For
example, if the workflow is executed within the context of a vSphere cluster, the correct
virtual array for the cluster is computed by the plug-in and set in the drop-down menu. If
the virtual array cannot be computed, the virtual array defined as the default in the
vRealize Configuration interface is set as the first option in the drop-down menu, followed
by a list of the other available virtual arrays. The screen also asks for the number of
minutes after which a ViPR workflow times out.

In the screen above, once the ViPR credentials are entered, you should click the Verify
Connection link. This results in the green message above and information about virtual
pools and virtual arrays.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 40
Once the plug-in is configured, restart the vRealize server host. Also, restart both the
vRealize and the Orchestrator Configuration server. Make sure to restart the vRealize server
host before restarting the services.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 41
To verify that the EMC ViPR plug-in was correctly installed, simply check for an EMC ViPR
folder in the Workflows library. In that section, you can review the list of existing
workflows. In our slide, we can see that after expanding the Library there is an EMC ViPR
folder. Within this folder there are specific workflows for vCenter (datastores, and hosts and
clusters) as well as general workflows for ViPR.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 42
There are two types of workflows pre-packaged with ViPR. One set of workflows is used for
common ViPR operations, while the other set consists of the "building block" workflows used
for more granular ViPR operations. Listed here are the workflows used to automate common
ViPR operations, such as provisioning storage for an entire cluster.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 43
These are the building block workflows. Building block workflows are individual operations
used to construct the common workflows. The building block workflows can also be used
individually to perform granular ViPR operations, such as creating a ViPR volume.

Missing from this list are other workflows that get the ViPR volume WWN for a given
datastore, and that create a volume from ViPR storage and present it to hosts. Additionally,
there is a non-interactive workflow that is to be used specifically with the EMC ViPR
Enablement Kit for vCAC.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 44
The workflows installed with the EMC ViPR plug-in for vRealize can be integrated with
vCenter Server. Once they are integrated, the workflows can be invoked and used through
the vSphere Web Client instead of the vRealize client.

It is important to note that EMC ViPR workflows can only be invoked from the vSphere Web
Client. The workflows cannot be accessed from the locally installed vSphere Client.

When the vSphere client imports workflow information, the ViPR hostname, username,
password, and project cannot be reconfigured when the workflows are invoked through the
vSphere Web Client.

The EMC ViPR virtual array may be set automatically or manually. While executing the
workflow, the target virtual array is computed based on the context of the workflow when it
is invoked. If it cannot be computed, the virtual array is automatically set to the virtual
array configured for the workflows in vRealize. The virtual array, however, can always be
manually overridden when executing the workflows.

If manually entering the virtual array and the virtual pool, note that the virtual array and
virtual pool names are not rendered in vSphere. The virtual array and virtual pool names
must be manually typed into the fields.

In this screen, we see that inside vCenter Server we have a vRealize section. In it we
select the vCenter Server and edit the configuration. Within it we point to the vRealize
Server to complete the integration.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 45
ViPR workflows can be executed from vRealize or from the vSphere Web Client. In our
example we have selected a general workflow, creating an EMC ViPR volume. When you
analyze the workflows, the process is very similar to using the ViPR UI to provision storage.
In our example we have a two-step wizard. In the first step we select the volume
properties, such as LUN size. In the next step we are prompted to select a virtual array
and, based on the selection, a list of available virtual pools is displayed. When these are
selected, click Submit and the ViPR workflow starts running, displaying the
pre-configuration and provisioning steps one by one. While the look and feel is somewhat
different, the steps and information are equivalent to the ViPR UI.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 46
vRealize provides a feature for generating workflow documentation that describes the workflow
inputs in detail. From within vRealize, you can right-click the EMC ViPR folder, or any of the
workflows underneath it, and generate the documentation. The output is a PDF, as seen
above. In the PDF we can see version information, what is needed from the user, and what is
returned by the application.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 47
The EMC ViPR plug-in for vRealize allows further integration with other components, such as
the ones listed above.

vRealize has a rich set of APIs that can be used to integrate it with third-party components.
In order for the APIs to work with the ViPR plug-in, the vRealize REST APIs must be version
5.1 or higher. Additionally, by default they use the host, user, project, virtual array, and
virtual pool that were configured in the vRealize Configuration.
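
As a sketch of what programmatic access looks like, the snippet below submits a workflow execution request through the Orchestrator REST API with Python's requests library. The host, port, credentials, workflow ID, and the parameter name and value are placeholders invented for this illustration; the endpoint layout (/vco/api/workflows/<id>/executions) follows the Orchestrator 5.1+ REST API.

import requests
import urllib3

urllib3.disable_warnings()

vco_base = "https://vco.example.com:8281/vco/api"  # placeholder Orchestrator host
workflow_id = "<workflow-id>"  # find IDs with GET /vco/api/workflows

# Workflow inputs; names and types depend on the workflow being invoked.
payload = {
    "parameters": [
        {"name": "size", "type": "string",
         "value": {"string": {"value": "1"}}}
    ]
}

response = requests.post(
    vco_base + "/workflows/" + workflow_id + "/executions",
    json=payload,
    auth=("vcoadmin", "<password>"),  # placeholder credentials
    verify=False,
)

# HTTP 202 means the execution was accepted; the Location header points to the
# execution resource, which can be polled for state and output parameters.
print(response.status_code, response.headers.get("Location"))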

In previous slides, we mentioned how to integrate vCenter Server with vRealize. It is
important to configure vSphere Single Sign-On between vRealize and vSphere. vRealize
must have the ViPR plug-in as well as the vCenter Server plug-in configured. The
functionality is only present in the vSphere Web Client, not in the locally installed client.
When using vRealize from within vCenter Server there are certain limitations, such as not
being able to change the hostname, username, password, and projects. The virtual pools
and virtual arrays can sometimes be computed by the workflow. If the workflow does not
compute them, it sets the default virtual array and virtual pool configured. In vCenter
Server, if you want to change them you have to type in their names, as they cannot be
determined automatically.

vCloud Automation Center expands the capabilities of vRealize by allowing for broader
operations and making available more workflows than the ones included in vRealize. A
single instance of vCloud Automation Center can be used to manage multiple instances of
ViPR; for example, the ViPR username and password can be passed in programmatically,
allowing vCloud Automation Center to have multiple user mappings. This might be done in a
multi-tenant environment or an enterprise, with self-service portals that may have different
configurations for different departments.

The ViPR hostname, username, password, project, and virtual array that are configured for
the workflows through the vRealize Configuration interface are set as defaults, but can be
overridden by the parameters passed when invoked through vCloud Automation Center.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 48
This lesson covered the ViPR plug-in for vRealize, including configuration requirements,
plug-in installation and configuration, and provisioning through ViPR workflows.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 49
The purpose of this lab is to configure the EMC ViPR Plug-in for vRealize. Once the ViPR
plug-in is integrated ViPR workflows will be executed.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 50
The purpose of this lab was to configure the EMC ViPR Plug-in for vRealize. Once the ViPR
plug-in was integrated, ViPR workflows were executed.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 51
This lesson covers vCenter Operations Manager. We will identify configuration requirements
and install and configure the ViPR Analytics pack. The VMware vRealize Operations ViPR
dashboard will also be introduced.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 52
An optional component of ViPR is the VMware vRealize Operations plug-in. VMware vRealize
Operations provides a holistic view of the health, risk, and efficiency of the virtual and hybrid
cloud infrastructure and applications, using analytics to automate performance, capacity,
and configuration management for maximum resource utilization and operational efficiency.
It provides real-time performance dashboards that let you manage SLAs by alerting you
about performance issues and capacity shortfalls before they affect end users.

When you add the EMC ViPR Analytics Pack, thereby importing ViPR inventory, metering, and
event data into VMware vRealize Operations, you can make use of the pre-configured
dashboards that aggregate compute and storage metrics for end-to-end analysis. You can
visualize anomalies, workload, capacity remaining, and reclaimable waste for all ViPR-
managed storage. When used with the VNX/VMAX adapters, it can further help improve the
health scores.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 53
VMware vCenter Operations Management Suite must meet specific configuration
requirements to support the ViPR Analytics Pack for VMware vCenter Operations
Management Suite. The first requirement is vCenter Operations Management Suite version
5.7, 5.7.1 or 5.8.

Additionally, users must be part of the VMware vCenter Operations Management Suite
administrator privilege group to install the ViPR Analytics Pack, change log files, or change
settings. To monitor ViPR using the Analytics Pack, users must have at least read-only
privileges in VMware vCenter Operations Management Suite.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 54
vCenter Operations Manager is installed as a vApp made up of a database VM and an
Analytics VM. When you log in to the UI Administration, VMware vRealize Operations
requires certain configuration parameters. The first step is to specify which vCenter Server
VMware vRealize Operations will bind to. Notice in the screen we also see the IP address of
the Analytics VM. The next step requires the admin and root default passwords to be changed.

The third step asks you to specify a vCenter server that you want to monitor with VMware
vRealize Operations. Under import data, VMware vRealize Operations searches the vCenter
server for relevant plug-ins which can be imported. The last step checks the vCenters that
are linked to the vCenter specified in previous steps. Linked vCenter Servers can also be
registered with VMware vRealize Operations.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 55
After VMware vRealize Operations is installed, it needs to be licensed. Licensing is performed
from the vSphere Web Client. Under Licensing, select the license and click Assign License
Key. vCenter Server automatically detects the unlicensed solution that can be enabled
with the license. You can select the solution and click Assign License Key. Once VMware
vRealize Operations is licensed, you can log in to the custom UI.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 56
The VMware vRealize Operations Administration page has several tabs. The Registration
tab shows all the information about registered vCenter Servers. The SMTP/SNMP tab is
where an email service can be configured for notifications. The SSL tab shows certificate
information; it allows users to install their own certificates.

The Status tab allows you to start, stop, or reset VMware vRealize Operations. It is helpful
in showing status information and version information for all the VMware vRealize
Operations components. The last portion of the Status tab allows a user to download
diagnostics information.

The Update tab is where the ViPR Analytics Pack gets uploaded. Select the
emc-vipr-dist-pak.pak file and upload it. You will receive a warning informing you that the
change cannot be reverted. After that, the EULA must also be accepted. The progress of the
update is displayed until it is complete. Once the installation is complete, you will receive a
message like the one listed in the slide. The steps during the update and their status are
also listed.

The last tab is the Account tab, which allows you to change the passwords for the admin
and root users.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 57
Once the installation is successfully complete, go back to the status tab and scroll down on
the status into the Adapter Details section. In this section, look for the EMCViPRAdapter. If
the adapter is listed, the ViPR analytics pack is successfully installed.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 58
Once VMware vRealize Operations is successfully configured, the next part of the
configuration happens from the vCenter Operations Manager Custom UI. In order to
configure the ViPR Analytics Pack, you must have administrator access to the VMware
vRealize Operations administration console.

To access the custom UI, point your web browser to the following address:

https://<Analytics VM IP>/vcops-custom

Make sure to log in with your admin account.

From the vCenter Operations Manager site, follow these steps to configure your ViPR
Analytics plug-in with the ViPR Controller:
1. Click on Environment.
2. Select Configuration.
3. Click on Adapter Instances.
4. Filter by collector and adapter kind. When the ViPR adapter is selected there are no
collectors, so click the add icon to add a new collector; the window displayed above
appears.
5. Set a name for your ViPR interface, provide the ViPR Controller IP, and set enable filtering.
6. Click on Add to add new credentials.
7. Select EMC ViPR Adapter and EMC ViPR Credentials, fill in the credentials for the
ViPR Controller, and click OK.
8. Test the configuration to see if it can connect to ViPR.
9. Click OK to conclude the new interface configuration.

Once this is configured, VMware vRealize Operations will start to receive ViPR data.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 59
Use the ViPR Capacity dashboard to monitor virtual storage pool capacity and datastore disk
usage. Notice the left tab has ViPR-related components while the right tab has vCenter
components. Additionally, the center of the dashboard has the resource selector at the top
and all the devices below it.

The ViPR Capacity dashboard has the following components:

Virtual Storage Pool Workload, which displays the provisioned capacity used by the
Datastores - in other words how much of the provisioned capacity is already used by the
consumers (datastores).

Virtual Storage Pool Capacity Remaining, which displays the free capacity for virtual
storage pools – or what percentage of capacity remains on the specific storage pools.

The Resource Selector is used to search for a specific resource - notice it provides a
search box which filters the resources to whatever is typed. When a resource is selected, all
the resources will grey out except the ones that are related to the selected resource.

The Status Boards display various status and relationship information for ViPR resources.
Notice you can filter the different thresholds depending on the level of severity that you
want displayed. Additionally, you can select which status views should be displayed.

On the right of the dashboard we see the Clusters in Workload at the top. It displays the
top 10 vSphere clusters in disk capacity workload across all datastores in the cluster.

The Datastores in Workload displays the top 10 datastores in disk capacity workload. You
can see the percentage used vs. the amount provisioned.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 60
Use the ViPR Performance dashboard to monitor storage network and datastore latency
performance data.

The ViPR Performance dashboard has the following components:

Storage Network Workload, which displays the aggregate IO utilization for all storage
ports in a network.

The Storage Port Workload displays the IO workload for storage ports. It also displays the
virtual storage pool capacity remaining; in other words, the percentage of free capacity
across all matching storage pools.

The Resource Selector and status boards have the same behavior as in the Capacity
dashboard.

On the right, we have the Datastores with Highest IO Workload: the top 10 vSphere
datastores with the highest IO workload.

Additionally, we have the datastores with the highest read latency and the datastores with
the highest write latency. As with the previous listings, each includes the top 10.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 61
ViPR At-a-Glance shows an aggregate of relevant information about the entire environment.
At the top we see capacity monitoring, and at the bottom we see performance monitoring.

The first item is the virtual storage pool capacity remaining. It shows what percentage of
the allocated capacity is available.

The next screen shows the virtual storage pool workload, which shows a rating of the
amount of IOs moving through the virtual storage pools.

The third section on the top-right shows the vSphere cluster capacity workload.

On the performance portion at the bottom-right we have the storage network workload. We
see the VSANs that are available in the environment and colors for the amount of IOs that
are going through them.

In the bottom-middle screen we see the top 10 storage port workload, and on the right
we see the top 10 vSphere datastores with the highest latency.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 62
In a ViPR environment, typical deployments are expected to use one instance of VMware
vRealize Operations with one instance of ViPR. Depending on the size of the deployment, it
is likely ViPR will scale better than VMware vRealize Operations.

VMware vRealize Operations has no insight into deleted or removed objects; the VMware
vRealize Operations topology will continue to show those resources. To stop showing them,
the VMware vRealize Operations objects need to be manually deleted or filtered using
the widgets on the VMware vRealize Operations UI.

VMware vRealize Operations polls ViPR for information every five minutes. For this reason
the previous slide does not show any performance information: ViPR had only recently been
discovered. This configuration setting can be changed manually; however, lowering the
polling interval increases CPU and memory utilization.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 63
This lesson covered vCenter Operations Manager. We identified configuration requirements
and installed and configured the ViPR Analytics pack. The VMware vRealize Operations ViPR
dashboard was also introduced.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 64
The purpose of this lab is to install and configure the ViPR Analytics Pack for VMware
vRealize Operations.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 65
The purpose of this lab was to install and configure the ViPR Analytics Pack for VMware
vRealize Operations.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 66
This module focused on ViPR integration with VMware products. We covered VSI for
vSphere, the ViPR plug-in for vRealize, and VMware vRealize Operations.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 67
Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with VMware 68
This module focuses on ViPR integration with 3rd party platforms such as Microsoft SCVMM.
It describes both platforms and the integration components and then covers how to
configure and provision through ViPR plug-ins from these platforms.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with Microsoft 1
In this diagram, we can see the ViPR controller with the storage arrays below it. We see
how, through the use of APIs, ViPR can communicate with SCVMM through the ViPR add-in
for SCVMM.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with Microsoft 2
The documentation for Microsoft SCVMM integration is available in the support.emc.com
website. It contains installation and configuration guides for ViPR integration with SCVMM.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with Microsoft 3
This lesson covers the ViPR Add-in for SCVMM configuration and installation steps. ViPR
settings will be configured and SCVMM will be used to provision ViPR storage.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with Microsoft 4
The EMC ViPR Add-in for System Center Virtual Machine Manager (SCVMM) makes ViPR
prepackaged services available within the SCVMM Console. Virtual Machine Manager (VMM)
is a management solution for virtualized datacenters that enables you to configure and
manage virtualization hosts, networking, and storage resources for the creation of virtual
machines and services for private clouds.

The ViPR prepackaged services for SCVMM include host and cluster services, as well as
pass-through disk services (provision a volume for a cluster, expand a volume for a cluster,
delete a volume, create pass-through disk, and expand pass-through disk).

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with Microsoft 5
EMC ViPR and SCVMM must meet specific configuration requirements before using the EMC
ViPR Add-in for SCVMM. As a general configuration requirement, the hosts must be added to
ViPR before they can be managed by the ViPR Add-in for SCVMM. Before using any ViPR
add-in services, EMC ViPR Tenant Administrators can either add the hosts to ViPR after the
EMC ViPR Add-in for SCVMM is installed, or add the hosts using the EMC ViPR Administration
and Self-Service UIs. The user adding the hosts must have the EMC ViPR Tenant
Administrator role.

The EMC ViPR environment must meet specific configuration requirements to support the
EMC ViPR Add-in for SCVMM. The EMC ViPR user that will use the EMC ViPR Add-in for
SCVMM must have been assigned an EMC ViPR Tenant Administrator role, or a Project
Administrator role for the project resources being configured. EMC ViPR roles are configured
by EMC ViPR Security Administrators from the ViPR Administration and Self-Service UI. EMC
ViPR resources, such as virtual storage arrays (virtual arrays) and virtual storage pools
(virtual pools), must have been configured through the EMC ViPR Administration and Self-
Service UI.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with Microsoft 6
System Center Virtual Machine Manager (SCVMM) must meet specific configuration
requirements to support the EMC ViPR Add-in for SCVMM. The EMC ViPR Add-in is supported
on Microsoft System Center 2012 SP1 - Virtual Machine Manager. EMC recommends
installing the Microsoft Administration Console (KB2826392) update package. You can
install the package from the Microsoft support page for System Center 2012 SP1
(http://support.microsoft.com/kb/2802159). Locate the Virtual Machine Manager -
Administration Console (KB2826392) download link and follow the instructions to install the
package. Microsoft Hyper-V Server 2008 R2 and Microsoft Hyper-V Server 2012 are
supported by the EMC ViPR Add-in. The services provided with the EMC ViPR Add-in for
SCVMM can only be used through the VMM console, and cannot be integrated with other
applications.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with Microsoft 7
Log on to an instance of Microsoft System Center Virtual Machine Manager 2012 SP1 as a
domain user with local administrator privileges. This user must have the ViPR role of Tenant
Administrator or Project Administrator.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with Microsoft 8
Launch the SCVMM console and connect to the SCVMM server using the current Microsoft
Windows session identity.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with Microsoft 9
Click Settings in the console’s lower section of the left-hand side pane.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with Microsoft 10
The Settings “ribbon” appears across the top of the console.

Click the Import Console Add-in button. The Import Console Add-in Wizard will open up.

Select the add-in to import by browsing to its location.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with Microsoft 11
You will receive a message stating that the add-in is valid but has warnings, which indicates
that the add-in is self-signed. Check the "Continue installing this add-in anyway" box and
click Next. If the Next button is greyed out even after checking "Continue installing this
add-in anyway," make sure the KB update specified in the requirements was installed.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with Microsoft 12
In the last screen of the wizard, review the settings and click Finish to begin the
installation.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with Microsoft 13
The Jobs screen will show that the add-in has been installed. You can wait for the
installation to complete or close the window. The installation will continue even though the
job screen is closed.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with Microsoft 14
We are now back in the “VMs and Services” section of the SCVMM console. Note that the
“All Hosts” section on the left pane of the UI lists three hosts. Those are Hyper-V servers.
The one highlighted in green, w2k8-83-181-HYP, is a server also found in ViPR’s physical
host inventory.

EMC ViPR Add-in for SCVMM can add discovered hosts to ViPR if they are not already
present in ViPR. If you choose to do this, keep in mind that the add-in will register all the
SCVMM hosts in ViPR, since the add-in cannot register individual hosts. Individual hosts can
be added to ViPR only through the ViPR UI, its API, or its CLI.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with Microsoft 15
Notice what happens when we click on Hyper-V server w2k8-83-181-HYP: the EMC ViPR
Add-in icon, previously grayed out, becomes enabled. This is because the host has already
been discovered in the ViPR UI.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with Microsoft 16
From the ViPR section of SCVMM, click the EMC ViPR Add-in icon, which opens the
add-in interface. Click the Settings link.

The ViPR Configuration Settings interface opens. There are two tabs in it. The
Preferences tab shows values that can be left as they are. The ViPR Credentials tab shows
the values that we need to set:

• EMC ViPR Hostname: the ViPR virtual IP address.

• EMC ViPR Port: 4443

• EMC ViPR Username: <domain user credential>@domain (for instance,
[email protected]).

• EMC ViPR Password: the corresponding password.

• Click the Apply button.
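
These same credentials can be sanity-checked out of band against the ViPR REST API, which authenticates over port 4443 and returns a token in the X-SDS-AUTH-TOKEN response header. The following is a minimal sketch with placeholder values, assuming the Python requests library:

import requests
import urllib3

urllib3.disable_warnings()  # ViPR presents a self-signed certificate by default

vipr_vip = "192.0.2.20"  # placeholder; the ViPR virtual IP entered above
user = "[email protected]"  # the same domain user entered above

# GET /login with basic authentication; a 200 response carrying an
# X-SDS-AUTH-TOKEN header confirms that the credentials are valid.
response = requests.get(
    "https://" + vipr_vip + ":4443/login",
    auth=(user, "<password>"),  # placeholder password
    verify=False,
)
print(response.status_code)
print("token present:", "X-SDS-AUTH-TOKEN" in response.headers)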

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with Microsoft 17
In our example we are logged in to Microsoft System Center Virtual Machine Manager 2012
SP1 as a domain user with local administrator privileges. The user has the ViPR role of
Tenant Administrator; it could have had a ViPR Project Administrator role instead.

Click VMs and Services in the lower section of the console's left-hand pane. Click
Hyper-V server w2k8-83-181-HYP in order to enable the EMC ViPR Add-in icon.
Click the EMC ViPR Add-in icon, which opens the add-in interface. Click the Provision
a volume for a Cluster or Host icon.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with Microsoft 18
The corresponding panel appears. Most of the information needed will be filled out
automatically. Set the size of the volume and specify the virtual pool if you are not satisfied
with the automatic setting. Make sure that the selected virtual pool is a block pool. Fibre
Channel should be the protocol. Click Next.

When the "Enter Windows Drive Parameter" panel appears, accept the default values and
click Preview. Review your configuration and click Finish if satisfied.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with Microsoft 19
Follow the execution of the provisioning job. The ViPR Add-in will show the steps it has
taken to complete the operation.

Notice the second entry from the top: it tells us that the ViPR Add-in has started to create a
volume called VMM-00041, which is 1GB in size.

The third entry states that the volume has been successfully created with a World Wide
Name of 60000970000195900205533030313936. This number has been provided by ViPR.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with Microsoft 20
In the ViPR Add-in for SCVMM, the fourth line from the top marks the beginning of the
steps to assign the volume to the host, Hyper-V Server w2k8-83-181-HYP.vipr1npr.edu.
Line 5 indicates that the volume has been exported to the host, and line 6 tells us that the
volume was successfully presented to the host.

The next three entries reveal additional details about the presentation of the volume. Do
not be misled by the add-in's use of the expression "cluster shared volume," which seems
to indicate that the volume is being presented to a Hyper-V cluster rather than to a single
server. That is not the case: the volume is presented to Hyper-V Server w2k8-83-181-
HYP.vipr1npr.edu, but the add-in uses a clustering expression by default, since it is
common to find clustered virtualization servers in Hyper-V production environments.

The last entry gives us the name of the physical disk presented to the Hyper-V Server,
namely \\.\PHYSICALDRIVE8.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with Microsoft 21
Now go to the Jobs section of the SCVMM console and find the entry about the provisioning
of the physical disk. If you open the previous "Refresh host" entry, you will find it empty,
which means that the ViPR Add-in for SCVMM polled the Hyper-V Server twice to obtain the
physical disk information, getting no response the first time.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with Microsoft 22
This lesson covered the ViPR add-in for SCVMM configuration and installation steps. ViPR
settings were configured and SCVMM was used to provision ViPR storage.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with Microsoft 23
In this lab you will configure and use the EMC ViPR add-in for Microsoft System Center
Virtual Machine Manager (SCVMM).

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with Microsoft 24
In this lab you configured and used the EMC ViPR add-in for Microsoft System Center Virtual
Machine Manager (SCVMM).

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with Microsoft 25
This module covered ViPR integration with 3rd party platforms such as Microsoft and
SCVMM. It described both platforms and the integration components and then covered how
to configure and provision through ViPR plugins from these platforms.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with Microsoft 26
This module focuses on ViPR integration with 3rd party platforms such as OpenStack. It
describes the platforms and the integration components, and then covers how to configure
and provision ViPR storage through the OpenStack Cinder driver.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with OpenStack 1
In this diagram we can see the ViPR controller with the storage arrays below it. We see that
through the use of APIs, ViPR can communicate with OpenStack using the cinder driver.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with OpenStack 2
The documentation for the OpenStack integration is available on the support.emc.com
website. It contains installation and configuration guides for ViPR integration with OpenStack.

The Cinder driver itself, however, is hosted on the GitHub website, consistent with the open
infrastructure of OpenStack.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with OpenStack 3
This lesson provides an overview of ViPR integration with OpenStack and of the components
involved.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with OpenStack 4
This lesson provided an overview of ViPR integration with OpenStack and of the components
involved.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with OpenStack 5
This lesson covers the installation of the Cinder driver, configuration of ViPR and the Cinder
driver and ViPR-managed storage provisioning.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with OpenStack 6
When it comes to ViPR and OpenStack, EMC plans to develop and release adapters or
drivers that will support multiple storage platforms. These ViPR drivers will provide both
object and block storage capabilities on various types of underlying physical storage. The
benefit is that the drivers will not need to be changed as different types of storage are
introduced into the OpenStack environment.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with OpenStack 7
The EMC ViPR Cinder driver contains both an ISCSIDriver and a FibreChannelDriver, with
the ability to create/delete and attach/detach volumes, create/delete snapshots, and so on.

In order for the Cinder driver to work, a minimum version of ViPR 1.1 is required. The node
running Cinder requires the ViPR CLI to be installed and configured.

With the Cinder driver you can provision iSCSI and Fibre Channel volumes through
OpenStack. The API calls from OpenStack are translated into ViPR API calls, and from ViPR
down to the storage arrays. The ViPR Cinder driver supports creating, deleting, attaching,
and detaching volumes; creating and deleting snapshots; creating a volume from a
snapshot; copying an image to a volume and a volume to an image; getting volume stats;
and cloning and extending a volume.
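
Once the driver is configured, provisioning goes through the standard OpenStack interfaces. As an illustration only, the sketch below uses the python-cinderclient library (the v1 API of that era) with placeholder credentials and a hypothetical ViPR-backed volume type name:

from cinderclient import client

# Placeholder Keystone credentials and endpoint; substitute your own.
cinder = client.Client('1', 'admin', '<password>', 'admin',
                       'http://keystone.example.com:5000/v2.0')

# Create a 1 GB volume from a volume type that maps to a ViPR virtual pool;
# the ViPR Cinder driver translates this call into ViPR API requests.
volume = cinder.volumes.create(size=1,
                               display_name='vipr-demo',
                               volume_type='ViPR-HighPerformance')

# Snapshots are supported by the driver as well.
snapshot = cinder.volume_snapshots.create(volume.id,
                                          display_name='vipr-demo-snap')
print(volume.id, snapshot.id)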

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with OpenStack 8
You can install the ViPR command line interface executable directly from the ViPR appliance
onto a supported Linux host. Before you begin, you need access to the ViPR appliance host
and root access to the Linux host.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with OpenStack 9
The EMC ViPR environment must meet specific configuration requirements to support the
OpenStack Cinder driver. ViPR users must be assigned a Tenant Administrator role, or a
Project Administrator role for the project being used. ViPR roles are configured by ViPR
Security Administrators; consult the EMC ViPR documentation for details.

The following configuration must have been done by a ViPR System Administrator, using the
ViPR UI, ViPR API, or ViPR CLI: ViPR virtual assets, such as virtual arrays and virtual pools,
must have been created.

Each virtual array designated for use with the OpenStack iSCSI driver must have an IP
network created with appropriate IP storage ports. Multi-volume consistency groups are not
supported by the ViPR Cinder driver; ensure that the multi-volume consistency option is not
enabled on the virtual pool within ViPR.

Each instance of the ViPR Cinder driver can be used to manage only one virtual array and one
virtual pool within ViPR. The ViPR Cinder driver requires one virtual storage pool with the
following requirements (non-specified values can be set as desired):

• Storage Type: Block
• Provisioning Type: Thin
• Protocol: iSCSI or Fibre Channel or both
• Multi-Volume Consistency: DISABLED
• Maximum Native Snapshots: A value greater than 0 allows the OpenStack user to take
snapshots

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with OpenStack 10
You can download the EMC ViPR Cinder driver from the following location:
https://github.com/emcvipr/controller-openstack-cinder.

To configure it, copy the vipr subdirectory to the cinder/volume/drivers/emc directory of
your OpenStack node(s) where cinder-volume is running. This directory is where the other
Cinder drivers are located.

Modify the /etc/cinder/cinder.conf file with the parameters specified above. To use the FC
driver, replace the volume_driver line with:

volume_driver = cinder.volume.drivers.emc.vipr.emc_vipr_fc.EMCViPRFCDriver

Modify the rpc_response_timeout value in /etc/cinder/cinder.conf to at least 5
minutes (300 seconds); if the value doesn't exist, add it.

Restart the cinder-volume service and create OpenStack volume types with the cinder
command.

Finally, map the OpenStack volume type to the ViPR virtual pool.
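
Putting these steps together, a hedged cinder.conf fragment and type-mapping commands
might look like the following (the vipr_* parameter names follow the driver's README; all
values are placeholders, and the virtual pool name must match the one defined in ViPR):

    # /etc/cinder/cinder.conf (iSCSI driver shown)
    volume_driver = cinder.volume.drivers.emc.vipr.emc_vipr_iscsi.EMCViPRISCSIDriver
    vipr_hostname = <ViPR_controller_IP>
    vipr_port = 4443
    vipr_username = <username>
    vipr_password = <password>
    vipr_tenant = <tenant>
    vipr_project = <project>
    vipr_varray = <virtual_array>
    rpc_response_timeout = 300

    # Create a volume type and map it to the ViPR virtual pool
    cinder type-create "ViPR High Performance"
    cinder type-key "ViPR High Performance" set ViPR:VPOOL="High Performance"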

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with OpenStack 11
Add/modify the following entries if you are planning to use multiple back-end drivers.

1. The "enabled_backends" parameter needs to be set in cinder.conf and other parameters


required in each back end need to be placed in individual back end sections (rather than the
DEFAULT section).

2. The “enabled_backends” will be commented by default, un-comment and add the multiple
back-end names.
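
For example, a hedged sketch of a two-back-end cinder.conf (section and back-end names
are arbitrary; each section carries its own driver and vipr_* parameters):

    [DEFAULT]
    enabled_backends = vipr-iscsi,vipr-fc

    [vipr-iscsi]
    volume_driver = cinder.volume.drivers.emc.vipr.emc_vipr_iscsi.EMCViPRISCSIDriver
    volume_backend_name = EMCViPRISCSIDriver
    # vipr_* parameters for this back end go here

    [vipr-fc]
    volume_driver = cinder.volume.drivers.emc.vipr.emc_vipr_fc.EMCViPRFCDriver
    volume_backend_name = EMCViPRFCDriver
    # vipr_* parameters for this back end go here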

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with OpenStack 12
The volume types and the volume-type-to-back-end associations need to be created too, as sketched below.
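
A hedged sketch of that association using standard Cinder commands (type and back-end
names are placeholders):

    cinder type-create "ViPR FC"
    cinder type-key "ViPR FC" set volume_backend_name=EMCViPRFCDriver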

It’s important to take into consideration that the OpenStack compute host must be added
to the ViPR along with its iSCSI initiator. The iSCSI initiator must be associated with the IP
network on ViPR.

For Fibre Channel the OpenStack compute host must be attached to a VSAN or fabric
discovered by ViPR. There is no need to perform any SAN zoning operations since ViPR will
perform the necessary operations automatically.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with OpenStack 13
This lesson covered the installation of the Cinder driver, configuration of ViPR and the
Cinder driver, and ViPR-managed storage provisioning.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with OpenStack 14
This module covered ViPR integration with third-party platforms such as Microsoft SCVMM
and OpenStack. It described both platforms and the integration components, and then covered
how to configure and provision storage through the ViPR plugins for these platforms.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with OpenStack 15
Copyright 2015 EMC Corporation. All rights reserved. ViPR Integration with OpenStack 16
This module focuses on the ViPR REST API. It includes an overview of the REST API,
configuration and provisioning through the API, and HTTP and API error codes.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Controller REST API 1
This lesson covers an introduction to APIs, their evolution, and the protocols and types of
APIs that exist today. It identifies benefits of using APIs and presents ViPR APIs and their
capabilities.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Controller REST API 2
Faced with growing business and technology challenges, many organizations are leveraging
cloud computing to transform their business operations. These organizations are building
software-defined cloud stacks for the next generation of applications, all powered by
Application Programming Interfaces (APIs).

Organizations are also using APIs to increase their overall agility, through automation and data
services, in delivering applications. This lesson provides an introduction to Application
Programming Interfaces and their use and explains how APIs evolved, why they are created and
how they are leveraged within a cloud infrastructure stack.

The trends in the industry landscape driving the evolution of the APIs and the transformational
value to the business are also discussed.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Controller REST API 3
An application programming interface (API) provides a means of communicating with an
application without understanding its underlying architecture. This allows programmers to
create channels of communication between applications and hardware, as well as pool
resources from multiple sources such as databases and news feeds.

APIs can be pre-compiled code to be leveraged in programming languages and can also be
web-based.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Controller REST API 4
As virtualization and cloud computing become more and more prevalent, the ability to adapt to
end-users' needs becomes increasingly difficult. Speed-to-market, elasticity, and scalability are
crucial in today's data-hungry market. APIs provide a flexible, easy-to-use method of integrating
third-party data services and capabilities into any existing architecture. This integration also
provides a layer of security between public (external) and private (internal) business
capabilities, allowing companies to provide services in any way they see fit while allowing end-
users free rein over their offerings.

For example, Amazon Web Services (AWS) provides an extensive API to their offerings for
everything from storage provisioning to managing video streaming services. Google Maps has a
widely adopted API for their mapping and navigation features while social networks such as
Facebook and Twitter allow developers comprehensive access to their users’ feeds and data
APIs.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Controller REST API 5
The intersection of cloud computing, mobile and embedded devices, and usable APIs
continues to pose an ongoing challenge. However, the evolving world of APIs is helping
to break through these challenges, as devices beyond the scope of mobile are already
being developed.

Web API-enabled devices are becoming standard as manufacturers and service providers
find innovation around every corner. Advancements in technology occur every day, and
APIs are providing better methods of communication and connectivity. In our example:

• Everyday Devices – On the road, vehicle data service APIs will soon provide public
safety information and other warnings upon entering intersections.

• Marketing and e-Commerce – APIs provided by Google Offers and Groupon will help
deliver savings to specific markets and consumers, driving up revenue.

• Social Media - Social networks such as Facebook and Twitter are becoming channels for
everything from gaming, marketing and even customer service venues. Even now,
airports are leveraging social networks to help improve our travel experience by
analyzing social behaviors such as check-ins and tweets.

• Software Defined Data Center - The future of the SDDC is to orchestrate, coordinate
and apply resources from the server, storage and networking pools to ensure that the
applications or services meet the capacity, availability and response time SLAs that the
business requires.

• Smart Devices – Near-field communication (NFC) will provide users of location-aware
devices with an immersive experience, whether it is real-time augmented reality to
assist in sight-seeing or instant, in-store product comparisons with a single gesture.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Controller REST API 6
There are many protocols by which APIs can communicate. The Hypertext Transfer Protocol
(HTTP) has become increasingly more popular as more and more developers move to
HTML5 and mobile app development. Other protocols include Remote Procedure Call (RPC),
Action Message Format (AMF), and Real Time Messaging Protocol (RTMP). Each method of
communication has its own advantages and disadvantages, which must be carefully weighed
when choosing one over the others.

HTTP – An HTTP session consists of a sequence of request and response transactions
between a client and host. The HTTP/1.1 specification includes the GET, POST, HEAD,
OPTIONS, PUT, DELETE, TRACE and CONNECT request methods by which most Internet
traffic travels. Since HTTP is the standard for Internet communication as defined by the
World Wide Web Consortium (W3C), it is a popular choice for API development.

RPC – RPC provides a client the ability to invoke a procedure directly on the server to send
and receive messages with supplied parameters and was proposed as a means of safe and
reliable communication by including common tasks such as security and data flow.
However, RPC has been implemented across many different technologies and therefore may
not provide compatibility between systems and applications.

AMF – AMF is a binary format protocol that provides fast, serialized data between a Flash
client and remote service. It is commonly used in real-time Flash applications for streaming
data. It is supported across many platforms but is only used by Flash clients, limiting the
ability to expand to new technologies such as HTML5.

WebSockets – The WebSocket protocol provides a persistent connection between client
and server and is an independent TCP-based protocol. Unlike long polling in HTTP,
WebSockets is analogous to push notifications and allows constant, real-time data to be
sent from the server to the client and vice versa. Though fairly new, this provides an
extremely fast and reliable means of communication for applications such as chat, social
media streams and multiplayer online games.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Controller REST API 7
The W3C defines a "Web service" as a software system designed to support interoperable
machine-to-machine interaction over a network. It has an interface described in a machine
readable format, such as Web Services Description Language (WSDL). Other systems
interact with the web service in a manner as defined by its description using Simple Object
Access Protocol (SOAP) messages. These are typically conveyed using HTTP with an XML
serialization in conjunction with other web-related standards.

These methods have served as the API communication standards for quite some time.
However, developing APIs with these request-based services has proven to become more
difficult and expensive with the increased demand of real-time data. To solve this problem,
developers are turning to an increasingly popular style of API known as REST. But what
does it mean to build a RESTful service? Let’s take a look.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Controller REST API 8
REST (Representational State Transfer) provides a way of interacting with application resources
in a meaningful way. It is not a “Standard” but rather a “Style” that has become a popular choice
for developing HTTP-based APIs. It allows for the development of scalable and lightweight web
applications while adhering to a set of constraints.

Web APIs have been moving to simpler, REST-based communications that provide readable,
human-friendly data access methods. RESTful APIs do not require XML-based web service
protocols such as SOAP to support their lightweight interfaces; however, they still support
XML-based and JavaScript Object Notation (JSON) data formats. These services provide an
easy-to-consume architecture and support the combination of multiple web resources into new
applications. Supporters of older methods of communication are also recognizing the popularity
of this style and adjusting accordingly. For example, WSDL version 2.0 offers support for
binding to all the HTTP request methods (beyond GET and POST as in version 1.1), allowing
for a "RESTful web services" approach.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Controller REST API 9
In order to provide access to their services or assets, a provider must first define an API,
just as they would write a contract, to ensure standard methods of communication and
usage. This provides the terms to which a developer must adhere, ensuring end-users are
provided a consistent, seamless experience in multiple locations across the Internet and
mobile applications. It also allows for better maintainability from version to version.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Controller REST API 10
APIs serve a multitude of needs in bringing services to users. They enable the sharing of data
across physical, virtual and social networks. They bring together mobile, desktop and embedded
technologies in unforeseen and unpredicted ways. And, because of their high rate of adoption
they also open new avenues of creativity and efficiency. Let’s take a look at some API categories
and how those APIs are consumed.

In social media, as third parties find better and more creative ways to leverage this data, access
to the raw data has become exponentially important. By providing data such as "likes" from
Facebook, hashtag trends from Twitter, and geo-location data from both, these networks give
developers limitless offerings to present to their users.

A mashup is considered a website or application that uses and combines data, presentation
or functionality from two or more sources to create new services. The term implies easy,
fast integration, and frequent use of open APIs and data sources to produce enriched
results that were not necessarily the original reason for producing the raw source data.

Smaller businesses tend to rely on service providers for their business communication and
infrastructure needs. In order to maintain a small, mobile footprint, startups rely on
services such as Gmail and third-party backup and recovery solutions to ensure a secure,
reliable and low cost operation. Through the APIs these providers supply, businesses are
able to integrate these services seamlessly into their new or existing business processes.

Cloud resources such as virtualized storage, compute, and networking are increasingly being
leveraged through provider APIs. Providers such as Amazon Web Services supply APIs for
every facet of their offerings, ensuring methods are available to interface with and extend
applications on the web and mobile devices, allowing for a broader infrastructure as a
service (IaaS) platform. This ensures vital automation compatibility for both new and
existing IT infrastructure stacks.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Controller REST API 11
IT vendors are moving away from specialized, proprietary and pricey hardware and focusing
more on making the entire data center more programmable via software and APIs. Currently,
administrators have the ability to perform all the required functions in order to provide a
completely virtualized data center, but this process is still being refined.

The next generation of Software Defined Data Centers will not only be able to discover what
storage assets are available and allow automated provisioning, but will also detect if “smart”
storage is available and offload processing to that array for data path and control abstraction.
Software defined storage will also be easily managed and provisioned through the use of APIs.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Controller REST API 12
As IT vendors rush to the cloud, companies are beginning to develop their cloud products as if
they were going to consume these products themselves. And, more commonly, APIs are
becoming a significant feature of these products. The trend is moving from “API as a feature” to
“API as a product” and “API as a service.” For example, many cloud providers are looking at
OpenStack, which has a robust, supported, well documented API and is a pioneer in
perpetuating open APIs.

The rise of the API has become critical because it enables customers to use products in ways the
vendor never could have imagined. APIs enable a thriving partnership ecosystem so that
providers, like Amazon Web Services, can innovate and rapidly develop features that customers
think up. AWS adds value on top of the cloud platforms so that users don’t have to. But most
importantly, the cloud API starts a conversation about what customers really want a product to
do and it gives them a way to express their requirements with code.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Controller REST API 13
The ViPR Controller REST API describes the programmatic interfaces that allow you to create,
read, update, and delete resources in a ViPR data center.

The ViPR REST API is accessible using any web browser or programming platform that
can issue HTTP requests. Issuing POST, PUT, and DELETE requests from a browser
requires special plugins, such as Internet Explorer's httpAnalyzer. You can also access
the REST API using scripting platforms such as curl and perl. EMC also provides a Java
client that wraps the ViPR Controller REST API calls in a set of Java classes.

To send requests to ViPR you must point to your ViPR Controller public IP address on
port 4443. Note, however, that the ViPR Controller uses two different ports: port 4443
(HTTPS) manages resources controlled by the block, file, and compute services, the
virtual data center services, and the tenant and project services, while port 443 (HTTPS)
manages approvals, asset options, the catalog, execution windows, orders, and schema.
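
As a hedged illustration using curl, authentication is performed against /login, and the
session token is returned in the X-SDS-AUTH-TOKEN response header, which later requests
replay (IP address, credentials, and resource IDs are placeholders):

    # Authenticate; the token appears in the X-SDS-AUTH-TOKEN response header
    curl -k -v -u root:<password> https://<controller_public_IP>:4443/login

    # Replay the token on subsequent requests
    curl -k -H "X-SDS-AUTH-TOKEN: <token>" -H "Accept: application/xml" \
        https://<controller_public_IP>:4443/projects/<project_id>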

Copyright 2015 EMC Corporation. All rights reserved. ViPR Controller REST API 14
ViPR provides three native management interfaces: an Application Programming Interface
(API), Admin and Self-Service User Interface, and a Command Line Interface (CLI). Any of
these may be used to drive discovery of the physical resources, mapping them to create
logical Virtual Arrays and Virtual Pools, making them available for provisioning and
management.

The rest of this module will focus on the API management interface.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Controller REST API 15
The ViPR API delivers an open environment enabling users to extend ViPR functionality. All
the resources managed by ViPR are accessible through its Representational State Transfer
(REST) API. The ViPR API is involved in all management, monitoring and reporting
operations within ViPR, driving the UI, CLI, plug-ins, add-ins, Solution Pack and non-EMC
developed tools.

There are multiple aspects to the ViPR API:

1. Management, monitoring and reporting of the ViPR instances. This includes activities
such as configuring ViPR call home, SMTP settings, and log management.

2. Management, monitoring and reporting of Block and File resources. This includes
discovery of physical resources, creation of ViPR abstractions, user control, provisioning,
and Block and File services such as snapshots.

3. Management, I/O, monitoring and reporting of Object Data Services. This includes
creating, reading, writing, and managing ViPR-defined objects. The ViPR API supports the
Amazon Web Services (AWS) Simple Storage Service (S3), OpenStack Swift, and EMC
Atmos storage APIs. This gives developers the flexibility to write applications that use the
AWS, OpenStack, or Atmos object storage APIs and actually place the objects into ViPR-
managed storage.

The ViPR API can be accessed using any web browser or programming platform that can
issue HTTP requests. Specific browser plug-ins are required to issue POST, PUT, and
DELETE HTTP requests. Examples include httpAnalyzer for Internet Explorer, Poster for
Firefox, and Postman for Chrome.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Controller REST API 16
This lesson covered an introduction to APIs, their evolution and the protocols and types of
APIs that exist today. It identified benefits of using APIs and presented ViPR APIs and their
capabilities.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Controller REST API 17
This lesson covers how to integrate with ViPR through the REST API. We will use the Firefox
REST client plugin here. GET and POST methods will be used to retrieve information from our
ViPR environment and to complete provisioning and configuration operations.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Controller REST API 18
Since API calls are commonly browser-based, using POST, GET, and several other methods,
the easiest way to test the ViPR APIs is by installing the RESTClient extension on Firefox.
This extension supports all HTTP methods. With it you can construct custom HTTP requests
to test requests directly against a server, in this case ViPR.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Controller REST API 19
Before configuring the RESTClient extension for Firefox, it's very important that Firefox can
communicate with the ViPR Controller. To verify this, simply try to log in to the ViPR UI through
Firefox. If you are presented with an untrusted-security warning, make sure to add the
certificate to Firefox so the connection becomes trusted. Otherwise, the REST client will
not be able to communicate.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Controller REST API 20
Login credentials must be configured in the application so that it can authenticate with the
ViPR Controller. To do that in RESTClient, select the Authentication drop-down and choose
Basic Authentication. In the Basic Authentication window, enter the ViPR credentials.

Additionally, a custom header must be added in RESTClient to let ViPR know that the
content type is XML, so that the communication happens in XML.
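
The resulting request headers look something like this (the credentials shown are
illustrative):

    Authorization: Basic <base64 of username:password>
    Content-Type: application/xml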

Copyright 2015 EMC Corporation. All rights reserved. ViPR Controller REST API 21
Now we are ready to start communicating with our ViPR Controller. Let's start with GET
methods, which return information but don't perform any configuration changes. In
RESTClient we can see the method is GET; the URL includes the element you want
information on, and you can use the request body to specify additional request details.
Notice we type the controller IP address followed by the port and the license resource.
ViPR's response can be seen at the bottom. The response header shows a status code
of 200 OK, which means we received a valid response from ViPR. The body tags contain
the license information requested.
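
As a hedged sketch of the exchange (the controller IP is a placeholder, and the response
body is abridged):

    GET https://<controller_public_IP>:4443/license
    Accept: application/xml

    Status: 200 OK
    <license>
        ... license feature details, returned as XML ...
    </license>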

Copyright 2015 EMC Corporation. All rights reserved. ViPR Controller REST API 22
As we look into the body of the response, we can see all the licensing information for ViPR
organized in XML to ease integration.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Controller REST API 23
With POST methods we can begin to send configuration commands to ViPR. In our example
above we have decided to create a new project. Notice the method is set to POST. The body
of the request includes information about the new project to create. We can see from the
response that the project was created, along with important information about the project.
The most important piece of information here is the ID of the project: everything included
within the ID tags. It can be used to refer to the project, either to GET information about it
or to reference it when sending configuration/provisioning commands.
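
A hedged sketch of such a request and the key part of its response (tenant ID and project
name are placeholders; the urn:storageos: format is how ViPR labels resource IDs):

    POST https://<controller_public_IP>:4443/tenants/<tenant_id>/projects
    Content-Type: application/xml

    <project_create>
        <name>ViPR-Project</name>
    </project_create>

    Response (abridged):
    <id>urn:storageos:Project:...</id>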

Copyright 2015 EMC Corporation. All rights reserved. ViPR Controller REST API 24
We can use several other GET commands to retrieve information about virtual pools and
virtual arrays; all of them are labeled with an ID that identifies them. In our example we
send a POST request to provision a volume, and the body of the request carries all the
information necessary to satisfy the provisioning: the name and size of the volume, as
well as the virtual array, virtual pool, and project it should come from.

The response includes the volume information, including a unique identifier; however, the
status appears as running, so the volume has not yet been provisioned. You can use the ID
of the volume to check its status through the API.
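
A hedged sketch of the volume-creation request (IDs abbreviated; element names follow
the ViPR block service but should be treated as illustrative):

    POST https://<controller_public_IP>:4443/block/volumes
    Content-Type: application/xml

    <volume_create>
        <name>vol01</name>
        <size>5GB</size>
        <varray>urn:storageos:VirtualArray:...</varray>
        <vpool>urn:storageos:VirtualPool:...</vpool>
        <project>urn:storageos:Project:...</project>
    </volume_create>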

Copyright 2015 EMC Corporation. All rights reserved. ViPR Controller REST API 25
Through the ViPR UI you can validate that the elements created through the API are present.
At the top left we can see the project created through RESTClient. Since we used that
project for our provisioning, we can also see the two hosts that were provisioned through
RESTClient.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Controller REST API 26
This lesson covered how to integrate with ViPR through REST APIs. We used the Firefox
REST plugin here. GET and POST methods were tested to retrieve information from our ViPR
environment and complete provisioning and configuration operations.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Controller REST API 27
This lesson covers the different ViPR API and HTTP error codes and their explanations.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Controller REST API 28
The table presents common ViPR API error codes.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Controller REST API 29
The table presents common ViPR API error codes.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Controller REST API 30
The table presents common ViPR API error codes.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Controller REST API 31
The table presents HTTP return codes.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Controller REST API 32
The table presents HTTP return codes.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Controller REST API 33
This lesson covered the different ViPR API and HTTP error codes and their explanations.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Controller REST API 34
The purpose of this lab is to introduce the ViPR Application Programming Interface (API)
and demonstrate its features.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Controller REST API 35
This lab introduced the ViPR Application Programming Interface (API) and demonstrated its
features.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Controller REST API 36
This module covered the ViPR REST API. It included an overview of the REST API,
configuration and provisioning through the API, and HTTP and API error codes.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Controller REST API 37
This course provided an introduction to integrating ViPR with cloud stacks for provisioning,
VCO/VCOPS orchestration, SCVMM, and OpenStack. It included how to integrate and build
applications using the ViPR Data Service.

Copyright 2015 EMC Corporation. All rights reserved. ViPR Controller REST API 38
Copyright 2015 EMC Corporation. All rights reserved. ViPR Controller REST API 39
Copyright 2015 EMC Corporation. All rights reserved. ViPR Controller REST API 40
