This module focuses on how to integrate hosts with VNX Block storage. We will organize the
process into three stages: storage networking, storage provisioning, and readying the
storage for the host.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 1
Host access to VNX Block storage requires a host having connectivity to block storage from
the VNX system. The graphic illustrates an overview of configuration operations to achieve
host block storage access to a VNX. These activities span across the host, the connectivity
and the VNX.
Although hosts can be directly cabled to the VNX, connectivity is commonly done through
storage networking, and is formed from a combination of switches, physical cabling and
logical networking for the specific block protocol. The key benefits of switch-based block
storage connectivity are realized in the logical networking. Hosts can share VNX front-end
ports; thus the number of connected hosts can be greater than the number of VNX front-
end ports. Redundant connectivity can also be created by networking with multiple
switches, enhancing storage availability. Block storage logical networking for Fibre Channel,
iSCSI and FCoE protocols is covered in the Storage Area Networking (SAN) training
curriculum available from the EMC training portal. Logical networking of block storage will
not be covered in this training.
Storage must be provisioned on the VNX for the host. Provisioning VNX storage consists of
grouping physical disk drives into a RAID Group or a Storage Pool. Block storage objects
called Logical Unit Numbers (LUNs) are then created from the disk groupings. Connected
hosts are registered to the VNX using Unisphere Host Agent software. Host registration can
also be done manually without an agent. The VNX LUN masking and mapping feature
presents LUNs to the host. The feature uses a logical object called a Storage Group which is
populated with LUNs and the registered host. A Storage Group creates a ‘virtual storage
system’ to the host, giving it exclusive access to the LUNs in the Storage Group.
The host must then discover the newly presented block storage within its disk sub-system.
Storage discovery and readying for use is done differently per operating system. Generally,
discovery is done with a SCSI bus rescan. Readying the storage is done by creating disk
partitions and formatting the partition.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 2
The process of provisioning Block storage to a host can be broken into a few basic sections.
For the purposes of this course, we will group the activities into three parts.
The first stage of our process focuses on storage networking. The steps in this section set
up the host to “see” the storage array via one of the supported storage protocols.
The next stage will deal with configuring storage that will then be provisioned, or assigned,
to the host. When these steps are completed, the storage will be “visible” in the host’s
operating system.
The final stage uses host-based tools to ready the storage volumes for the operating
system. After this stage is completed, users or applications will be able to store data on the
storage array from the host, and management options will be available through various
utilities, which will be discussed later.
Please note that, although all of the steps presented in the module are essential, the actual
sequence of steps is very flexible. What is presented in this module is merely one option for
the sequence.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 3
The topics covered in this module and the next one are covered in detail within each of the
block host connectivity guides shown. General steps and task overviews will be shown on
the slides. Please refer to the appropriate block host connectivity guide for details of
integrating a specific host via a specific protocol (Fibre Channel, iSCSI or FCoE) and HBA to
the VNX. The guides are available from the EMC online support site via Support Zone login.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 4
This lesson covers the various storage network topologies and the requirements to
implement them. We will look at identifying the different network technologies, while taking
a closer look at the Fibre Channel and iSCSI implementations. We will delve into the Fibre
Channel and iSCSI components and addressing as well as look at the various rules
associated with implementing those technologies. Finally we will look at host connectivity
requirements for the various storage network topologies.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 5
The first stage of the process is configuring storage networking so that the host and array
can “see” each other.
The first element here is to choose, or identify, the storage protocol to be used. VNX
supports FC, iSCSI, and FCoE.
Once the storage protocol is confirmed, the host will need to have an adapter of some kind
to communicate via the storage protocol. In Fibre Channel environments, a host will have a
Host Bus Adapter, or HBA, installed and configured. For iSCSI, either a standard NIC or a
dedicated iSCSI HBA can be used. FCoE uses a Converged Network Adapter (CNA). This
course focuses primarily on iSCSI and FC.
With a storage networking device ready on the host, connectivity between the host and the
array will be required. In FC, this will include setting up zoning on an FC switch. In iSCSI
environments, initiator and target relationships will need to be established.
After connectivity has been configured, the hosts need to be registered with the VNX.
Registration is usually automatic when a host agent is installed, though in some cases it
will be performed manually. In either case, the registrations should be confirmed.
Having completed the connectivity between the host and the array, you will then be in a
position to configure storage volumes and provision them to the host.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 6
The choice of block storage protocol can depend on various factors such as distance,
performance, scalability and overhead. Each technology has different strengths and
weaknesses, which are compared and contrasted here.
iSCSI uses IP networking technology over Ethernet for connecting hosts to storage. These
IP networks are very common and in wide use today. They provide unrivaled connectivity to
all parts of the world. The technology is relatively inexpensive and the skillset to connect
devices is common. IP technologies can connect hosts to storage over larger distances than
the channel technology of Fibre Channel. IP can also scale to larger numbers of hosts
connecting to the storage. If distance and/or scalability are prime concerns, iSCSI may be
the protocol of choice. Selecting iSCSI has the tradeoff of slower performance and more
overhead than Fibre Channel.
Fibre Channel uses channel technology for connecting hosts to storage. Channel
technologies are specialized and their networks create connectivity between specific
devices. The technology is relatively expensive and requires a specialized skillset to connect
devices. The channel technology of Fibre Channel performs faster and has lower protocol
overhead than iSCSI. If fast performance and/or low overhead are prime concerns, Fibre
Channel may be the protocol of choice. Selecting Fibre Channel has the tradeoff of shorter
distances and less scalability than iSCSI.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 7
The rules concerning iSCSI and FC host connectivity are shown here.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 8
In order to connect a host to a VNX storage array, you will need to meet the requirements
shown here.
Keep in mind that most vendors have management plug-ins in addition to the drivers, which
allow users to view and configure HBA/NIC parameters, such as the Emulex OCManager
plug-in or the QLogic QConvergeConsole plug-in. The fabric switches will need to be properly
configured and zoned, and there will also need to be a properly configured Ethernet
network as well.
You can successfully run Unisphere management software from a storage system or a
Windows off-array management server.
Note: If the HBA/NIC drivers are not installed, consult the current EMC® VNX™ Open
Systems Configuration Guide document on support.emc.com for the latest supported
configuration guidelines.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 9
This lesson covers storage networking with Fibre Channel, including its characteristics,
tasks, and connectivity requirements.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 10
Fibre Channel is a serial data transfer interface that operates over copper wire and/or
optical fiber at full-duplex data rates up to 3200 MB/s (16 Gb/s connection).
Networking and I/O protocols (such as SCSI commands) are mapped to Fibre Channel
constructs, and then encapsulated and transported within Fibre Channel frames. This
process allows high-speed transfer of multiple protocols over the same physical interface.
Fibre Channel systems are assembled from familiar types of components: adapters,
switches and storage devices.
Host bus adapters are installed in computers and servers in the same manner as a SCSI
Host Bus Adapter or a network interface card (NIC).
Fibre Channel switches provide full bandwidth connections for highly scalable systems
without a practical limit to the number of connections supported (16 million addresses are
possible).
Note: The word “fiber” indicates the physical media. The word “fibre” indicates the Fibre
Channel protocol and standards.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 11
World Wide Names identify the source and destination ports in the Fibre Channel network.
World Wide Node Names (WWNN) identify the host or array, while the World Wide Port
Name (WWPN) identifies the actual port. These two 64-bit names are often combined into a
single 128-bit identifier.
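As a reference point, on a Linux host the WWPN and WWNN of each Fibre Channel HBA port can usually be read from sysfs before zoning and registration; this is a sketch that assumes the standard fc_host sysfs class is present (host numbers will vary per server).
 # List the port and node WWNs of each FC HBA port (Linux fc_host sysfs)
 cat /sys/class/fc_host/host*/port_name
 cat /sys/class/fc_host/host*/node_name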

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 12
An HBA is an I/O adapter that sits between the host computer's bus and the I/O channel,
and manages the transfer of information between host and storage. In order to minimize
the impact on host processor performance, the HBA performs many low-level interface
functions automatically or with minimal processor involvement.
In simple terms, an HBA provides I/O processing and physical connectivity between a
server and storage. The storage may be attached using a variety of direct attached or
storage networking technologies, including Fibre Channel, iSCSI, or FCoE. HBAs provide
critical server CPU off-load, freeing servers to perform application processing. As the only
part of a Storage Area Network that resides in a server, HBAs also provide a critical link
between the SAN and the operating system and application software. In this role, the HBA
enables a range of high-availability and storage management capabilities, including load
balancing, fail-over, SAN administration, and storage management.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 13
VNX Fibre Channel ports can be viewed from the Unisphere GUI (as well as by using the
Navisphere CLI commands) by navigating in Unisphere to the System > Storage Hardware
menu. Once there, expand the tree for I/O Modules to view the physical locations and
properties of a given port.
The example shows SPA expanded to display the I/O modules and ports.
To display port properties, highlight the port and select Properties. The WWN can be
determined for the port as well as other parameters such as speed and initiator information.
VNX can contain FC, FCoE, and iSCSI ports depending on the I/O module installed.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 14
A Switched Fabric is one or more Fibre Channel switches connected to multiple devices. The
architecture involves a switching device, such as a Fibre Channel switch, interconnecting
two or more nodes. Rather than traveling around an entire loop, frames are routed between
source and destination by the Fabric.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 15
Under single-initiator zoning, each HBA is configured with its own zone. The members of the
zone consist of the HBA and one or more storage ports with the volumes that the HBA will
use. In the example, there is an Emulex HBA zoned to two VNX ports.
This zoning practice provides a fast, efficient, and reliable means of controlling the HBA
discovery/login process. Without zoning, the HBA will attempt to log in to all ports on the
Fabric during discovery and during the HBA’s response to a state change notification. With
single-initiator zoning, the time and Fibre Channel bandwidth required to process discovery
and the state change notification are minimized.
Two very good reasons for single-initiator zoning:
• Reduced reset time for any change made in the state of the Fabric
• Only the nodes within the same zone will be forced to log back into the Fabric after
an RSCN (Registered State Change Notification)
When a node’s state has changed in a Fabric (e.g., a cable is moved to another port), it will have
to perform the Fabric Login process again before resuming normal communication with the
other nodes with which it is zoned. If there is only one initiator in the zone (HBA), then the
amount of disrupted communication is reduced.
If you had a zone with two HBAs and one of them had a state change, then BOTH would be
forced to log in again, causing disruption to the other HBA that did not have any change in
its Fabric state. Performance can be severely impacted by this.
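As an illustration of single-initiator zoning, the following Brocade Fabric OS-style commands create one zone containing a single HBA WWPN and two VNX SP port WWPNs; the zone name, configuration name, and WWPNs are hypothetical placeholders, and other switch vendors use different syntax.
 # One zone per HBA: the HBA WWPN plus the VNX SP ports it uses (placeholder WWPNs)
 zonecreate "host1_hba0_vnx", "10:00:00:00:c9:11:22:33; 50:06:01:60:3e:a0:12:34; 50:06:01:68:3e:a0:12:34"
 cfgcreate "fabricA_cfg", "host1_hba0_vnx"
 cfgenable "fabricA_cfg"
 cfgsave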

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 16
This lesson covers storage networking with iSCSI, including its characteristics, tasks, and
connectivity requirements.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 17
iSCSI is a native IP-based protocol for establishing and managing connections between IP-
based storage devices, hosts, and clients. It provides a means of transporting SCSI packets
over TCP/IP. iSCSI works by encapsulating SCSI commands into TCP and transporting them
over an IP network. Since iSCSI is IP-based traffic, it can be routed or switched on standard
Ethernet equipment. Traditional Ethernet adapters or NICs are designed to transfer file-
level data packets among PCs, servers, and storage devices. NICs, however, do not usually
transfer block-level data, which has been traditionally handled by a Fibre Channel host bus
adapter. Through the use of iSCSI drivers on the host or server, a NIC can transmit packets
of block-level data over an IP network. The block-level data is placed into a TCP/IP packet
so the NIC can process and send it over the IP network. If required, bridging devices can be
used between an IP network and a SAN.
Today, there are three block storage over IP approaches: iSCSI, FCIP, and iFCP. There is no
Fibre Channel content in iSCSI.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 18
All iSCSI nodes are identified by an iSCSI name. An iSCSI name is neither the IP address
nor the DNS name of an IP host. Names enable iSCSI storage resources to be managed
regardless of address. An iSCSI node name is also the SCSI device name, which is the
principal object used in authentication of targets to initiators and initiators to targets. iSCSI
addresses can be one of two types: iSCSI Qualified Name (IQN) or IEEE naming convention,
Extended Unique Identifier (EUI).
IQN format - iqn.yyyy-mm.com.xyz.aabbccddeeffgghh where:
• iqn - Naming convention identifier
• yyyy-mm - Year and month when the naming authority’s domain was registered
• com.xyz - Reversed domain name of the naming authority
• aabbccddeeffgghh - Device identifier (can be a WWN, the system name, or any
other vendor-implemented standard)
EUI format - eui.64-bit WWN:
• eui - Naming prefix
• 64-bit WWN - FC WWN of the host
Within iSCSI, a node is defined as a single initiator or target. These definitions map to the
traditional SCSI target/initiator model. iSCSI names are assigned to all nodes and are
independent of the associated address.
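As a concrete illustration of the IQN format, on a Linux host using the open-iscsi software initiator the node name can be viewed (and edited if needed) in /etc/iscsi/initiatorname.iscsi; the value shown below is an example only and will differ per host and distribution.
 # Display the software initiator's IQN (open-iscsi)
 cat /etc/iscsi/initiatorname.iscsi
 InitiatorName=iqn.1994-05.com.redhat:a1b2c3d4e5f6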

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 19
Challenge Handshake Authentication Protocol, or CHAP, is an authentication scheme used
by Point to Point servers to validate the identity of remote clients. The connection is based
upon the peer sharing a password, or secret. iSCSI capable storage systems support both
one-way and mutual CHAP. For one-way CHAP, each target can have its own unique CHAP
secret. For mutual CHAP, the initiator itself has a single secret with all targets.
CHAP security can be set up either as one-way CHAP or mutual CHAP. You must set up the
target (storage array) and the initiator (host) to use the same type of CHAP to establish a
successful login.
Unisphere is used to configure CHAP on the storage array. To configure CHAP on the host,
use the vendor tools for either the iSCSI HBA or NIC installed on each initiator host. For a
QLogic iSCSI HBA, use SANsurfer software. For a standard NIC on a Windows host, use
Microsoft iSCSI Initiator software. On a Linux host, CHAP is configured by entering the
appropriate information in the /etc/iscsi.conf file.
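For example, on a Linux host using the open-iscsi software initiator (rather than the legacy /etc/iscsi.conf file mentioned above), CHAP credentials would be set in /etc/iscsi/iscsid.conf roughly as follows; the usernames and secrets are placeholders and must match the values configured on the VNX target.
 # /etc/iscsi/iscsid.conf (open-iscsi) - CHAP settings, values are placeholders
 node.session.auth.authmethod = CHAP
 node.session.auth.username = vnx_initiator_user
 node.session.auth.password = initiatorSecret123
 # For mutual CHAP, the target also authenticates to the initiator:
 node.session.auth.username_in = vnx_target_user
 node.session.auth.password_in = targetSecret123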

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 20
LAN configuration allows Layer 2 (switched) and Layer 3 (routed) networks. Layer 2
networks are recommended over Layer 3 networks.
The network should be dedicated solely to the iSCSI configuration. For performance reasons
EMC recommends that no traffic apart from iSCSI traffic should be carried over it. If using
MDS switches, EMC recommends creating a dedicated VSAN for all iSCSI traffic.
CAT5 network cables are supported for distances up to 100 meters. If cabling is to exceed
100 meters, CAT6 network cables would be required.
The network must be a well-engineered network with no packet loss or packet duplication.
When planning the network, care must be taken in making certain that the utilized
throughput will never exceed the available bandwidth.
VLAN tagging is also supported. Link Aggregation, also known as NIC teaming, is not
supported.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 21
CPU bottlenecks caused by TCP/IP processing have been a driving force in the development
of hardware devices specialized to process TCP and iSCSI workloads, offloading these tasks
from the host CPU. These iSCSI and/or TCP offload devices are available in 1 Gb/s and 10
Gb/s speeds.
As a result, there are multiple choices for the network device in a host. In addition to the
traditional NIC, there is the TOE (TCP Offload Engine) which processes TCP tasks, and the
iSCSI HBA which processes both TCP and iSCSI tasks. TOE is sometimes referred to as a
Partial Offload, while the iSCSI HBA is sometimes referred to as a Full Offload.
While neither offload device is required, these solutions can offer improved application
performance when the application performance is CPU bound.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 22
VNX SP Front End connections in an iSCSI environment consist of iSCSI NICs and TOEs. In
VNX Unisphere, right-clicking on a selected port displays the Port Properties.
This example shows an iSCSI port, Port 0, which represents the physical location of the port
in the chassis and matches the label (0 in this example) on the I/O module hardware in the
chassis. A-4 in this example means:
• A represents the SP (A or B) on which the port resides
• 4 represents the software assigned logical ID for this port.
The logical ID and the physical location may not always match.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 23
iSCSI basic connectivity verification includes Ping and Trace Route. These are available from
the Network Settings menu in Unisphere.
Ping provides a basic connectivity check to ensure the host can reach the array and vice
versa. This command can be run from the host, Unisphere and the storage system’s SP.
Trace Route provides the user with information on how many network hops are required for
the packet to reach its final destination. This command can also be run from the host,
Unisphere, and the storage system’s SP. The first entry in the Trace Route response should
be the gateway defined in the iSCSI port configuration.
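From the host side, the same basic checks can be run with standard operating system tools; the address below is a placeholder for a VNX iSCSI port IP.
 # Basic reachability and routing checks from a Linux host (use ping and tracert on Windows)
 ping -c 4 192.168.1.50
 traceroute 192.168.1.50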

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 24
This lesson covers the activities included in registering hosts with Unisphere.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 25
Registration makes a host known to the storage system and can be performed in a number
of ways:
• Automatically, by the Unisphere Agent, when it starts
• Automatically, by the Unisphere Agent, in response to a naviseccli register command
• Manually, through Unisphere
• Manually, through Navisphere CLI
Connectivity to the array depends on the protocol the host is using to connect to the
storage system. If the host is fibre attached, fabric logins tell the VNX which ports and
HBAs are connected. If the host is iSCSI attached, iSCSI logins tell the VNX which ports and
initiators (hardware or software based) are connected.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 26
In Windows, the Unisphere Host Agent (or the manual registration of a host) will inform the
storage system of how the host is attaching to the system. It will either inform the array of
the hostname and the WWNs if it’s a fibre-attached host, or it will inform the array of the
hostname and either the IQN or the EUI if it’s an iSCSI attached host.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 27
Presenting LUNs to a Windows iSCSI host is the same process as a Fibre connected host
with the exception of discovering the iSCSI targets and LUNs with the Unisphere Server
Utility.
Similar to the Host Agent, the Unisphere Server Utility registers the server’s HBA (host bus
adapter) or NICs with the attached VNX.
With the server utility you can perform the following functions:
• Register the server with all connected storage systems
• Configure iSCSI connections on this server (Microsoft initiators only)
• Verify Server High Availability

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 28
To run the Host Agent, CLI, or server utility, your server must meet the following
requirements:
• Run a supported version of the operating system
• Have EMC VNX-supported HBA hardware and drivers installed
• Be connected to each SP in each storage system either directly or through a switch
• Each SP must have an IP connection
• Have a configured TCP/IP network connection to any remote hosts that you will use
to manage the server’s storage systems, including any host whose browser you will
use to access Unisphere, any Windows Server host running Storage Management
Server software, and any AIX, HP-UX, IRIX, Linux, NetWare, Solaris, Windows
Server host running the CLI.
• If you want to use the CLI on the server to manage storage systems on a remote
server, the server must be on a TCP/IP network connected to both the remote server
and each SP in the remote server’s storage system. The remote server can be
running AIX, HP-UX, Linux, Solaris, or the Windows operating system.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 29
Depending on your application needs, you can install the Host Agent, server utility, or both
on an attached server.
If you want to install both applications, the registration feature of the server utility will be
disabled and the Host Agent will be used to register the server’s NICs or HBAs to the
storage system. Note that if the server utility is used while the Host Agent is running, a scan
for new devices will fail.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 30
If you have a Microsoft iSCSI initiator, you must install the Microsoft iSCSI Software
Initiator because the Unisphere Server Utility uses it to configure iSCSI connections.
Note: In FC configurations, do not install the server utility on a VMware Virtual Machine.
You can install the utility on a VMware ESX Server.
Do not disable the Registration Service option (it is enabled by default). The Registration
Service option automatically registers the server’s NICs or HBAs with the storage system
after the installation and updates server information to the storage system whenever the
server configuration changes (for example, when you mount new volumes or create new
partitions). If you have the Host Agent installed and you are installing the server utility, the
server utility’s Registration Service feature will not be installed.
You must reboot the server when the installation dialog prompts you to reboot. If the
server is connected to the storage system with NICs and you do not reboot before you run
the Microsoft iSCSI Software Initiator or server utility, the NIC initiators will not log in to
the storage system.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 31
Also, in Linux, the Unisphere Host Agent (or the manual registration of a host) will inform
the storage system of how the host is attaching to the system. It will either inform the
array of the hostname and the WWNs if it’s a fibre attached host, or it will inform the array
of the hostname and either the IQN or the EUI if it’s an iSCSI attached host.
EMC recommends that you download and install the most recent version of the Unisphere
Host Agent software from the applicable support by product page on the EMC Online
Support website.
1. On the Linux server, log in to the root account.
2. If your server is behind a firewall, open TCP/IP port 6389. This port is used by the Host
Agent. If this port is not opened, the Host Agent will not function properly.
3. Download the software:
a) From the EMC Online Support website, select the VNX Support by Product page
and locate the Software Downloads.
b) Select the Unisphere Host Agent, and then select the option to save the tar file to
your server.
4. Make sure you load the correct version of the package:
a) 32-bit server – rpm -ivh UnisphereHostAgent-Linux-32-noarch-en_US-version-build.noarch.rpm **
b) 64-bit server – rpm -ivh UnisphereHostAgent-Linux-64-x86-en_US-version-build.x86_64.rpm **
** Where version and build are the version number and build number of the software.
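As an illustration of steps 2 and 4 above, on a Linux server the firewall port could be opened and the package installed roughly as follows; this is a sketch only, the firewall tooling and persistence mechanism vary by distribution, and the actual rpm filename depends on the downloaded version and build.
 # Step 2: allow inbound Host Agent traffic on TCP port 6389 (iptables shown; tooling varies)
 iptables -A INPUT -p tcp --dport 6389 -j ACCEPT
 service iptables save          # persistence mechanism varies by distribution
 # Step 4: install the downloaded package (64-bit example; substitute the actual version/build)
 rpm -ivh UnisphereHostAgent-Linux-64-x86-en_US-<version>-<build>.x86_64.rpm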

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 32
In order to run CLI commands on Linux, the Host Agent configuration file must be modified
to include an entry that defines users, in lower case only, who will issue the CLI commands
as a privileged user.
1. For a local user, add an entry of the form user name (for example, user root)
2. For a remote user, add an entry of the form user name@hostname (for example, user system@IPaddress)
3. Save the file and restart the agent
 /etc/init.d/hostagent restart
4. Verify that the Host Agent configuration file includes a privileged user
 more /etc/Unisphere/agent.config

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 33
In Linux, the hostagent command can be used to start, stop, restart and provide status on
the Host Agent.
The command executes from the /etc/init.d directory once the Host Agent is installed.
The example displays a sequence to verify the agent status, stop and start the agent, and
verify the Host Agent is running by looking at the process status.
The commands used are:
• hostagent status
• hostagent stop
• ps -ef | grep hostagent
• hostagent start
• ps -ef | grep hostagent

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 34
You can verify that your Host Agent is functioning using Unisphere. Navigate to Hosts >
Hosts. Once there, click the host you want to verify and then click Properties. The example
on the left displays the Host Properties window when the Update tab has been selected to
view the LUN status. However, the Host Agent is either not installed or the agent is not
started.
On the right is the proper display when the Host Agent is started. The window displays the
agent information and privileged users list from the agent.config file. Selecting Update will
succeed.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 35
Naviseccli can also be used on Linux systems to verify that a Host Agent is functioning
properly. As shown in the example, the naviseccli port and getagent commands can be used
to verify HBA connections to the VNX and retrieve the Host Agent information.
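A sketch of the commands referenced above is shown below; the SP address and credentials are placeholders, and the exact switches available depend on the naviseccli version in use.
 # Retrieve Host Agent information via the SP (placeholder address and credentials)
 naviseccli -h 10.127.57.10 -user admin -password password -scope 0 getagent
 # List front-end port and HBA login information
 naviseccli -h 10.127.57.10 -user admin -password password -scope 0 port -list -hba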

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 36
The Unisphere Server Utility can be used on a Linux host to discover and view connected
storage information. The utility comes as an RPM package and once installed, can be
executed from the /opt/Unisphere directory by issuing a ./serverutilcli command as shown
in the example. Selecting “1” from the menu options will perform a scan and report the
information about connected storage systems.
Note that the Host Agent must be stopped for the utility to work.
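Putting these notes together, a typical invocation on the Linux host would look roughly like this, using the paths described above:
 /etc/init.d/hostagent stop      # the Host Agent must not be running
 cd /opt/Unisphere
 ./serverutilcli                 # then choose option 1 to scan for connected storage
 /etc/init.d/hostagent start     # restart the agent when finished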

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 37
ESXi hosts register automatically with the VNX even though they do not run a Host Agent.
Because there is no Host Agent, the host appears in Unisphere as 'Manually Registered',
even though the registration actually occurs automatically.
If you need to manually register an ESXi host with the VNX, navigate to Hosts > Initiators
and click the create button. The Create Initiators Record window allows manual
registration of a host which is logged in to the fabric, but does not have a Unisphere Host
Agent capable of communicating with the VNX.
To add a new host entry, the user must select the New Host radio button, and enter a
hostname, IP address, and other information in the New Initiator Information boxes. Once
complete, the host is regarded as manually registered; management of host access to LUNs
may now take place in the normal manner.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 38
The maximum number of connections between servers and a storage system is limited by
the number of initiator records supported per storage-system SP and is model dependent.
An initiator is an HBA or CNA port in a server that can access a storage system. Some HBAs
or CNAs have multiple ports. Each HBA or CNA port that is zoned to an SP port is one path
to that SP and the storage system containing that SP. Each path consumes one initiator
record. Depending on the type of storage system and the connections between its SPs and
the switches, an HBA or CNA port can be zoned through different switch ports to the same
SP port or to different SP ports, resulting in multiple paths between the HBA or CNA port
and an SP and/or the storage system. Note that the failover software environment running
on the server may limit the number of paths supported from the server to a single storage
system SP and from a server to the storage system.
Access from a server to an SP in a storage system can be:
• Single path: A single physical path (port/HBA) between the host system and the
array
• Multipath: More than one physical path between the host system and the array
via multiple HBAs, HBA ports and switches
• Alternate path: Provides an alternate path to the storage array in the event of a
primary path failure.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 39
On a properly configured host with the correct drivers loaded, users can verify host to
target connectivity by examining the Hosts > Initiators window. ESXi hosts can have either
Fibre Channel or iSCSI connectivity. This example shows an ESXi Fibre Channel connection.
The initiator is attached to host esx-57-181 and its IP address is currently 10.127.57.181.
Users can validate the initiator is registered and logged into the array.
The green Status icon indicates the host is currently connected to a Storage Group.
The Host Initiators window can display up to 1000 hosts at one time. Click More to display
the next 1000 hosts or click Show All to display all the remaining hosts/initiators. Each
host initiator record includes the following information:
• Status: Shows whether the host is connected to a Storage Group
• Initiator Name: The iSCSI IQN, or the FC WWNN:WWPN
• SP Port: The port to which the initiator is connected on the array
• Host Name: The hostname of the system connecting to the array
• Host IP Address: IP address of the host connecting to the array
• Storage Group: Storage Group with which the host is associated
• Registered: Whether or not the initiator has registered with the array
• Logged In: Shows whether the initiator has logged in
• Failover Mode: Mode of failover to which the port has been set
• Type: Type of initiator
• Protocol: Which protocol the initiator is using
• Attributes: The attributes of the initiator

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 40
To view a Fibre Channel initiator record, highlight the entry and select Properties. Initiator
values are detailed below:
• Hostname: The name of the host in which the initiator HBA is located
• IP Address: The IP address of the host in which the initiator HBA is located
• Initiator Type: Used for specific host OS configurations
(Select CLARiiON/VNX unless instructed otherwise.)
• HBA Type: Host or Array
• Array CommPath: Indicates the status of the communication path for this initiator
(Enabled or Disabled)
• Failover Mode: Indicates the failover mode for the selected initiator (0, 1, 2, 3, or 4)
• Unit Serial Number: Reports the serial number on a per LUN, or per storage system,
basis to the host operating system
• Storage Group: All Storage Groups to which this initiator has access
• SP-Port IQN/WWN: The iSCSI qualified name or World Wide Name of the selected SP
port
• SP Port Physical Location: The location of the SP port within the enclosure
• SP Port: The logical target ID (not physical ID) of the SP port to which the initiator
connects
Note: Different host failover software requires different settings for Failover Mode and
ArrayCommPath. Be sure to refer to the Knowledgebase Article 31521, available in
Knowledgebase on the EMC Online Support website, for correct failover values.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 41
iSCSI initiators are identified in Unisphere by the iqn.yyyy-
mm.naming_authority:unique_name naming convention. VNX ports must be configured
with an IP address, which is done by opening an iSCSI properties page from the Settings >
Network > Settings for Block menu.
Similar to Fibre Channel, iSCSI initiators should show a value of Registered and Logged in
along with an SP port, hostname and IP address, and Storage Group. The example displays
an iSCSI connection on SP port B-6v0 for Host esxi05a with an IP Address of
192.168.1.205. This initiator is logged in and registered and is currently not connected to a
Storage Group as indicated by the yellow triangle with the exclamation point.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 42
The initiator information window for iSCSI connections provides the same information as the
Fibre Channel window. Here we see the information for Host esxi05a. Note that the SP port
contains a different identifier, B-6v0. This is because the IP address is assigned to a virtual
port on physical port Slot B2 Port 2.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 43
After confirming that host initiators are successfully registered, these hosts are available to
be provisioned storage from Unisphere.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 44
This lesson covers the storage architecture and configuration options for Pools, RAID
Groups, and LUNs. We will also discuss LUN masking.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 45
The second stage of our process is intended to create storage on the array and provision it
so that the host can discover the volume with its operating system or volume management
software.
The first element here is to choose the storage architecture to be used. This includes
determining which disk, Storage Pool (Pool or RAID Group), and LUN type to configure, as
well as choosing between classic LUNs or using storage Pools.
Once the storage architecture is determined, Unisphere can be used to create the storage
objects that comprise the storage configuration. When this stage is complete, one or
more LUNs will be available to provision to hosts.
With the LUNs ready, the next step is to provision the storage by putting both the LUN and
host into the same Storage Group. This effectively “connects” the host’s initiator to the
LUN, and the host is ready to discover the volume locally.
After storage has been provisioned to the host, the host needs to discover the LUN. This
can be done with native operating system tools, or with volume management software.
Having completed the provisioning of the storage to the host, you will then be in a position
to use host-based utilities to configure and structure the LUN so that it is usable for write
and read I/O from the operating system.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 46
Before provisioning storage to hosts, it is important to understand the options in the
underlying storage architecture. We will now look at the various choices regarding storage
media, Pool and RAID configurations, and LUN types.
Block storage is provisioned from the VNX array as LUNs. Even in VNX File implementations,
the storage is allocated to Data Movers as LUNs. So, LUNs are the storage object seen by
the connected hosts.
LUNs, in turn, are allocated from either Pools or RAID Groups, and can be Thin provisioned
or Thick provisioned.
The Pools and RAID Groups are the storage objects composed of the actual storage
media, which can be solid state or spinning drives.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 47
Some general guidelines for selecting drive types should be followed to optimize and enable
good performance on a VNX storage system.
The disk technologies supported are Flash Drives, Serial Attached SCSI (SAS) drives, and
Near-line Serial Attached SCSI (NL-SAS) drives.
Flash Drives are EMC’s implementation of solid state disk (SSD) technology using NAND
single layer cell (SLC) memory and dual 4 Gb/s drive interfaces. Flash drives offer increased
performance for applications that are limited by disk subsystem latencies. Its technology
significantly reduces response times to service a random block because there is no seek
time—there is no disk head to move.
Serial Attached SCSI drives offer 10,000 or 15,000 RPM operation speeds in two different
form factors: 3.5 inch and 2.5 inch. The 2.5-inch drive technology provides significant
density and power improvements over the 3.5-inch technology.
NL-SAS drives are enterprise SATA drives with a SAS interface. NL-SAS drives offer larger
capacities and are offered only in the 7200 RPM speed and in a 3.5-inch form factor.
Matching the drive type to the expected workload is key to achieving expected results.
When creating storage pools, drives in a storage pool are divided into three tiers based on
performance characteristics: Extreme Performance Tier (Flash), Performance Tier (SAS) and
Capacity Tier (NL-SAS).
Basic rules of thumb to determine the required number of drives used to support the
workload can be found in the EMC VNX2 Unified Best Practices for Performance guide found
on the EMC Online Support website.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 48
Before provisioning LUNs, it will need to be decided if the LUNs will be built from a Pool or
from a RAID group.
The left side of this slide shows an example of a Pool. A Pool can be made up of different
tiers of storage. Each tier uses different types of disks and can have a different RAID type.
It is strongly recommended that all disks in the tier be in the same configuration (in this
example, all disks in the Performance Tier are in a RAID 5 (8+1) configuration).
Pools may include Solid-State-Disk (SSD), also called Flash drives, for the extreme
performance tier, SAS drives for the performance tier, and Near-Line SAS drives for the
capacity tier. Pools support both thick and thin LUNs, as well as support for features such
as Fully Automated Storage Tiering (FAST) which enables the hottest data to be stored on
the highest performing drives without administrator intervention.
Pools give the administrator maximum flexibility and are easiest to manage; therefore,
they are recommended.
The right side of the slide shows RAID group examples. Each RAID group has an
automatically assigned name beginning with RAID Group 0 (the RAID Group ID can be
defined by the user thus affecting the RAID group name. For example, if a RAID group ID of
2 is chosen, then the RAID Group Name would be set to RAID Group 2). A RAID group is
limited to using a single drive type (Flash, SAS, or NL-SAS) and a single type of RAID
configuration. The needs of the host being attached to the storage will largely determine
the drive technology and RAID configuration used.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 49
Pools are dedicated for use by pool LUNs (thick or thin), and can contain a few or hundreds
of disks. Best practice is to create the pool with the maximum number of drives that can
initially be placed in the pool at creation time based on the model of the array. Because a
large number of disks can be configured, workloads running on pool LUNs will be spread
across many more resources than with RAID groups, requiring less planning and management.
Use homogeneous pools for predictable applications with similar and expected performance
requirements. Use heterogeneous pools to take advantage of the VNX FAST VP feature, which
facilitates the automatic movement of data to the appropriate tier.
The RAID configuration for drives within a pool is performed at the tier level. Within each
tier, users can select from five recommended RAID configurations using three RAID types
that provide an optimal balance of protection, capacity and performance to the pool. Mixing
RAID types within a pool is supported and allows for using best practice RAID types for the
tiered drive types in a pool. Keep in mind that once the tier is created, the RAID
configuration for that tier in that pool cannot be changed.
Multiple pools can be created to accommodate separate workloads based on different I/O
profiles enabling an administrator to dedicate resources to various hosts based on different
performance goals.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 50
Some general guidelines should be followed to optimize and enable good performance on a
VNX storage system.
For best performance from the least amount of drives, ensure the correct RAID level is
selected to accommodate the expected workload.
RAID 1/0 is appropriate for heavy transactional workloads with a high rate of random writes
(greater than 25%).
RAID 5 is appropriate for medium-high performance, general workloads, and sequential I/O.
RAID 6 is appropriate for NL-SAS read-based workloads and archives. This protection
provides RAID protection to cover the longer rebuild times of large drives.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 51
RAID groups are limited to a single drive type and a maximum limit of 16 drives. For parity
RAID levels, the higher the drive count, the higher the capacity utilization as well as a
higher risk to availability. With RAID groups, administrators need to carefully create storage
since there is a tendency to over-provision and underutilize the resources.
When creating RAID groups, select drives from the same bus if possible. There is little or no
boost in performance when creating RAID groups across DAEs (vertical). There are of
course exceptions typically in the case where FAST Cache drives are used (see the EMC
VNX2 Unified Best Practices for Performance guide for details).
RAID 5 (4+1) RAID groups have an advanced setting to select a larger element size of 1024
blocks. This setting is used to take advantage of workloads consisting of predominantly
large-block random read I/O profiles, such as data warehousing.
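For reference, a RAID group and a Classic LUN of the kind described here can also be created from the command line; this is a sketch only, with placeholder SP address, disk IDs, and LUN numbers, and the exact syntax should be confirmed against the naviseccli documentation for your release.
 # Create RAID group 2 from five disks (bus_enclosure_disk IDs are placeholders)
 naviseccli -h 10.127.57.10 createrg 2 0_0_10 0_0_11 0_0_12 0_0_13 0_0_14
 # Bind a 100 GB RAID 5 Classic LUN (LUN 20) on that RAID group, owned by SP A
 naviseccli -h 10.127.57.10 bind r5 20 -rg 2 -cap 100 -sq gb -sp a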

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 52
The capacity of a thick LUN, like the capacity of a RAID group LUN, is distributed equally
across all the disks in the Pool on which the LUN was created. This behavior is exactly the
same as when data is added from hosts and when additional disks are added to the Pool.
When this happens, data is distributed equally across all the disks in the Pool.
The amount of physical space allocated to a thick LUN is the same as the user capacity that
is seen by the server’s operating system and is allocated entirely at the time of creation. A
thick LUN uses slightly more capacity than the amount of user data written to it due to the
metadata required to reference the data.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 53
The primary difference between a thin LUN and a thick LUN is that thin LUNs present more
storage to a host than is physically allocated. Thin LUNs incrementally add to their in-use
capacity and compete with other LUNs in the pool for the pool’s available storage. Thin
LUNs can run out of disk space if the underlying Pool to which they belong runs out of
physical space. As with thick LUNs, thin LUNs use slightly more capacity than the amount of
user data due to the metadata required to reference the data.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 54
Classic LUNs are created on RAID groups and like thick LUNs, the entire space consumed by
the LUN is allocated at the time of creation. Unlike Pool LUNs, a Classic LUN’s LBAs
(Logical Block Addresses) are physically contiguous. This gives the Classic LUN predictable
performance and data layout.
Classic LUNs can be seen and accessed through either SP equally. With Classic LUNs, MCx
supports Active-Active host access (discussed in another module), providing simultaneous
SP access to the LUN. Thus LUN trespass is not required for high availability. If a path or SP
should fail, there is no delay in I/O to the LUN. This dual SP access also results in up to a
2X boost in performance.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 55
Access Logix is a factory installed feature on each SP that allows hosts to access data on
the array and the ability to create storage groups on shared storage systems.
A storage group is a collection of one or more LUNs or metaLUNs to which you connect one
or more servers. A server can access only the LUNs in the storage group to which it is
connected. It cannot access LUNs assigned to other servers. In other words, the server
sees the storage group to which it is connected as the entire storage system.
Access Logix runs within the VNX Operating Environment for Block. When you power up the
storage system, each SP boots and enables the Access Logix capability within VNX
Operating Environment (VNX OE ). Access Logix cannot be disabled.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 56
Access Logix allows multiple hosts to connect to the VNX storage system while maintaining
exclusive access to storage resources for each connected host. In effect, it presents a
‘virtual storage system’ to each host. The host sees the equivalent of a storage system
dedicated to it alone, with only its own LUNs visible to it. It does this by “masking” certain
LUNs from hosts that are not authorized to see them, and presents those LUNs only to the
servers that are authorized to see them.
Another task that Access Logix performs is the mapping of VNX LUNs, often called Array
Logical Units (ALUs), to host LUNs or Host Logical Units (HLUs). It determines which
physical addresses, in this case the device numbers, each attached host will use for its
LUNs.
Access to LUNs is controlled by information stored in the Access Logix database, which is
resident in a reserved area of VNX disk - the PSM (Persistent Storage Manager) LUN. When
host agents in the VNX environment start up, typically shortly after host boot time, they
send initiator information to all storage systems they are connected to and this information
gets stored in the Access Logix Database which is managed by the Access Logix Software.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 57
This slide shows a conceptual diagram of a storage system attached to 2 hosts. Each host
has a storage group associated with it – storage group A for Server A, and storage group B
for Server B. In this example, the LUNs used on the storage system are sequential, from 0
through 7. However, they don’t need to be. Each LUN on the storage system (ALU, or Array
Logical Unit) has been mapped to a LUN number (sometimes called the LUN alias) as seen
by the host (HLU, or Host Logical Unit). It is important to note that each host sees LUN 0,
LUN 1, etc, and that there is no conflict due to multiple instances of the LUN number being
used. The mappings are stored in an Access Control List, which is part of the Access Logix
database.
Each server sees the LUNs presented to it as though they are the only LUNs on the ‘virtual
storage system’, represented by the storage group.
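The masking and mapping shown in the diagram can also be expressed with the storagegroup commands in naviseccli; the following sketch uses placeholder names, addresses, and LUN numbers, mapping array LUN (ALU) 6 to host LUN (HLU) 0 for Server A.
 # Create a Storage Group and connect a registered host to it (placeholder values)
 naviseccli -h 10.127.57.10 storagegroup -create -gname ServerA_SG
 naviseccli -h 10.127.57.10 storagegroup -connecthost -host ServerA -gname ServerA_SG
 # Present array LUN 6 to the host as HLU 0
 naviseccli -h 10.127.57.10 storagegroup -addhlu -gname ServerA_SG -hlu 0 -alu 6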

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 58
There are limits to the number of connections inside any VNX environment. Some of those limits are
directly related to Access Logix while others are hardware-related. The hardware limits generally
affect Access Logix and are covered here. No distinction is made between software and hardware
limits.
First, note that any host may be connected to only 1 storage group on any storage system. This does
not imply that only 1 host may be connected to a storage group; where clustering is involved, 2 or
more hosts may share the same storage group.
No host may be connected to more than 4 storage groups. This means that any host may not use
LUNs from more than 4 storage systems. There may be more storage systems in the environment,
and the host may even be zoned to make them visible at the Fibre Channel level, but connection to
storage groups should not be allowed for those storage systems.
There are also limits to the number of hosts that may attach to a storage system, and those limits
depend on the storage system type. Always consult the latest EMC VNX Open Systems Configuration
Guide for the updated limits. Storage groups are resident on a single storage system and may not
span storage systems. The number of LUNs contained in a storage group is also dependent on the
VNX model.
EMC recommends that any host connected to a VNX storage system have the host agent running. The
advantage to the user is that administration is easier – hosts are identified by hostname and IP
address rather than by WWN, and the host addressing of the LUN, e.g. c0t1d2, or H:, is available to
Unisphere.
If all users were allowed to make changes to the Access Logix configuration, security and privacy
issues would be a concern. With Unisphere, users must be authenticated and have the correct
privileges before any storage system configuration values may be changed. With legacy systems
running classic Navisphere CLI, the user must have an entry in the SP privileged user list to be
allowed to make configuration changes. This entry specifies both the username and the hostname,
which may be used for storage system configuration. If the Navisphere Secure CLI is used, then the
user must either have a Security File created, or must specify a username:password:scope
combination on the command line.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 59
When using Fibre Channel, access to the LUNs is controlled by an Access Control List (ACL )
which contains the 128-bit Globally Unique ID (UID) of the LUN, and the 128-bit Unique IDs
of the HBAs in the host. The HBA UID consists of the 64-bit World-Wide Node Name
(WWNN) followed by the 64-bit World-Wide Port Name (WWPN). The LUN UID is assigned to
the LUN when it is bound and includes time-related information. If the LUN is unbound and
an identical LUN bound again, they will have different UIDs.
Each request for LUN access references the ACL in order to determine whether or not a host
should be allowed access. If this meant that each request required access to the disk-based
Access Logix database, the lookups would slow the storage system significantly;
accordingly, the database is cached in SP memory (not in read or write cache) and
operations are fast.
Because of the disk-based nature of the database, it is persistent and will survive power
and SP failures. If an SP fails and is replaced, the new SP assumes the WWPNs of the failed
SP and no changes need be made to the database. If a host HBA fails and is replaced, and if
the replacement has a different WWN (which will be the case unless it can be changed by
means of software), then that host’s entry in the database will be incorrect. The information
for the old HBA needs to be removed from the database, and the information for the new
HBA needs to be entered. These processes are the de-registration and registration
processes respectively.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 60
Regardless of the type of access (FC or iSCSI), the LUN UID is used and has the same
characteristics as discussed in the previous slide.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 61
This lesson covers the procedure to create Pools, Pool LUNs, RAID Groups, and RAID Group
LUNs.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 62
Once you have determined the underlying architecture of your VNX storage, you can now
create LUNs that can be provisioned to hosts.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 63
The slide shows the Storage Configuration page from Unisphere. The page offers several
options for configuring the storage on the system. The primary method of configuring
storage is to use the Storage Pools option from the main Unisphere screen. Provisioning
wizards are also available from the right-side task pane. Whichever method is chosen, disks
in the system are grouped together in Pools or RAID Groups and LUNs are created from
their disk space. Storage configured from Pools uses Advanced Data Services such as FAST,
Thin and Thick LUNs, and Compression. The default storage configuration option is Pools.
LUNs created from Pools or RAID Groups are assigned as block devices to either the VNX
File or other block hosts.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 64
When creating Pools, each tier provides a RAID configuration drop-down and a number of
disks drop-down. The example shows a storage array with three available drive types
(Extreme Performance, Performance, and Capacity).
VNX systems allow administrators to mix different drive types within the same pool. The RAID
configuration for each tier is user selectable, with RAID 5 (default), RAID 6, and RAID 1/0 available.
The Scheduled Auto-Tiering box is visible only when the FAST enabler is installed and the
Pool radio button is selected. Select Scheduled Auto-Tiering to include this storage pool in
the auto-tiering schedule.
For the level of performance selection, multiple combo boxes will be shown (if there are
different disk types available), one for each disk type in the array such as Flash, SAS, or
NL-SAS. If a certain type of disk is not present in the array, the combo box for that type will
not be displayed.
The Advanced tab can be selected to configure Thresholds and FAST Cache. If the FAST
Cache enabler is present when the Pool is created, the Enabled box is selected by default,
and will apply to all LUNs in the pool. The Snapshots section will be available if the
Snapshot enabler is loaded. This option determines whether the system should monitor
space used in the pool and automatically delete snapshots if required. If enabled, the
specified thresholds indicate when to start deleting snapshots and when to stop, assuming
that the lower free space value can be achieved by using automatic deletion. If the lower
value cannot be reached, the system pauses the automatic delete operation.
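An equivalent pool creation can also be scripted with naviseccli; the SP address, disk IDs, RAID type, and pool name below are assumptions used only to sketch the command form:

    # Create a pool from five disks using RAID 5 (hypothetical disk IDs)
    naviseccli -h 10.127.50.10 storagepool -create -disks 0_0_4 0_0_5 0_0_6 0_0_7 0_0_8 -rtype r_5 -name "Block Pool"
    # Verify the result
    naviseccli -h 10.127.50.10 storagepool -list -name "Block Pool"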

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 65
The Storage Pool properties window is launched from the Storage > Storage Configuration
> Storage Pools page, by selecting the pool and clicking the properties button. The Storage
Pool properties window has 4 tabs with information about the selected pool: General, Disks,
Advanced, and Tiering.
The General tab includes information about the name and status of the pool, and its
physical and virtual capacities.
The Disks tab displays the physical disks that make up the storage pool. The pool can also
be expanded by using the Expand button on the page.
From the Advanced tab it is possible to set the Percent Full Threshold value, enable FAST
Cache, and configure the automatic deletion of old snapshots.
The Tiering tab displays capacity details for all disk tiers within the pool and reports the
status of any data relocation operations. It is visible only for VNX arrays with the FAST
enabler installed.
Detailed information is available for each tabbed page by accessing its Help button.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 66
Creating Pool LUNs can be done by navigating to Storage > LUNs from the Unisphere navigation bar; optionally,
the LUN can be created by right-clicking on the storage pool in which you want to create the LUN.
The General tab allows the user to create either a Pool LUN (default) or a RAID Group LUN using the appropriate
Storage Pool Type radio button. Thick and/or Thin LUNs can be created within the same pool; by default the Thin
option is checked. The user must uncheck the box to create a Thick (fully provisioned) LUN. By default, the LUN ID
is used for its name. A Name radio button option is provided for customized LUN naming. Note: a LUN name can
be changed after its creation from its Properties window.
Illustrated is the creation of a single 1 GB thin pool LUN from the Block Pool with a LUN ID of 10 and a
Mixed RAID Type.
The Advanced tab is shown with the default values for a Thin or Thick pool LUN with FAST enabler installed.
Pool LUNs by default, are automatically assigned (Auto radio button) to a specific SP unless the Default Owner is
changed.
FAST settings are shown if the FAST enabler is installed. Tiering Policies can be selected from the dropdown list and
are as follows:
Start High then Auto-Tier (recommended) — First sets the preferred tier for data relocation to the highest
performing disk drives with available space, then relocates the LUN's data based on the LUN's performance statistics
and the auto-tiering algorithm.
Auto-Tier — Sets the initial data placement to Optimized Pool and then relocates the LUN's data based on the LUN's
performance statistics and the auto-tiering algorithm.
Highest Available Tier — Sets the preferred tier for initial data placement and data relocation to the highest
performing disk drives with available space.
Lowest Available Tier — Sets the preferred tier for initial data placement and data relocation to the most cost
effective disk drives with available space.
Initial Data Placement - Displays tier setting that corresponds to the selection in the Tiering Policy drop-down list
box.
Data Movement - Displays whether or not data is dynamically moved between storage tiers in the pool.
Snapshot Auto-Delete Policy – This option enables or disables automatic deletion of snapshots on this LUN. The
default is enabled.
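A comparable thin pool LUN can also be created from the CLI. The sketch below mirrors the slide's example (1 GB thin LUN, ID 10, pool "Block Pool"), but the SP address and LUN name are assumptions:

    # Create a 1 GB thin LUN with LUN ID 10 in the pool "Block Pool"
    naviseccli -h 10.127.50.10 lun -create -type Thin -capacity 1 -sq gb -poolName "Block Pool" -l 10 -name "ThinLUN_10"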

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 67
From the Storage > Storage Configuration > Storage Pools page, select the LUN from the Details
section and either click the properties button or right-click over the LUN and select properties.
Optionally the LUN properties window can be launched by selecting the Storage > LUNs page and
hitting the Properties button with the LUN selected. The Pool LUN Properties window is displayed with
all the information about the selected LUN.
The Pool LUN Properties window is comprised of several tabs: General, Tiering, Statistics, Hosts,
Folders, Compression, Snapshots, and Deduplication.
The General tab displays the current configuration of the selected LUN, its ID information and current
state, the pool name and capacity information of the LUN, and its ownership information.
The Tiering tab displays the FAST VP Tiering Policy settings and allows it to be changed for the LUN.
The Statistics tab displays the LUN statistics if statistics logging is enabled for the SP that owns the
LUN.
The Hosts tab displays information about the host or hosts connected to the LUN; the host name, IP
address, OS, and the logical and physical device naming of the LUN. For virtualized environments, the
virtual machine components that use the LUN are also listed.
The Folders tab displays a list of folders to which the LUN belongs.
The Compression tab lets the user turn the feature on or off, pause or resume the compression and
change the compression rate.
The Snapshots tab displays information for created VNX snapshots and allows the user to create
Snapshots and Snapshot Mount Points.
The Deduplication tab allows the feature to be turned on or off and displays deduplication details.
Detailed information is available for each tabbed page by accessing its Help button.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 68
To configure a RAID group storage pool, select Storage > Storage Configuration > Storage
Pools from the Unisphere navigation bar.
By default the Pool tab is highlighted, so users must select the RAID Groups tab and then
Create to launch the Create Storage Pool window. Optionally, the user can click the Create
button from the Pool tab and select the RAID Group option in the Create Storage Pool window.
Users configure the RAID group parameters for Storage Pool ID, RAID Type, and the
Number of Disks using the dropdown menus available from the General tab.
By default, Unisphere selects the disks that will be used in the group automatically as
shown by the Automatic radio button. However, users have the option to select the disks
manually if needed.
Note the checkbox for Use Power Saving Eligible Disks, which is not present when creating a
Pool.
The RAID Group Advanced tab contains two configurable parameters. Users can select the
Allow Power Savings parameters for the RAID Group if the group contains eligible disks. It
is also possible to set the stripe element size of the LUN to either Default (128) or High
Bandwidth Reads (1024). Some read intensive applications may benefit from a larger stripe
element size. The high bandwidth stripe element size is available for RAID 5 RAID groups
that contain 5 disks (4+1). The High Bandwidth Reads option is not supported on Flash
drives.
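As a hedged CLI sketch (the SP address, RAID Group ID, and disk IDs are assumptions), a RAID Group can also be created with the classic createrg command; note that with this command the RAID type is fixed later, when the first LUN is bound:

    # Create RAID Group 1 from five disks (hypothetical disk IDs)
    naviseccli -h 10.127.50.10 createrg 1 0_0_10 0_0_11 0_0_12 0_0_13 0_0_14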

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 69
The RAID Group properties window is launched from the Storage > Storage Configuration >
Storage Pools > RAID Group page, by selecting the RAID Group and clicking the properties
button. The Storage Pool properties window has 3 tabs with information about the selected
RAID Group: General, Disks, and Partitions.
The General tab displays the RAID Group information and various capacity information.
The Disks tab shows the physical disks which make up the RAID Group, as well as its current
state.
The Partitions tab displays a graphic representation of bound and unbound contiguous
space associated with the RAID Group.
Detailed information is available for each tabbed page by accessing its Help button.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 70
Creating RAID Group LUNs can be done by navigating to Storage > LUNs from the Unisphere
navigation bar and clicking on the Create button. Optionally, the LUN can be created by right clicking
on the storage pool in which you want to create the LUN and selecting Create LUN.
The Create LUN window General tab is launched with the RAID Group radio button selected. The
window lets users create one or more LUNs of a specified size within a selected RAID group.
The Capacity section displays the available and consumed space of the selected RAID group for the
new LUN. The user can then use the information to define the size and number of LUNs to create.
Optionally, the user can define a name (prefix) and ID to associate with the LUN.
Some or all LUN properties display N/A if the storage system is unsupported or if the RAID type of the
LUN is RAID 1, Disk, or Hot Spare. If the LUNs you are creating reside on a storage system connected
to a VMware ESX server, and these LUNs will be used with layered applications such as SnapView,
configure the LUNs as raw device mapping volumes set to physical compatibility mode.
The Advanced tab options for a RAID Group LUN are also shown here.
Use SP Write Cache: Enables (default) or disables write caching for this LUN. For write caching to
occur, the storage system must have two SPs and each must have adequate memory capacity. Also,
write caching must be enabled for the storage system.
FAST Cache: Enables or disables the FAST Cache for this LUN. Displayed only if the FAST Cache
enabler is installed. If you enable the FAST Cache for Flash disk LUNs, the software displays a warning
message. You should disable the FAST Cache for write intent log LUNs and Clone Private LUNs.
Enabling the FAST Cache for these LUNs is a suboptimal use of the FAST Cache and may degrade the
cache's performance for other LUNs.
No Initial Verify: When unchecked, performs a background verification.
Default Owner: Sets the SP that is the default owner of the LUN. The default owner has control of the
LUN when the storage system is powered up: SP A, SP B and Auto.
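A classic LUN can likewise be bound from the CLI; the SP address, RAID type, LUN ID, RAID Group, size, and default owner in this sketch are illustrative assumptions:

    # Bind a 10 GB RAID 5 LUN with LUN ID 20 on RAID Group 1, with SP A as the default owner
    naviseccli -h 10.127.50.10 bind r5 20 -rg 1 -cap 10 -sq gb -sp a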

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 71
From the Storage > Storage Configuration > Storage Pools page, RAID Groups tab, select
the RAID Group and its related LUN from the Details section. Then either click the
properties button or right-click over the LUN and select properties. Optionally, the LUN
properties window can be launched by selecting the Storage > LUNs page and hitting the
Properties button with the LUN selected. The RAID Group LUN Properties window is displayed
with all the information about the selected LUN.
The RAID Group LUN Properties window is comprised of several tabs: General, Cache,
Statistics, Hosts, Disks, Folders, and Compression.
The General tab displays the current configuration of the selected LUN, its ID information
and current state, the RAID Group, element size and capacity information of the LUN, and
its ownership information.
The Cache tab displays information about the LUN cache settings and allows the user to
enable or disable write caching for the selected LUN.
The Statistics tab displays the LUN statistics if statistics logging is enabled for the SP that
owns the LUN.
The Hosts tab displays information about the host or hosts connected to the LUN; the host
name, IP address, OS, and the logical and physical device naming of the LUN. For
virtualized environments, the virtual machine components that use the LUN are also listed.
The Disks tab lets the user view information about the disks on which the LUN is bound.
The Folders tab displays a list of folders to which the LUN belongs.
The Compression tab lets the user turn the feature on; enabling compression will migrate the
RAID Group LUN to a Pool LUN.
Detailed information is available for each tabbed page by accessing its Help button.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 72
Having created LUNs from either a Pool or RAID Group, the next step is to provision the
storage to a host. This operation is effectively configuring Access Logix on the VNX where a
registered host or hosts will be granted access to the created LUN or LUNs.
The provisioning operation starts with the VNX Storage Group and is accessed by navigating
to Hosts > Storage Groups from the Unisphere navigation bar. A wizard is available from
the right-side task pane to provision storage to block hosts.
The ~filestorage Storage Group is always present on the VNX and is the Storage Group
defined for VNX File. Additional Storage Groups can be created for the registered block
hosts by clicking the Create button.
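A Storage Group can also be created from the CLI; the SP address and group name below are assumptions used only to show the command form:

    # Create an empty Storage Group for a block host
    naviseccli -h 10.127.50.10 storagegroup -create -gname "Host1_SG"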

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 73
The configuration of a Storage Group is a two-part task that involves adding created LUNs
and registered hosts. Each task is performed from its own tabbed page.
Adding LUNs is done from the LUNs tab by selecting LUNs from an Available LUNs tree and
clicking the Add button. LUNs can be selected by expanding tree objects to view the LUNs.
It is possible to multi-select LUNs within the tree. The LUNs will be added to the Selected
LUNs section of the page where the Storage Group HLU value can be optionally set. This
option is exposed by moving the mouse to the Host LUN ID area for a selected LUN and
clicking the right-mouse button.
Adding registered hosts is done from the Hosts tab by selecting a host or hosts from a list
of available hosts and clicking the right arrow button.
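The same two-part configuration can be sketched with naviseccli; the SP address, LUN ID, HLU value, host name, and group name below are assumptions:

    # Add LUN 10 to the Storage Group, presenting it to the host as HLU 0
    naviseccli -h 10.127.50.10 storagegroup -addhlu -gname "Host1_SG" -hlu 0 -alu 10
    # Connect a registered host to the Storage Group (-o suppresses the confirmation prompt)
    naviseccli -h 10.127.50.10 storagegroup -connecthost -host host1 -gname "Host1_SG" -o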

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 74
The Properties page of a Storage Group will display its current configuration and is
presented by navigating in Unisphere to Hosts > Storage Groups and selecting a specific
storage group. The window is comprised of three tabs: General, LUNs and Hosts.
The General tab displays the current configuration of the selected Storage Group, its WWN
and name.
The LUNs tab will present an Available LUNs tree that can be expanded to select additional
LUNs to add to the Storage Group. The Selected LUNs section displays the current LUNs
configured into the Storage Group. LUNs can be selected from this section for removal from
the Storage Group.
The Hosts tab displays the list of available registered hosts that can be added to the Storage
Group as well as listing the current configured registered hosts for the Storage Group. The
configured hosts can be selected for removal from the Storage Group.
Detailed information is available for each tabbed page by accessing its Help button.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 75
When storage has been provisioned to a host, the host needs to discover the new block
storage presented to it. This is typically done by a SCSI rescan on the host and is performed
differently for different host OSs. Illustrated here is how VNX File discovers
provisioned storage. Block host storage discovery specifics are covered in the appropriate
block host connectivity guide.
To discover newly provisioned storage to VNX File, in Unisphere select Storage from the
navigation bar. On the right-side task pane, scroll down to the File Storage section and click
the Rescan Storage Systems link. This initiates a SCSI bus rescan for each Data Mover in
the VNX system. It is a long running task and you can verify its completion by navigating to
System > Background Tasks for File. When complete, the newly discovered storage will
automatically be configured as disk volumes by the volume manager within VNX File.
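As a hedged alternative, the rescan can typically be initiated from the Control Station command line; the Data Mover name below is an assumption:

    # Probe and save newly presented SCSI devices for Data Mover server_2
    server_devconfig server_2 -create -scsi -all
    # List the resulting disk volumes
    nas_disk -list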

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 76
With the storage LUNs now provisioned to the hosts within the Storage Group, the next
step is to ready the hosts to use the storage for applications and users.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 77
This instructor-performed demonstration covers the provisioning of storage. Disks from the
VNX are used to create two Pools; one pool that will be used for creating Thick LUNs and be
provisioned for File storage, and another pool for creating Thick and Thin LUNs for other
block hosts. Disks from the VNX are also used to create RAID Groups for creating Classic
LUNs. LUNs created will be assigned for use with advanced storage features in lab exercises
later in this course.
Reference videos of the storage provisioning are also included and can be used for future
reference. To launch the videos, use the following URLs:

Creating Pools and Pool LUNs

https://edutube.emc.com/Player.aspx?vno=L1l3uClTNZmFAX7HP2O+Qg==&autoplay=true

Creating a RAID Group and Classic LUNs

https://edutube.emc.com/Player.aspx?vno=s/dAs/D/VgiDpC03OlBN6Q==&autoplay=true

Provisioning Storage for File

https://edutube.emc.com/Player.aspx?vno=sYF/frALloGIYd4ArmccXg==&autoplay=true

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 78
This lesson covers the steps taken at a connected host to ready the provisioned storage for
use by applications or users. We will discuss these tasks for the Windows, Linux, and ESXi
operating systems.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 79
The slide outlines the steps to ready storage at the Windows host.
Once LUNs are created, add the LUNs to a Storage Group and connect the Windows host to
the Storage Group:
1. Align the data (if necessary)
2. Use Disk Management on the Windows host to discover LUNs
3. Initialize devices
4. Assign drive letters
5. Format drives
6. Write data

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 80
Data alignment refers to the alignment of user data on the LUN.
Misalignment of data affects both reads and writes. Additional I/O caused by disk crossings
slows down the overall system, while multiple accesses to disk make individual I/Os slower.
In the case of cached writes, the misalignment causes slower flushing, which leads to
reduced performance.
Note that it is not Windows itself that causes this issue, but the Intel architecture’s use of a
Master Boot Record at the beginning of the disk. Linux systems (as well as VMware, etc) on
the Intel architecture will be similarly affected.
Note that Windows 2008 and 2012 automatically align partitions at the 1 MB boundary, so
no user intervention is required. For other Windows versions, alignment at the 1 MB
boundary is recommended (see Microsoft KB Article 929491). Windows Server 2003, which
was the last Windows version requiring data alignment, is scheduled to reach End of Service
Life on July 14, 2015.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 81
If it is necessary to align a Windows host, you can do this using the Microsoft utility
‘diskpart’ which is available as part of the Windows OS.
Diskpart allows the size of the reserved area to be selected manually. The Microsoft-
recommended value of 2,048 blocks (1 MB) ensures alignment of the Windows file system
blocks and the VNX 64 KB elements. This offset value causes user data to be written from
the beginning of the first element of the fifth stripe. This is acceptable for random and
sequential I/O.
The assumption made in the diagram is that the NTFS partition has been formatted with 4
KB blocks, though the mechanism is valid for 8 KB or even 64 KB blocks. This block size fits
evenly into a VNX 64 KB element. Diskpart should be used on the raw partition before
formatting; it will destroy all data on a formatted partition.
The use of diskpart allows the start of the raw LUN and the start of the partition to be on 64
KB boundaries. Host accesses as well as VNX Replication Software access will be aligned to
the physical disk element.
VNX allows the configuration of a LUN Offset value when binding striped LUNs (RAID-0,
RAID-3, RAID-1/0, RAID-5, RAID-6). The selection of any value other than 0 causes an
extra stripe to be bound on the LUN. The ‘logical start’ of the LUN is marked as being ‘value’
blocks from the end of the ‘-1’ stripe (63 blocks in a W2K/W2K3 environment). The OS
writes metadata in that space, and application data will be written at the start of stripe 0,
and will therefore be aligned with the LUN elements. Performance will be appreciably better
than with misaligned data.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 82
Use these steps to align data with diskpart:
• Use the select disk command to set the focus to the disk that has the specified
Microsoft Windows NT disk number. The example shows disk 5 has been selected. If
you do not specify a disk number, the command displays the current disk that is in
focus.
• The list disk command provides summary information about each disk that is
installed on the computer.
• The list volume command displays the volumes on the physical disk (disk 5).
• The list partition command can be used to display the disk partitions; the example
shows that the disk has been partitioned with an offset of 1024 KB (create partition
primary align=1024).

Notes
 When you type this command, you may receive a message that resembles the
following: DiskPart succeeded in creating the specified partition.
 The align= number parameter is typically used together with hardware RAID
Logical Unit Numbers (LUNs) to improve performance when the logical units are
not cylinder aligned. This parameter aligns a primary partition that is not cylinder
aligned at the beginning of a disk and then rounds the offset to the closest
alignment boundary. number is the number of kilobytes (KB) from the beginning
of the disk to the closest alignment boundary. The command fails if the primary
partition is not at the beginning of the disk. If you use the command together
with the offset=number option, the offset is within the first usable cylinder on
the disk.
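A minimal diskpart session illustrating these steps might look like the following; the disk number matches the slide's example, and the session should only be run against a raw, unformatted disk:

    C:\> diskpart
    DISKPART> list disk                            (summary of all disks on the host)
    DISKPART> select disk 5                        (set focus to disk 5)
    DISKPART> create partition primary align=1024  (align the partition at the 1 MB boundary)
    DISKPART> list partition                       (confirm the 1024 KB offset)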

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 83
Once the VNX Storage Group is created with attached LUNs and connected hosts, the
Windows Server Manager utility can be used to scan for new disks, and to initialize and
format the disks for use.
This example shows a Windows 2012 host running the Server Manager utility. The utility is
launched from the Server Manager icon on the bottom task bar. Once in Server Manager,
navigate to File and Storage Services > Disks. If your new disks do not appear, click the
Tasks pull-down menu and choose Rescan Storage.
This will search for new disks and present them in the Disks window. The example shows
the LUN (Disk 6) which we added to the Storage Group previously. As you can see, it is
showing as offline and unknown, so users will need to initialize the disk, assign a drive
letter, and format the disk. Right-click on the desired disk, choose Bring Online, and then
click Yes in the Bring Disk Online window.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 84
To create a new volume on a disk, right-click on the disk that was just brought online and
choose New Volume. This will launch the New Volume wizard.
The wizard will walk the user through several steps after selecting Next.
The wizard asks the user to bring the disk online and initialize it, and prompts for the size of
the new volume, the drive letter, and the file system type. Once these steps are completed, it
asks you to confirm your settings; if they are correct, click Create to create the new volume.
As you can see, once the volume is created, Disk 6 is now online and partitioned.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 85
The slide details the steps to ready storage for use on Linux hosts.
Once LUNs are created, add the LUNs to a Storage Group and connect the Linux host to
the Storage Group. Scan to discover the LUNs, align the data (if necessary), partition the
volume, and create and mount the file system for use.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 86
After assigning LUNs and Linux hosts to Storage Groups, the hosts with the newly assigned
LUNs will need to rescan their SCSI bus to recognize the new devices. Linux provides
multiple mechanisms to rescan the SCSI bus and recognize SCSI devices presented to the
system. In the 2.4 kernel solutions, these mechanisms were generally disruptive to the I/O
since the dynamic LUN scanning mechanisms were not consistent.
With the 2.6 kernel, significant improvements have been made and dynamic LUN scanning
mechanisms are available. Linux currently lacks a kernel command that allows for a
dynamic SCSI channel reconfiguration like drvconfig or ioscan.
The mechanisms for reconfiguring devices on a Linux host include:
• System reboot: Most reliable way to detect newly added devices
• Unloading and reloading the modular HBA driver: Host Bus Adapter driver in
the 2.6 kernel exports the scan function to the /sys directory
• Echoing the SCSI device list in /proc
• Executing a SCSI scan function through attributes exposed to /sys
• Executing a SCSI scan function through HBA vendor scripts
EMC recommends that all I/O on the SCSI devices should be quiesced prior to attempting to
rescan the SCSI bus.
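On a 2.6 kernel host, a hedged example of the /sys-based rescan looks like the following; the host number (host0) is an assumption and should match the HBA being rescanned:

    # Rescan all channels, targets, and LUNs on SCSI host 0
    echo "- - -" > /sys/class/scsi_host/host0/scan
    # For Fibre Channel HBAs, a LIP can be issued instead
    echo "1" > /sys/class/fc_host/host0/issue_lip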

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 87
After a bus rescan or reboot, verify the LUN is available to the OS by viewing the scsi file
in the /proc/scsi directory.
As you can see from the Storage Groups view in Unisphere, the LUN that was assigned to
the group has a host LUN ID of 60, and when looking at /proc/scsi/scsi you can see that
LUN 60 is now available.
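An illustrative (not verbatim) entry for a VNX LUN in /proc/scsi/scsi looks similar to the following; VNX LUNs report the SCSI vendor string DGC, while the host number, model, and revision shown here are assumptions:

    $ cat /proc/scsi/scsi
    Host: scsi3 Channel: 00 Id: 00 Lun: 60
      Vendor: DGC      Model: VRAID            Rev: 0532
      Type:   Direct-Access                    ANSI SCSI revision: 04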

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 88
Here we see an example of how to use fdisk on Linux to align the first (and only) partition
of the /dev/emcpowerj1 device.
The starting logical block for the partition has been moved from 62 to 128.
After the change is made, the partition table needs to be written (with the final “w”
command). At that point, the partition can be put into production, usually via addition to a
volume group, or alternatively by building a filesystem directly on it.
To create an aligned file system larger than 2 TB, use the GUID Partition Table (GPT). GPT
provides a more flexible way of partitioning than the older Master Boot Record (MBR).
The GPT does not require any alignment value. A protective MBR entry occupies the first
sector of the disk, followed by the Primary GPT Header, the actual beginning of the GPT.
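A hedged sketch of both approaches follows; the device name matches the slide's example, and the keystrokes and partition values are assumptions to show the general flow:

    # MBR: align the first partition of the PowerPath device with fdisk expert mode
    fdisk /dev/emcpowerj    # n = new primary partition, then x = expert mode,
                            # b = move beginning of data to sector 128, w = write the table
    # GPT: for LUNs larger than 2 TB, parted creates an aligned partition directly
    parted /dev/emcpowerj mklabel gpt
    parted /dev/emcpowerj mkpart primary 1MiB 100%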

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 89
For Linux, the final steps in the process are to create and mount a file system to the Linux
host. The example shows the mkfs command to make a file system on an emcpower device
partition and then create and mount the file system using the mkdir and mount commands.
Once the file system is created and mounted, the touch command can be used to write a
data file to the device.
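A minimal sketch of these final steps, assuming the aligned partition /dev/emcpowerj1 and an ext3 file system (the mount point and file name are assumptions):

    # Create the file system on the aligned partition
    mkfs -t ext3 /dev/emcpowerj1
    # Create a mount point and mount the new file system
    mkdir /mnt/vnx_lun
    mount /dev/emcpowerj1 /mnt/vnx_lun
    # Write a test file to confirm the storage is usable
    touch /mnt/vnx_lun/testfile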

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 90
The phase of readying the host to use storage is somewhat different in virtualized
environments. The step to be performed by the hypervisor, in this case vSphere, is to scan
for storage so that it can then be assigned to the virtual machine.
Although these initial steps are brief, there is still much that can be done to manage the
connected storage. This will be discussed in the following module.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 91
After the LUNs and ESXi hosts are added to the Storage Group, perform a rescan from the
vSphere client Configuration > Storage Adapters page to make the LUNs visible to the ESXi
host. Once on the configuration pane, right-click the HBA the LUNs were assigned to and
choose Rescan. Once the rescan is completed, the LUNs will appear in the details pane as
seen on the slide. You can also use the Rescan All link at the top right of the page. If this
link is selected, it will automatically rescan all adapters that are present.
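On ESXi 5.x the same rescan can typically be driven from the host command line; the adapter name below is an assumption:

    # Rescan every storage adapter on the host
    esxcli storage core adapter rescan --all
    # Or rescan a single adapter
    esxcli storage core adapter rescan --adapter vmhba2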

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 92
Here is a list with functions and utilities to use when dealing with ESXi servers. The use of
these functions and utilities is optional; they are listed for reference only.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 93
Now that we can integrate hosts with VNX storage at a basic level, we can move on to
discuss other aspects of VNX storage.
Path management includes planning for path failures, load balancing across data paths, and
working with host-based utilities.
Advanced features can be employed. These include migrating data, FAST VP, and FAST
Cache.
VNX provides options for block data replication, VNX Snapshot and SnapView.
There are also full-service File capabilities for NAS environments to provide high availability
and file-level replication features, such as SnapSure.
As we move forward in this course, we will discuss these topics.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 94
This module covered the three major stages of integrating hosts with VNX storage.
Storage networking involves options for FC or iSCSI, each with some requirements. FC
networking requires the host HBA and VNX Front End ports to be zoned together. Once
hosts can access the array, we need to confirm in Unisphere that the hosts are registered.
Provisioning storage involves choosing storage media types as well as selecting between
Storage Pools or traditional RAID Groups. LUNs configured on each of these are then added
to the Storage Group along with the registered host.
Once the storage has been provisioned to the host, some additional tasks may be required
to ready the storage at the host. These tasks include bringing the volume online and
creating a usable file system. Older systems may also require disk alignment.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 95
This lab covers the storage configuration of the VNX. The exercise verifies the pre-
configured storage of the lab VNX: the storage provisioned for VNX File and the
configuration of Pools and RAID Groups. Classic LUNs are created from the pre-configured
RAID Groups. Thick and Thin LUNs are created from a pre-configured pool. Finally, the
system's hot spare policy is verified.

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 96
This lab covered storage configuration of the VNX. It verified the pre-provisioned storage
for File and the pre-configured RAID Groups and Pools. The creation of Classic, Thick, and
Thin LUNs was performed, and the system's hot spare policy was verified.
Please discuss as a group your experience with the lab exercise. Were there any issues or
problems encountered in doing the lab exercise? Are there relevant use cases that the lab
exercise objectives could apply to? What are some of the concerns relating to the lab
subject?

Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 97
Copyright 2015 EMC Corporation. All rights reserved. Module: Host Integration to Block Storage 98
