EMC VNX Data
SAN storage manufacturers include EMC, NetApp, HP, Hitachi, IBM, Dell, Tintri, Oracle, and others.
Storage Terminology
The following terms are used throughout the storage platform.
LUN: Logical Unit Number. A LUN is a slice of space carved from one or more drives.
RAID Group: A collection of up to 16 drives of the same drive type, from which LUNs are created.
Storage Pool: A collection of drives, of the same or different drive types, from which LUNs are created.
Masking: Controls which hosts a particular LUN is visible to; a masked LUN is visible only to the Storage Group(s)/host(s) it has been assigned to.
LUN Masking & LUN Mapping
Masking refers to making a LUN visible to some servers and not visible to others.
Mapping refers to the assignment of a number to a LUN. It can then be presented to a host.
Storage Group: In practice it corresponds to a host. A Storage Group is a collection of one or more LUNs (or metaLUNs) to which you connect one or more servers.
Meta LUN: The metaLUN feature allows traditional LUNs to be aggregated in order to increase the size or performance of the base LUN. A LUN is expanded by adding other LUNs to it. The LUNs that make up a metaLUN are called meta members, and the base LUN is known as the meta head. Up to 255 meta members can be added to one meta head (256 LUNs in total).
Access Logix: Access Logix provides LUN masking, which allows the storage system to be shared safely among multiple hosts.
PSM: The Persistent Storage Manager (PSM) LUN stores configuration information about the VNX/Clariion, such as disks, RAID groups, LUNs, Access Logix information, and the SnapView, MirrorView, and SAN Copy configurations.
Migration: Storage migration moves storage from one location to another without interrupting the workload of the virtual machine if it is running. You can also use storage migration to move, service, or upgrade storage resources, or to migrate the storage of a standalone or clustered virtual machine.
Archiving: Data archiving is the process of moving data that is no longer actively used to a separate
storage device for long-term retention. Archive data consists of older data that is still important to the
organization and may be needed for future reference, as well as data that must be retained for
regulatory compliance.
Description:
How to provision storage from VNX Block? / How to assign LUN to Host and Storage Group from VNX?
Resolution:
Follow the steps below to perform storage provisioning from VNX (Allocate LUN from VNX to a particular Host):
1. Make sure the initiators are registered and logged in to the VNX, and that the Failover Mode and Initiator Type are set correctly.
2. Make sure LUNs have been created, from either a RAID Group or a Storage Pool.
3. Log in to Unisphere and go to Hosts > Storage Groups. Click the Create button to create a Storage Group.
4. Provide a Storage Group name and verify that the Storage System on which you want to create the Storage Group is correct.
5. Once you click Apply, click Yes to confirm the operation.
6. A new pop-up asks whether you wish to add LUNs or connect hosts. Click Yes if you want to add them at the same time; otherwise click No, as the same can be done later using Connect LUNs, Connect Hosts, or the Properties tab.
7. If you clicked Yes, select the LUNs you wish to present to the server in the Available LUNs window and click Add.
8. Once the LUN is visible in the Selected LUNs list, under the Host LUN ID (HLU) column, select the HLU ID you wish to assign to the server.
9. In the Hosts tab of the same Storage Group Properties window, select the host from the Available Hosts list on the left side and move it to the Hosts to be Connected window on the right side.
10. Click Apply and OK. The new Storage Group will be visible under the Storage Groups list. The Properties tab can be used to make any changes later if required.
The same can be done using the Command Line Interface (CLI) as well; a sketch of the equivalent commands follows.
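Below is a minimal, hedged naviseccli sketch of the same provisioning flow. The SP address (10.0.0.1), Storage Group name, host name, and LUN/HLU numbers are placeholders, and exact options can vary by FLARE/Unisphere release, so verify against your array's documentation.
# 1. Create the Storage Group
naviseccli -h 10.0.0.1 storagegroup -create -gname Host1_SG
# 2. Connect the registered host to the Storage Group (-o suppresses the confirmation prompt)
naviseccli -h 10.0.0.1 storagegroup -connecthost -host Host1 -gname Host1_SG -o
# 3. Add array LUN 20 (-alu) to the group and present it to the host as Host LUN ID 0 (-hlu)
naviseccli -h 10.0.0.1 storagegroup -addhlu -gname Host1_SG -hlu 0 -alu 20
# 4. Verify the Storage Group contents
naviseccli -h 10.0.0.1 storagegroup -list -gname Host1_SG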
Note:
If a LUN is already assigned to another server and you want to share that LUN to the current server, select "All"
under the drop-down list in the Show LUNs tab so that the LUN is visible for assignment.
A host can be part of only one Storage Group at any given time.
The following explains how Read and Write Cache function within the Clariion.
The Diagram on the left explains Read Caching (Only One SP involved in
this process)
Step A: The host is requesting data from the active path to the Clariion.
Step B: If the data is in Cache, the data is sent over to the host.
Step Aa: This step comes into the picture if the requested data is not in cache; the data is now requested from the disk.
Step Ab: The data is read from the disk into cache, and Step B is performed, completing the request.
The Diagram on the right explains Write Caching (both SPs are involved in this process)
Step A: The host writes data to the disk (LUN) through the active path between the host and the Clariion. The data is written to cache.
Step B: The data now in cache (for example, on SPA) is copied over to the cache of SPB using the Clariion Messaging Interface (CMI).
Step C: At this point an acknowledgement is sent to the host that the write is now complete.
Step D: Using cache-flushing techniques, the data is written to the Clariion disk (LUN).
Zoning can be done in seven simple steps; the pictorial diagram below illustrates the full flow.
[Diagram: Steps to perform zoning]
Zoning steps:
In the first steps, aliases for the WWNs are created with alicreate (for example, alicreate "storage_hba","22:22:22:22:22:22:22:22") and the zone itself is created with zonecreate. Then:
4. Check whether an active configuration is present by using the command cfgactvshow.
5. If an active configuration already exists, add the zone to it by using the command cfgadd.
6. If there is no active configuration, create a new one by using the command cfgcreate.
7. Save the configuration by using the command cfgsave. A complete command sketch follows below.
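Putting the commands together, here is a hedged end-to-end zoning sketch for a Brocade switch. The alias names, WWPNs, zone name, and configuration name are placeholders; adapt them to your fabric.
# Create aliases for the host HBA and storage port WWPNs
alicreate "host_hba","11:11:11:11:11:11:11:11"
alicreate "storage_hba","22:22:22:22:22:22:22:22"
# Create a zone containing both aliases
zonecreate "host_storage_zone","host_hba; storage_hba"
# Check whether an active configuration already exists
cfgactvshow
# If an active configuration exists, add the zone to it:
cfgadd "prod_cfg","host_storage_zone"
# If not, create a new configuration containing the zone:
cfgcreate "prod_cfg","host_storage_zone"
# Save the defined configuration and enable it
cfgsave
cfgenable "prod_cfg"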
Brocade switches support both a web interface and a CLI; the table below lists some, but not all, of the CLI commands.
help                   prints available commands
switchdisable          disables the switch
switchenable           enables the switch
licensehelp            license commands
diaghelp               diagnostic commands
configure              change switch parameters (BB credits, etc.)
diagshow               POST results since last boot
routehelp              routing commands
switchshow             displays switch information (normally the first command run to obtain the switch configuration)
supportshow            full detailed switch information
portshow               displays port information
nsshow                 name server contents
nsallshow              name server contents for the full fabric
fabricshow             fabric information
version                firmware code revision
reboot                 full reboot with POST
fastboot               reboot without POST
zonecreate (zone)      creates a zone
zoneshow               shows defined and effective zones and configurations
zoneadd                adds a member to a zone
zoneremove             removes a member from a zone
zonedelete             deletes a zone
cfgcreate (zoneset)    creates a zone set configuration
cfgadd                 adds a zone to a zone configuration
cfgshow                displays the zoning information
cfgenable              enables a zone set
cfgsave                saves the defined configuration to all switches in the fabric across reboots
cfgremove              removes a zone from a zone configuration
cfgdelete              deletes a zone configuration
cfgclear               clears all zoning information (the effective configuration must be disabled first)
cfgdisable             disables the effective zone set
Caching
From the chart above, the amount of Cache that a Clariion contains is based on the model.
Read Caching
First, we will describe what happens when a host issues a request for data from the Clariion.
1. The host issues the request for data to the Storage Processor that owns the LUN. If that data is sitting in cache on the Storage Processor, it is sent straight back to the host.
If, however, the data is not in cache, the Storage Processor must go to disk to retrieve the data (Step 1 ½). It reads the data from the LUN into the read cache of the owning Storage Processor (Step 1 ¾) before it sends the data to the host.
Write Caching
1. The host writes a block of data to the LUN’s owning Storage Processor.
2. The Storage Processor MIRRORs that data to the other Storage Processor.
3. The owning Storage Processor then sends the acknowledgement back to the host that the data is “on disk.”
4. At a later time, the data will be “flushed” from Cache on the SP out to the LUN.
Why does Write Cache MIRROR the data to the other Storage Processor before it sends the
acknowledgement back to the host?
This is done to ensure that both Storage Processors have the data in Cache in the event of an SP failure.
Let’s say that the owning Storage Processor crashed (again, never happens). If that data was not written to
the other Storage Processor’s Cache, that data would be lost. But, because it was written to the other SP
Cache, that Storage Processor can now write that data out to the LUN.
This MIRRORing of Write Cache is done through the CMI (Clariion Messaging Interface) Channel which
lives on the Clariion.
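To see how cache is configured on a given array, the cache settings can be checked (and, if needed, adjusted) from the CLI. This is a minimal, hedged sketch; the SP address is a placeholder and flag support varies by FLARE release, so verify before use.
# Display current read/write cache configuration and state
naviseccli -h 10.0.0.1 getcache
# Enable write caching (and with it, CMI mirroring of writes); 1 = enable, 0 = disable
naviseccli -h 10.0.0.1 setcache -wc 1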
Vault Drives
All Clariions have Vault Drives. They are the first five (5) disks in all Clariions: disks 0_0_0 through 0_0_4. The Vault drives on the Clariion contain internal information that is pre-configured before you start putting data on the Clariion. The diagram shows what information is stored on the Vault disks.
The Vault.
The vault is a ‘save area’ across the first five disks to store write cache from the Storage Processors in the
event of a Power Failure to the Clariion, or a Storage Processor Failure. The goal here is to place write
cache on disk before the Clariion powers off, therefore ensuring that you don’t lose the data that was
committed to the Clariion and acknowledged to the host. The Clariions have the Standby Power Supplies
that will keep the Storage Processors running as well as the first enclosure of disks in the event of a power
failure. If there is a Storage Processor Failure, the Clariion will go into a ‘panic’ mode and fear that it may
lose the other Storage Processor. To ensure that it does not lose write cache data, the Clariion will also
dump write cache to the Vault Drives.
The above slide illustrates the concept of creating a RAID Group and the supported RAID types of the
Clariions.
RAID Groups
The concept of a RAID Group on a Clariion is to group together a number of disks on the Clariion into one big group. Let’s say that we need a 1 TB LUN. The disks we have are 200 GB in size. We would have to group together five (5) disks to get to the 1 TB size needed for the LUN. I know we haven’t taken into account parity or the raw capacity of a drive, but that is just a very basic idea of what we mean by a RAID Group. RAID Groups also allow you to configure the Clariion in a way so that you know which LUNs, applications, etc. live on which set of disks in the back of the Clariion. For instance, you wouldn’t want an Oracle Database LUN on the same RAID Group (disks) as a SQL Database running on the same Clariion. This allows you to create a RAID Group of a number of disks for the Oracle Database, and another RAID Group of a different set of disks for the SQL Database.
RAID Types
RAID 0 – Striping Data with NO Data Protection. The Clariions Cache will write the data out to disk in
blocks (chunks) that we will discuss later. For RAID 0, the Clariion writes/stripes the data across all of the
disks in the RAID Group. This is fantastic for performance, but if one of the disks in the RAID 0 Group fails, then the data will be lost because there is no protection of that data (i.e. mirroring, parity).
RAID 1 – Mirroring. The Clariion will write the Data out to the first disk in the RAID Group, and write the
exact data to another disk in that RAID 1 Group. This is great in terms of data protection because if you
were to lose the data disk, the mirror would have the exact copy of the data disk, allowing the user to
access the disk.
RAID 1_0 – Mirroring and Striping Data. This is the best of both worlds if set up properly. This type of
RAID Group will allow the Clariion to stripe data and mirror the data onto other disks. However, the
illustration above of RAID 1_0, is not the best way of configuring that type of RAID Group. The next slide
will go into detail as to why this isn’t the best method of configuring RAID 1_0.
RAID 3 – Striping Data with a Dedicated Parity Drive. This type of RAID Group allows the Clariion to stripe data across the first X number of disks in the RAID Group, and dedicate the last disk in the RAID Group
for Parity of the data stripe. In the event of a single drive failure in this RAID Group, the failed disk can be
rebuilt from the remaining disks in the RAID Group.
RAID 5 – Striping Data with Distributed Parity. RAID type 5 allows the Clariion to distribute the Parity
information to rebuild a failed disk across the disks that make up the RAID Group. As in RAID 3, in the
event of a single drive failure in this RAID Group, the failed disk can be rebuilt from the remaining disks
in the RAID Group.
RAID 6 – Striping Data with Double Parity. This is new to the Clariion world starting in FLARE code 26 of Navisphere. The simplest explanation of RAID 6 is that the RAID Group uses striping, as in RAID 5, but with double the parity. This allows a RAID 6 RAID Group to sustain two drive failures in the RAID Group while maintaining access to the LUNs.
HOT SPARE – A Dedicated Single Disk that Acts as a Failed Disk. A Hot Spare is created as a single disk
RAID Group, and is bound/created as a HOT SPARE in Navisphere. The purpose of this disk is to act as
the failed disk in the event of a drive failure. Once a disk is set as a HOT SPARE, it is always a HOT
SPARE, even after the failed disk is replaced. In the slide above, we list the steps of a HOT SPARE taking
over in the event of a disk failure in the Clariion.
1. A disk fails – a disk fails in a RAID Group somewhere in the back of the Clariion.
2. Hot Spare is Invoked – a Clariion dedicated HOT SPARE acts as the failed disk in Navisphere. It will
assume the identity of the failed disk’s Bus_Enclosure_Disk Address.
3. Data is REBUILT Completely onto the Hot Spare from the other disks in the RAID Group – The
Clariion begins to recalculate and rebuild the failed disk onto the Hot Spare from the other disks in the
RAID Group, whether it be copying from the MIRRORed copy of the disk, or through parity and data
calculations of a RAID 3 or RAID 5 Group.
4. Disk is replaced – Somewhere throughout the process, the failed drive is replaced.
5. Data is Copied back to new disk – The data is then copied back to the new disk that was replaced. This
will take place automatically, and will not begin until the failed disk is completely rebuilt onto the Hot
Spare.
6. Hot Spare is back to a Hot Spare – Once the data is written from the Hot Spare back to the failed disk,
the Hot Spare goes back to being a Hot Spare waiting for another disk failure.
Size. The Hot Spare must be at least the same size as the largest size disk in the Clariion. A Hot Spare will
replace a drive that is the same size or a smaller size. The Clariion does not allow multiple smaller Hot Spares to replace a failed disk.
Drive Type Specific. If your Clariion has a mixture of Drive Types, such as Fibre and S.ATA disks, you will
need Hot Spares of those particular Drive Types. A Fibre Hot Spare will not replace a failed S.ATA disk
and vice versa.
Hot Spares are not assigned to any particular RAID Group. They are used by the Clariion in the event of
any failure of that Drive Type. The recommendation for Hot Spares is one (1) Hot Spare for every thirty
(30) disks.
There are multiple ways to create a RAID Group. One is via the Navisphere GUI, and the other is through the Command Line Interface; a sketch of the CLI commands to create a RAID Group follows below.
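As a rough illustration, here is a hedged naviseccli sketch of creating a RAID Group, binding a LUN on it, and binding a Hot Spare. The SP address, RAID Group IDs, LUN numbers, and disk locations (bus_enclosure_disk) are placeholders; treat the options as a sketch and confirm them against your release's documentation.
# Create RAID Group 10 from five disks on bus 1, enclosure 0
naviseccli -h 10.0.0.1 createrg 10 1_0_0 1_0_1 1_0_2 1_0_3 1_0_4
# Bind a 100 GB RAID 5 LUN (LUN 20) on RAID Group 10, owned by SP A
naviseccli -h 10.0.0.1 bind r5 20 -rg 10 -cap 100 -sq gb -sp a
# Bind a Hot Spare on a single-disk RAID Group (RAID Group 200, disk 1_0_14)
naviseccli -h 10.0.0.1 createrg 200 1_0_14
naviseccli -h 10.0.0.1 bind hs 1000 -rg 200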
LUN Migration
The process of a LUN Migration has been available in Navisphere as of FLARE code Release 16. A LUN Migration is a move of a LUN within a Clariion from one location to another. It is a two-step process. First, there is a block-by-block copy of the “Source LUN” to its new location, the “Destination LUN”. After the copy is complete, the “Source” LUN’s location is then moved to its new place in the Clariion.
Again, this type of LUN Migration is an internal move of a LUN, not like a SANCopy where a Data
Migration occurs between a Clariion and another storage device. In the illustration above, we are showing
that we are moving Exchange off of the Vault drives onto Raid Group 10 on another Enclosure in the
Clariion. We will first discuss the process of the Migration, and then the Rules of the Migration.
1. Create a Destination LUN. This is going to be the Source LUN’s new location in the Clariion on the
disks. The Destination LUN is a LUN which can be on a different Raid Group, on a different BUS, on a
different Enclosure. The reason for a LUN Migration might be an instance where we may want to offload a
LUN from a busy Raid Group for performance issues. Or, we want to move a LUN from Fibre Drives to
ATA Drives. This we will discuss in the RULES portion.
2. Start the Migration from the Source LUN. From the LUN in Navisphere, we simply right-click and
select Migrate. Navisphere gives us a window that displays the current information about the Source LUN,
and a selection window of the Destination LUN. Once we select the Destination LUN and click Apply, the
migration begins. The migration process is actually a two step process. It is a copy first, then a move. Once
the migration begins, it is a block for block copy from the Source LUN (Original Location) to the
Destination LUN (New Location). This is important to know because the Source LUN does not have to be
offline while this process is running. The host will continue to read and write to the Source LUN, which
will write to Cache, then Cache writing out to the disk. Because it is a copy, any new write to the source lun
will also write to the destination lun. At any time during this process, you may cancel the Migration if the
wrong LUN was selected, or to wait until a later time. A priority level is also available to speed up or slow
down the process.
3. Migration Completes. When the migration completes, the Source LUN will then MOVE to its new
location in the Clariion. Again, there is nothing that needs to be done from the host, as it is still the same
LUN as it was to begin with, just in a new space on the Clariion. The host doesn’t even know that the LUN
is on a Clariion. It thinks the LUN is a local disk. The Destination LUN ID that you gave the LUN when creating it will disappear. To the Clariion, that LUN never existed. The Source LUN will occupy the space of
the Destination LUN, taking with it the same LUN ID, SP Ownership, and host connectivity. The only
things that may or may not change based on your selection of the Destination might be the Raid type,
Raid Group, size of the LUN, or Drive Type. The original space that the Source LUN once occupied is
going to show as FREE Space in Navisphere on the Clariion. If you were to look at the Raid Group where
the Source LUN used to live, under the Partitions tab, you will see the space the original LUN occupied shown as free. The Source LUN is still in the same Storage Group, assigned to the host as it was before.
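For reference, a LUN Migration can also be started and monitored from the CLI. A minimal, hedged naviseccli sketch (SP address and LUN numbers are placeholders; option names may differ slightly between releases):
# Start migrating source LUN 6 onto destination LUN 60 at medium priority
naviseccli -h 10.0.0.1 migrate -start -source 6 -dest 60 -rate medium
# Check the progress of running migrations
naviseccli -h 10.0.0.1 migrate -list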
Migration Rules
1. Equal to in size or larger. You can migrate a LUN to a LUN that is the exact same block count size, or to
a LUN that is larger in size, so long as the host has the ability to see the additional space once the
migration has completed. Windows would need a rescan or reboot of the disks to see the additional space, and then the volume can be extended on the host using Diskpart. A host that doesn’t have the ability to extend a volume would need volume manager software to grow the filesystem, etc.
2. The same or a different drive type. A destination LUN can be on the same type of drives as the source,
or a different type of drive. For instance, you can migrate a LUN from Fibre Drives to ATA Drives when
the Source LUN no longer needs the faster type drives. This is a LUN to LUN copy/move, so again, disk
types will not stop a migration from happening, although it may slow the process from completing.
3. The same or a different raid type. Again, because it is a LUN to LUN copy, raid types don’t matter. You
can move a LUN from Raid 1_0 to Raid 5 and reclaim some of the space on the Raid 1_0 disks. Or find
that Raid 1_0 better suits your needs for performance and redundancy than Raid 5.
4. A Regular LUN or MetaLUN. The destination LUN only has to be at least equal in size, so whether it is a regular LUN on a 5-disk RAID 5 group or a striped MetaLUN spread across multiple enclosures, buses, and RAID groups for performance is completely up to you.
5. Not a SnapView, MirrorView, or SAN Copy LUN. Because these LUNs are being used by the Clariion to replicate data for local recoveries, to replicate data to another Clariion for Disaster Recovery, or to move data to/from another storage device, they are not available as a Destination LUN.
6. Not in a Storage Group. If a LUN is in a Storage Group, it is believed to belong to a host. Therefore, the Clariion will not let you write over a LUN that potentially belongs to another host.
MetaLUNs
The purpose of a MetaLUN is to let the Clariion grow the size of a LUN on the fly. Let’s say that a host is
running out of space on a LUN. From Navisphere, we can “Expand” a LUN by adding more LUNs to the
LUN that the host has access to. To the host, we are not adding more LUNs. All the host is going to see is
that the LUN has grown in size. We will explain later how to make space available to the host.
There are two types of MetaLUNs, Concatenated and Striped. Each has its advantages and disadvantages, but whichever you use, the end result is that you are growing, “expanding”, a LUN.
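As a rough, hedged sketch of the CLI side (placeholder SP address and LUN numbers; flags such as element size and naming vary by release, so verify against the naviseccli documentation), a base LUN can be expanded into a MetaLUN roughly like this:
# Expand base LUN 6 with component LUN 23 as a Striped MetaLUN (-type S); use -type C for Concatenated
naviseccli -h 10.0.0.1 metalun -expand -base 6 -lus 23 -type S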
A Concatenated MetaLUN is advantageous because it allows a LUN to be “grown” quickly and the space
made available to the host rather quickly as well. The other advantage is that the Component LUNs that
are added to the LUN assigned to the Host can be of a different RAID type and of a different size.
The host writes to Cache on the Storage Processor, the Storage Processor then flushes out to the disk.
With a Concatenated MetaLUN, the Clariion only writes to one LUN at a time. The Clariion is going to
write to LUN 6 first. Once the Clariion fills LUN 6 with data, it then begins writing to the next LUN in the
MetaLUN, which is LUN 23. The Clariion will continue writing to LUN 23 until it is full, then write to
LUN 73. Because of this writing process, there is no performance gain. The Clariion is still only writing to
one LUN at a time.
A Striped MetaLUN is advantageous because, if set up properly, it can enhance performance as well as protection. Let’s look first at how the MetaLUN is set up and written to, and how performance can be
gained. With the Striped MetaLUN, the Clariion writes to all LUNs that make up the MetaLUN, not just
one at a time. The advantage of this is more spindles/disks. The Clariion will stripe the data across all of
the LUNs in the MetaLUN, and if the LUNs are on different Raid Groups, on different Buses, this will
allow the application to be striped across fifteen (15) disks, and in the example above, three back-end
buses of the Clariion. The workload of the application is being spread out across the back-end of the
Clariion, thereby possibly increasing speed. As illustrated above, the first Data Stripe (Data Stripe 1) that
the Clariion writes out to disk will go across the five disks on Raid Group 5 where LUN 6 lives. The next
stripe of data (Data Stripe 2), is striped across the five disks that make up RAID Group 10 where LUN23
lives. And finally, the third stripe of data (Data Stripe 3) is striped across the five disks that make up Raid
Group 20 where LUN 73 lives. And then the Clariion starts the process all over again with LUN6, then
LUN 23, then LUN 73. This gives the application 15 disks to be spread across, and three buses.
As for data protection, this would be similar to building a 15 disk raid group. The problem with a 15 disk
raid group is that if one disk were to fail, it would take a considerable amount of time to rebuild the failed disk from the other 14 disks. Also, if two disks were to fail in this raid group, and it was RAID 5, data would be lost. In the drawing above, each of the LUNs is on a different RAID group. That would
mean that we could lose a disk in RAID Group 5, RAID Group 10, and RAID Group 20 at the same time,
and still have access to the data. The other advantage of this configuration is that the rebuilds are
occurring within each individual RAID Group. Rebuilding from four disks is going to be much faster than
the 14 disks in a fifteen disk RAID Group.
The disadvantage of using a Striped MetaLUN is that it takes time to create. When a component LUN is
added to the MetaLUN, the Clariion must restripe the data across the existing LUN(s) and the new LUN.
This takes time and resources of the Clariion. There may be a performance impact while a Striped
MetaLUN is re-striping the data. Also, the space is not available to the host until the MetaLUN has
completed re-striping the data.
Access Logix
Access Logix, often referred to as ‘LUN Masking’, is the Clariion mechanism for making sure that hosts cannot see every LUN in the Clariion.
Let’s talk first about making sure that every host cannot see every LUN in the Clariion.
Access Logix is an enabler on the Clariion that allows hosts to connect to the Clariion, but not have the
ability to just go out and take ownership of every LUN. Think of this situation: you have ten Windows hosts attached to the Clariion, five Solaris hosts, eight HP hosts, etc. If all of the hosts were attached to the Clariion (zoning), and there were no such thing as Access Logix, every host could potentially see every LUN after a rescan or 17 reboots by Windows. It is probably not a good thing to have more than one host
writing to a LUN at a time, let alone different Operating Systems writing to the same LUNs.
Now, in order for a host to see a LUN, a few things must be done first in Navisphere.
1. For a Host, a Storage Group must be created. In the illustration above, the ‘Storage Group’ is like a
bucket.
From the illustration above, let’s start with the Windows Host on the far left side. We created a Storage
Group for the Windows Host. You can name the Storage Group whatever you want in Navisphere. It
would make sense to name the Storage Group the same as the Host name. Second, we connected the host
to the Storage Group. Finally, we added LUNs to the Storage Group. Now, the host has the ability to see
the LUNs, after a rescan, or a reboot.
However, in the Storage Group, when the LUNs are added to the Storage Group, there is a column on the
bottom right-side of the Storage Group window that is labeled Host ID. You will notice that as the LUNs
are placed into the Storage Group, Navisphere gives each LUN a Host ID number. The host ID number
starts at 0, and continues to 255. We can place up to 256 LUNs into a Storage Group. The reason for this,
is that the Host has no idea that the LUN is on a Clariion. The host believes that the LUN is a Local Disk.
For the host, this is fine. In Windows, the host is going to rescan, and pick up the LUNs as the next
available disk. In the example above, the Windows Host picks up LUNs 6 and 23, but to the host, after a
rescan/reboot, the host is going to see the LUNs as Disk 4 and Disk 5, which we can now initialize, add a
drive letter, format, create the partition, and make the LUN visible through the host.
In the case of the Solaris host’s Storage Group, when we added the LUNs to the Storage Group, we changed LUN 9’s host ID to 9, and LUN 15’s host ID to 15. This allows the Solaris host to see Clariion LUN 9 as c_t_d 9, and LUN 15 as c_t_d 15. If we hadn’t changed the Host ID number for the LUNs, however, Navisphere would have assigned LUN 9 the Host ID of 0, and LUN 15 the Host ID of 1. Then the host would see LUN 9 as c_t_d 0 and LUN 15 as c_t_d 1.
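A hedged sketch of how this Host ID is assigned from the CLI: with naviseccli, -hlu sets the Host LUN ID the server sees and -alu identifies the array LUN being presented (the SP address, group name, and numbers below are placeholders).
# Present array LUN 9 to the Solaris host's Storage Group as Host LUN ID 9
naviseccli -h 10.0.0.1 storagegroup -addhlu -gname Solaris_Host -hlu 9 -alu 9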
The last drawing is an example of a Clustered environment. The blue server is the Active Node of the
cluster, and the orange server is the Standby/Passive Node of the cluster. In this example, we created a
Storage Group in Navisphere for each host in the Cluster. Into the Active Node Storage Group, we place
LUN 8. LUN 8 also went into the Passive Node Storage Group. A LUN can belong to multiple storage
groups. The reason for this is that if we only placed LUN 8 into the Active Node Storage Group, and not into the Passive Node Storage Group, and the cluster failed over to the Passive Node for some reason, there would be no LUN to see. A host can only see what is in its storage group. That is why LUN 8 is in both Storage
Groups.
Now, if this is not a Clustered Environment, this brings up another problem. The Clariion does not limit
who has access, or read/write privileges to a LUN. When a LUN is assigned to a Storage Group, the LUN
belongs to the host. If we assign a LUN out to two hosts, with no Cluster setup, we are giving simultaneous
access of a LUN to two different servers. This means that each server would assume ownership of the
LUN, and constantly be overwriting each other’s data.
We also added LUN 73 to the Active Node Storage Group, and LUN 74 to the Passive Node Storage Group.
This allows each server to see LUN 8 for failover purposes, but LUN 73 only belonging to the Active Node
Host, and LUN 74 belonging to the Passive Node Host. If the cluster fails over to the Passive Node, the
Passive Node will see LUN 8 and LUN 74, not LUN 73 because it is not in the Storage Group.
Notice that LUN 28 is in the Clariion, but not assigned to anyone at the time. No host has the ability to
access LUN 28.
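To verify which LUNs a given host can actually see, the Storage Group contents can be listed from the CLI. A minimal, hedged sketch (placeholder SP address and group name):
# List all Storage Groups, their connected hosts, and the HLU/ALU pairs they contain
naviseccli -h 10.0.0.1 storagegroup -list
# Or list a single Storage Group by name
naviseccli -h 10.0.0.1 storagegroup -list -gname Active_Node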