Storage Area Network (SAN) Workshop Day 1
Definition
A dedicated network that interconnects hosts and storage devices, usually via fibre channel (FC) switches and optical fiber
media. A storage device is a disk subsystem or array with a set of disk drives. As more and more storage devices are added to
the SAN, they become accessible to all servers attached to the SAN. Since any server can access the storage, more server power
is available for applications and user data residing on the shared storage. SAN accesses data at the block level; NAS accesses
data at the file level.
Network attached storage (NAS)—A NAS server is a network device that manages arrays of disks connected to it and shares
the local volumes with hosts on the network using network-based protocols such as CIFS, HTTP, and NFS.
Storage area network (SAN)—This is a dedicated network where servers are connected via fibre channel switches or hubs to
storage devices such as tapes or RAID arrays. SAN is easy to manage, flexible, and scalable. The servers are equipped with
fibre channel (FC) adapters. As more storage devices are linked to the switch or hub, they too become accessible from all the
servers. FC can transfer large blocks of data over long distances.
Internet SCSI (iSCSI)—iSCSI is a networking protocol for SCSI-3 traffic over TCP/IP networks. IP connected hosts can use
iSCSI to access storage over an IP network.
Fibre channel over IP (FCIP) —Enables fibre channel SANs to be interconnected over an IP-based network.
Benefits of SAN
The advantages and business benefits you derive from a SAN depend on several things. How many storage devices
and servers will use the SAN? Are you designing for disaster recovery or local clustering? How busy is your network
traffic? Do your applications perform many reads and writes to the disks?
A SAN-based LAN-free backup (figure, right) removes the burden that a traditional LAN-based backup (left) places on the servers and network.
SAN Components
FC cable is a high-speed data transmission medium used to connect multiple hosts and storage devices over fiber-optic or copper links. FC (1) carries much more information than conventional copper wire because optical fiber is not subject to electromagnetic interference; (2) can map mature channel-based protocols such as SCSI as a higher-layer protocol (called FCP); (3) is capable of several peak transfer speeds (1, 2, ..., 10 gigabits per second); but (4) its inner glass strands require more protection than copper-based cables.
Host bus adapters (HBAs), also referred to as FC interface cards, are installed in the hosts. Although these are FC adapters, they function much like SCSI adapters. OS-specific device drivers are usually installed for these adapters.
Storage devices such as tape libraries, RAID arrays, and JBODs. RAID ("redundant array of inexpensive disks") provides storage reliability through redundancy, combining multiple low-cost, less-reliable disk drives into a logical unit in which all drives in the array are interdependent. JBOD ("Just a Bunch Of Drives") is an enclosure in which all disks are independently addressed, with no collective properties.
Interconnect devices such as fibre channel hubs and switches. Other special-purpose devices include SAN gateways, bridges, routers, and extenders.
SAN Topologies
SANs are implemented using any of the following three topologies:
Point-to-point topology is similar to SCSI. An FC HBA is installed inside a server and directly cabled to an FC adapter within
a disk enclosure, storage subsystem, or tape library. The advantage of a point-to-point topology is that it is free from congestion
or collisions and has the maximum bandwidth between the devices. However, it is extremely limited, with each storage unit tied
behind a server.
In a fibre channel arbitrated loop (FC-AL) topology, the servers and storage devices are configured in a ring or loop. The
drawback is performance: the bandwidth is shared by the devices in the loop, similar to a token ring. When a node needs to
communicate with another, the transmitting node arbitrates for control of the loop (hence, an "arbitrated" loop). If the loop is
not in use, the transmitting node takes control and notifies the receiving node of the impending communication. The two nodes access each
other at the full bandwidth of the media (just like a point-to-point connection) for the duration of the data transfer. Bandwidth
within a loop is shared, so multiple simultaneous communications are not possible. The advantage is that a hub is cheaper than a
switch; hubs are used where cost must be kept low and lower performance is acceptable.
Switched-fabric topology requires exactly what it sounds like: one or more FC switches. All devices (servers, storage, hubs, and so on) must be connected to one or more SAN switches. An FC fabric is
one or more fabric switches in a single or extended configuration. A fabric offers higher flexibility and performance. Unlike a loop,
a fabric node can be added, removed, replaced, and shut down without disrupting other nodes. Problems with a device are
localized and not transmitted to other fabric nodes. Unlike FC-AL, bandwidth in a fabric is not shared. All device-to-device
links are capable of simultaneously delivering the full bandwidth (100 MB/sec for 1 Gbps fibre channel). Data transfer between
any two nodes is independent of communication between other nodes. This increases scalability.
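As a quick sanity check on that 100 MB/sec figure (a back-of-the-envelope calculation, not from the workshop text): 1 Gbps fibre channel signals at roughly 1.0625 Gbaud with 8b/10b encoding, so only 8 of every 10 transmitted bits are payload, giving about 1.0625 × 0.8 ≈ 0.85 Gbps, or roughly 106 MB/sec, which is quoted as a nominal 100 MB/sec.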
Many disk enclosures have RAID controllers that let you configure a RAID group with drives within the enclosure. The RAID
group can be divided into logical units (LUNs). The operating system on the attached hosts views each LUN as a single physical
disk and usually does not know about the RAID configuration of the LUNs. There are hardware-based and software-based
RAID volumes.
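For a concrete example of how a host sees LUNs, look ahead to the Solaris format output at the end of this workshop: the three logical drives presented by the DF350 appear as the ordinary disks c3t0d0, c3t0d1, and c3t0d2 (target 0, LUNs 0 through 2), with no indication of the underlying RAID configuration.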
RAID
Hardware RAID is configured using device drivers for subsystem controllers or host bus adapters (HBAs). The vendor
provides utilities to create and resolve problems with the RAID volumes. The advantage is lower host overhead, since RAID
parity calculations are offloaded to the controller. But the RAID volumes are limited to the set of disks attached to the
controller.
Software-based RAID volumes are created using special logical volume management (LVM) software installed on the host
operating system. The problem with software RAID is the additional load that it places on the host CPUs and memory.
RAID 1 – Mirroring, also known as "drive mirroring," simultaneously copies data to a second drive. The mirroring method
offers data protection and good performance in the case where a mirrored drive fails. RAID 1 is the simplest RAID
configuration, requiring only a minimum of two drives of equal capacity, and also that drives be added in pairs. The main
disadvantage of RAID 1 is that it uses 100% drive overhead (the highest of all RAID levels), which can be considered an
inefficient use of drive capacity.
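To make that overhead concrete (a worked example with hypothetical drive sizes): mirroring two 73 GB drives yields only 73 GB of usable capacity, since every block is written twice; half the raw capacity is consumed by redundancy.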
RAID 3 - Striping Plus Parity stripes data across multiple drives, with an additional drive dedicated to parity for error
correction/recovery. This configuration offers very high data transfer rates and requires only a small ratio of ECC (parity)
drives to data drives. However, RAID 3 requires a complicated controller design, and the configuration may be difficult to rebuild after
a drive failure.
RAID 5 - Independent Striping Plus Distributed Parity: each block of data is written on a data drive, and parity
information is then striped across all drives. RAID 5 is the most popular of the RAID levels because it delivers data protection
and good performance with a small overhead for parity. RAID 5 offers the most efficient use of drive capacity of all the
redundant RAID levels. This configuration requires at least three drives of equal size, which can be added one at a time.
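As a worked example (hypothetical sizes again): five 73 GB drives in RAID 5 provide (5 − 1) × 73 GB = 292 GB of usable capacity. Only one drive's worth of space is lost to parity, 20% in this case, and the fraction shrinks as more drives are added.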
RAID 0+1 - Combination of RAID 1 and RAID 0 (Mirrored Stripes) is made up of two or more
striped (RAID-0) sets that are mirrored together. In the figure (right), the first stripe set and the second stripe set are each made
up of three disks. The two sets are mirrored to make the RAID 0+1 volume.
If a disk in a striped set is lost, the entire stripe is disabled. Loss of disk A causes stripe set 1 to be off-lined. Now the entire
volume is nothing but a simple RAID-0 volume. The bad disk should be replaced as soon as possible. After disk replacement,
data must be synced to the striped set. Hopefully a member of the second striped set will not fail while all this is happening.
Should any disk in the live stripe set fail before or while replacing a disk and syncing data, all information in the volume will be
lost.
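Capacity-wise, assuming the six-disk layout in the figure with hypothetical 73 GB disks, the RAID 0+1 volume provides 3 × 73 GB = 219 GB: it pays the same 100% overhead as RAID 1 but adds the striping performance of RAID 0.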
Hot-swap drive CRU: You can install up to 14 hot-swap drive customer replaceable units (CRUs) in the storage subsystem. Each drive CRU consists of a hard disk drive and tray. Drive CRUs are also referred to as disk drive modules (DDMs).
RAID controller: Each RAID controller contains three ports for SFP
modules that connect to the fibre channel cables. Two of the ports,
which are labeled Host 1 and Host 2, are used to connect to host servers.
The third port, which is labeled Expansion and is available on the
DS4300 model 60U and 60X controllers only, is used to connect
additional expansion units to the storage subsystem.
Host ports The host ports are used to connect a fibre channel cable from the host systems. You first insert an SFP into the port
and then connect a fibre channel cable. The two host ports in each controller are independent.
Ethernet port The Ethernet port is for an RJ-45 10BASE-T or 100BASE-T Ethernet connection. Use the Ethernet connection
to directly manage storage subsystems.
Set up the following configuration and follow the steps given below:
(Figure: lab configuration, including an NS-5200 network-attached storage device and SCSI connections.)
Launch Storage Manager. It shows your DS4300 devices. Double-click your device. (If your device is not shown, you might
need to run Automatic Discovery: from the menu, choose Tools > Automatic Discovery.)
At the top, there are two tabs. One says "Logical/Physical View"; the other says "Mappings View". For now, let’s focus on
the "Logical/Physical View".
The right side of the screen depicts your physical system. Each little rectangle represents an individual disk. The white-and-gray rectangles are not-yet-assigned or "unconfigured" disks. The fully gray rectangles are assigned disks. The white-and-red ones are hot spares. The white dotted-line rectangles indicate an empty slot without a disk. Each row of disks is an enclosure or shelf, and each enclosure has an ID number. If you look at your physical DS4300, the enclosure numbers are on the back of the shelf. The left side of the screen depicts the logical assignment of your system. The double cylinder represents a logical array (not to be confused with a physical enclosure). The single cylinder represents a logical drive.
1. Designating Hot Spares
1.1. Navigate to Drive > Hot Spare coverage in Storage Manager
1.2. Choose Automatically Assign Drives
1.3. Adjust the layout/assignment of hot spares based on your preferences, if you want
When a physical disk fails, a hot spare automatically jumps in and takes the place of the failed disk. The automatic
assignment of hot spares is a good starting point; adjust from there depending on your risk tolerance and your planned RAID
type (e.g., mirrored arrays are "safer" than RAID 5 ones).
Each logical drive must be in exactly one array. It cannot span multiple arrays, nor can it be larger than its parent array.
You can expand a logical drive, but you cannot shrink it.
It’s preferable to create the drive at the right size rather than counting on expanding it.
SAN Addressing
A WWN is a 64-bit address that is analogous to a MAC address. It starts with a hex 5 or 6 in the first half-byte, followed by the vendor identifier in the next 3 bytes; the vendor-specific information is contained in the following fields.
It guarantees uniqueness within a large SAN fabric. There are two types of WWNs:
Node WWNs—These are allocated to the entire adapter.
Port WWNs—These are assigned to each port within an adapter. If an adapter has four FC ports, each port has its own port WWN. These are used for SAN zoning.
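For example, in the switchshow output later in this workshop, the WWN 50:00:0e:10:00:00:00:17 logged in on port 5 is the port WWN of an attached device. A dual-port HBA would present one node WWN for the adapter plus two distinct port WWNs, and it is the port WWNs that zoning definitions reference.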
The 24-bit address scheme removes the overhead of manual administration of addresses by allowing the topology itself to
assign addresses. This is unlike WWN addressing, in which address ranges are assigned to manufacturers by the IEEE
standards committee and are built into the device at the time of manufacture. If the topology itself assigns the 24-bit addresses,
then something must be responsible for maintaining the mapping from WWN addresses to port addresses.
In the switched fabric environment, the switch itself is responsible for assigning and maintaining the port addresses. When a
device logs into the switch on a specific port with its WWN, the switch assigns a port address to that port and
maintains the correlation between the port address and the WWN of the device on that port. The switch implements this function
using the Name Server. The Name Server is a component of the fabric operating system, which runs
inside the switch. It is essentially a database of objects in which each fabric-attached device registers its values. Dynamic addressing
also removes part of the element of human error in address maintenance, and provides more flexibility for additions, moves,
and changes in the SAN. A 24-bit port address consists of three parts:
Domain (from bits 23 to 16)
Area (from bits 15 to 08)
Port or Arbitrated Loop physical address: AL_PA (from bits 07 to 00)
The significance of some of the bits that make up the port address is as follows:
Domain: The most significant byte of the port address is the domain. This is the address of the switch itself. One byte allows
up to 256 possible addresses. Because some of these are reserved, such as the one for broadcast, only 239 addresses are
available. This means that you can theoretically have as many as 239 switches in your SAN environment. The domain number
allows each switch to have a unique identifier if you have multiple interconnected switches in your environment.
Area: The area field provides 256 addresses. This part of the address is used to identify a group of ports, for example, a card
with multiple ports. Each group of ports gets a different area number, even if there is only one port in the
group.
Port: The final part of the address provides 256 addresses for identifying attached ports.
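As a hypothetical worked example of this layout: the port address 0x010500 decodes as domain 0x01 (a switch whose switchDomain is 1, like the one in the switchshow output later), area 0x05, and port 0x00. On small switches the area byte commonly corresponds to the physical port number, so this address would belong to the device on port 5 of that switch.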
Zoning
There are two ways to prevent hosts from accessing the disks:
you can either disconnect their HBAs from the SAN, or you can use
the zoning capabilities of your fabric switch to restrict which devices
can reach each other through the switch. Zoning means grouping together ports on your switch so that
the devices connected to those ports form a virtual private storage
network. Ports that are members of a group or zone can
communicate with each other but are isolated from ports in other
zones.
For example, it might be desirable to separate a Microsoft Windows NT environment from a UNIX environment. This is very
useful because of the manner in which Windows attempts to claim all available storage for itself. Because not all storage
devices are capable of protecting their resources from any host seeking available resources, it makes sound business sense to
protect the environment in another manner. We show an example of zoning in this figure where AIX is separated from NT by
creating Zone 1 and Zone 2. This diagram also shows how a device can be in more than one zone.
With software zoning there is no need to worry about the physical connections to the switch. If you use WWNs for the zone
members, even when a device is connected to another physical port, it will still remain in the same zoning definition, because
the device’s WWN remains the same. The zone follows the WWN.
Shown at right is an example of WWN-based soft zoning. In this example, symbolic names are defined for each WWN in the
SAN to implement the zoning requirements.
Zone_1 contains the aliases alex, ben, and sam, and is restricted to only these devices.
Zone_2 contains the aliases robyn and ellen, and is restricted to only these devices.
Zone_3 contains the aliases matthew, max, and ellen, and is restricted to only these devices.
Fabric OS is the firmware for Brocade's Fibre Channel switches and directors. The commands that will be used in this
activity are given below:
alicreate : Creates new zone aliases. Assigns a name to a list of alias members. Alias members can include port numbers (for
example, 1,2 indicates switch 1, port 2) or World Wide Names. If there are multiple alias members, all members must be
separated by semicolons within the list of alias members.
zonecreate : Groups a list of zone members under a zone name. Zone members can include port numbers, World Wide
Names, or aliases created by the alicreate command. Multiple zone members must be separated by semicolons within the list
of zone members.
cfgcreate : Assigns a configuration name to a list of configuration members. The configuration members are all zones.
Multiple configuration members are separated by semicolons.
cfgenable : Checks a specified configuration for errors and makes it the active configuration of the switch. Note: No zoning is
applied to the switch until the cfgenable command completes successfully. If there is already an active configuration,
cfgenable will replace the active configuration with the specified configuration.
cfgsave : Saves zoning information to the switch's flash memory. The configuration will not survive a reboot of the switch
until it is saved.
cfgclear : Deactivates and removes all zoning information from active memory. To remove a configuration from flash
memory, run cfgsave after cfgclear.
switchshow : Shows the overall configuration of the switch. WWNs, login types (F-Port, L-Port, etc.), port status, and some
general switch information appear in the switchshow output.
All zoning in the Brocade environment is done within configurations. A configuration contains aliases and zones.
An alias is a tool to simplify repetitive port numbers or WWN entries into an easy-to-remember name. For instance: rather
than typing in the WWN "50:00:0e:10:00:00:00:17" in zoning operations, one could use an alias to identify the WWN as being
"DF350_intfc_0". Aliases can also be used to identify multiple ports or WWNs by a single name. All zoning commands accept
aliases as valid parameters.
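Putting these commands together, here is a minimal sketch of a complete zoning session. The DF350_intfc_0 alias and both WWNs come from this workshop's text and switchshow output; the host alias host_sun1, the zone name zone_sun1, and the one-zone c3config layout are assumptions for illustration only:

sw2400_68:admin> alicreate "DF350_intfc_0", "50:00:0e:10:00:00:00:17"
sw2400_68:admin> alicreate "host_sun1", "20:00:00:e0:69:c0:08:23"
sw2400_68:admin> zonecreate "zone_sun1", "host_sun1; DF350_intfc_0"
sw2400_68:admin> cfgcreate "c3config", "zone_sun1"
sw2400_68:admin> cfgenable "c3config"
zone config "c3config" is in effect
sw2400_68:admin> cfgsave

Because cfgenable only activates the configuration, the final cfgsave is what makes the zoning survive a switch reboot.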
1. login: admin
Password: Follow menu instructions
2. sw2400_68:admin> switchshow
switchName: sw2400_68
switchType: 3.2
switchState: Online
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:60:69:20:13:c0
port 0: sw No_Light
port 1: sw No_Light
port 2: sw No_Light
port 3: sw No_Light
port 4: sw Online F-Port 50:00:0e:10:00:00:00:5d
port 5: sw Online F-Port 50:00:0e:10:00:00:00:17
port 6: sw Online F-Port 20:00:00:e0:69:c0:08:23
port 7: sw Online F-Port 20:00:00:e0:69:c0:08:45
11. Create a new alias for this device (Using Web Interface)
11.1.Click New Alias button
11.2.Follow menu instructions
14. Ensure that the zone is in Zone Config in the Zone Config tab
After a reconfiguration boot, this is what the host sees when the " c2config " configuration is used on the switch.
# format
Searching for disks...done
To change the configuration, simply log into the switch and activate the " c3config " configuration that we created earlier.
1. sw2400_68:admin> cfgenable "c3config"
zone config "c3config" is in effect
2. After a reconfiguration boot, the Sun Solaris host now sees the new configuration. The LUNs that used to be presented to
controller 2 are now presented to controller 3:
3. # format
Searching for disks...done
4. c3t0d1: configured with capacity of 32.45GB
c3t0d2: configured with capacity of 15.51GB
5. AVAILABLE DISK SELECTIONS:
0. c0t0d0 <SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
/pci@1f,4000/scsi@3/sd@0,0
1. c3t0d0 <HITACHI-DF350F-0000 cyl 17 alt 2 hd 8 sec 65>
/pci@6,4000/fibre-channel@3/sd@0,0
2. c3t0d1 <HITACHI-DF350F-0000 cyl 65438 alt 2 hd 16 sec 65>
/pci@6,4000/fibre-channel@3/sd@0,1
3. c3t0d2 <HITACHI-DF350F-0000 cyl 62539 alt 2 hd 8 sec 65>
/pci@6,4000/fibre-channel@3/sd@0,2