
Storage Area Network (SAN) Workshop Day 1

This document provides an overview of a Storage Area Network (SAN) workshop. It defines SANs and describes how they provide shared storage accessible by multiple servers over a dedicated network. Storage access modes like DAS, NAS, and SAN are explained. Key SAN components like fibre channel cables, SFP modules, HBAs, storage devices, and interconnect devices are defined. Common SAN topologies of point-to-point, FC-AL loop, and switched fabric are summarized. The document also covers fault-tolerant disk subsystems, hardware and software RAID, and examples of RAID levels 0 and 1.

Uploaded by

rana_m_zeeshan
Copyright
© Attribution Non-Commercial (BY-NC)

National University of Sciences and Technology

School of Electrical Engineering and Computer Science


Storage Area Network (SAN) Workshop Day 1

Definition
A dedicated network that interconnects hosts and storage devices, usually via fibre channel (FC) switches and optical fiber
media. A storage device is a disk subsystem or array with a set of disk drives. As more and more storage devices are added to
the SAN, they become accessible to all servers attached to the SAN. Since any server can access the storage, more server power
is available for applications and user data residing on the shared storage. A SAN accesses data at the block level; NAS accesses
data at the file level.
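The block-versus-file distinction above can be sketched with ordinary file I/O (a simplified illustration, not a real SAN or NAS client; the paths and block size here are hypothetical):

```python
import os

# Block-level access (SAN-style): the host addresses raw fixed-size blocks by
# number; any filesystem living on those blocks is the host's own business.
def read_block(device_path: str, block_no: int, block_size: int = 512) -> bytes:
    with open(device_path, "rb") as dev:
        dev.seek(block_no * block_size)   # jump straight to the block
        return dev.read(block_size)

# File-level access (NAS-style): the host asks a server for a named file;
# the server owns the filesystem and returns the file's contents.
def read_file(share_path: str, name: str) -> bytes:
    with open(os.path.join(share_path, name), "rb") as f:
        return f.read()
```

In the block-level case the initiator never names a file, only a block number; in the file-level case the server resolves the name and hides the on-disk layout.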

Storage Access Modes


 Direct attached storage (DAS)—A disk subsystem is attached directly to HBAs in one or more hosts. This is also called
server-centric storage because the storage is captive to, and physically cabled to, the server.

 Network attached storage (NAS)—A NAS server is a network device that manages arrays of disks connected to it and shares
the local volumes with hosts on the network using network-based protocols such as CIFS, HTTP, and NFS.

 Storage area network (SAN)—This is a dedicated network where servers are connected via fibre channel switches or hubs to
storage devices such as tapes or RAID arrays. SAN is easy to manage, flexible, and scalable. The servers are equipped with
fibre channel (FC) adapters. As more storage devices are linked to the switch or hub, they too become accessible from all the
servers. FC can transfer large blocks of data over long distances.

 Internet SCSI (iSCSI)—iSCSI is a networking protocol for SCSI-3 traffic over TCP/IP networks. IP connected hosts can use
iSCSI to access storage over an IP network.

 Fibre channel over IP (FCIP) —Enables fibre channel SANs to be interconnected over an IP-based network.

Benefits of SAN
The advantages and business benefits you derive from a SAN depend on several factors. Do you have many storage devices
and servers that will use the SAN? Are you designing for disaster recovery or local clustering? How busy is your network
traffic? Do your applications perform many reads and writes to the disks?

SAN-based LAN-free backup (figure, right) removes the burden that a traditional LAN-based backup (figure, left) places on the servers and the network.
SAN Components
FC cable is a high-speed data transmission technology used to connect multiple hosts and storage devices over fiber-optic or
copper cables. FC (1) carries much more information than conventional copper wire because optical fiber is not subject to
electromagnetic interference; (2) can map mature channel-based protocols such as SCSI as a higher-layer protocol (called
FCP); and (3) is capable of several peak transfer speeds (1, 2, …, 10 Gigabits per second); but (4) its inner glass strands
require more protection than copper-based cables.

Small Form-Factor Pluggable (SFP) modules are transceivers that interface a motherboard with an FC cable and are used to
convert electrical signals to the optical signals required for fibre channel transmission to and from devices such as RAID
controllers, switches, and other fibre channel devices.

SFP Module

Host bus adapters (HBAs), also referred to as FC interface cards, are installed in hosts. Although these are FC adapters, they
are similar to SCSI adapters. OS-specific device drivers are usually installed for these adapters.

HBA

Storage devices include tape libraries, RAID arrays, and JBODs. RAID ("redundant array of inexpensive disks") provides
storage reliability through redundancy, combining multiple low-cost, less-reliable disk drives into a logical unit in which all
drives in the array are interdependent. In a JBOD ("just a bunch of drives"), all disks are independently addressed, with no
collective properties.

Fiber Channel Cable

Interconnect devices such as fibre channel hubs or switches. Other special-purpose devices are SAN gateways, bridges,
routers, and extenders.
SAN Topologies
SANs are implemented using any of the following three topologies:

Point-to-point topology is similar to SCSI. A fiber HBA is installed inside a server and directly cabled to a fiber adapter within
a disk enclosure, storage subsystem, or tape library. The advantage of a point-to-point topology is that it is free from congestion
or collisions and has the maximum bandwidth between the devices. However, it is extremely limited, with each storage unit tied
behind a server.

In a fibre channel–arbitrated loop (FC-AL) topology, the servers and storage devices are configured in a ring or loop. The
drawback is performance. The bandwidth is shared by devices in the loop similar to a token ring. When a node needs to
communicate with another, the transmitting node arbitrates for control of the loop (hence, an “arbitrated” loop). If the loop is
not in use, it takes control and notifies the receiving node about the impending communication. The two nodes access each
other at the full bandwidth of the media (just like a point-to-point connection) for the duration of the data transfer. Bandwidth
within a loop is shared. Multiple simultaneous communications are not possible. The advantage is that the hub is cheaper than a
switch and is used where cost must be low and low performance is acceptable.

Switched-fabric topology requires, as the name suggests, one or more FC switches. All devices (servers, storage, hubs, and so
on) must be connected to one or more SAN switches. An FC fabric is one or more fabric switches in a single or extended
configuration. A fabric has higher flexibility and performance. Unlike a loop, a fabric node can be added, removed, replaced,
and shut down without disrupting other nodes. Problems with a device are
a fabric node can be added, removed, replaced, and shut down without disrupting other nodes. Problems with a device are
localized and not transmitted to other fabric nodes. Unlike FC-AL, bandwidth in a fabric is not shared. All device-to-device
links are capable of simultaneously delivering the full bandwidth (100 MB/sec for 1 Gbps fibre channel). Data transfer between
any two nodes is independent of communication between other nodes. This increases scalability.
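As a sanity check on the 100 MB/sec figure, the usable payload rate of 1 Gbps fibre channel can be derived from its signalling rate and its 8b/10b line encoding (the 1.0625 Gbaud line rate is the standard FC figure and is an assumption here, not stated in this document):

```python
# Back-of-the-envelope check of the ~100 MB/s figure for 1 Gbps fibre channel.
# "1 Gbps" FC actually signals at 1.0625 Gbaud and uses 8b/10b encoding,
# so only 8 of every 10 bits on the wire carry payload.
line_rate_baud = 1.0625e9                       # raw signalling rate (bits/s on the wire)
payload_bits_per_sec = line_rate_baud * 8 / 10  # strip the 8b/10b encoding overhead
payload_bytes_per_sec = payload_bits_per_sec / 8
print(f"{payload_bytes_per_sec / 1e6:.2f} MB/s")  # 106.25 MB/s, usually quoted as 100 MB/s
```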

Fault-Tolerant Disk Subsystems


Arrays of disks have stringent cooling and power requirements, making them prime candidates for a fault-tolerant design. A
fault-tolerant disk enclosure or subsystem allows disks to be connected to an internal bus and presented to the external
controller ports as simple drives or RAID volumes. Disks in a fault-tolerant enclosure should be hot-swappable. Hot-swappable
components are those that can be removed or replaced without interrupting server operations or data access. Hot-spare
components are those that are online and configured as standbys into the system and are used as automatic replacements for
failed components. Hot sparing, unlike hot swapping, does not require user intervention. Disk redundancy is built into these
subsystems using RAID, which enables you to implement redundancy among a set of disks so that failure of a disk does not
interrupt availability or cause data loss. Disks have moving parts and are susceptible to failures. The platters are rotating at
speeds of up to 15,000 rpm. A broken head, a speck of dust, or a loose splinter on a platter quickly damages the magnetic
surfaces.

Many disk enclosures have RAID controllers that let you configure a RAID group with drives within the enclosure. The RAID
group can be divided into logical units (LUNs). The operating system on the attached hosts views each LUN as a single physical
disk and usually does not know about the RAID configuration of the LUNs. There are hardware-based and software-based
RAID volumes.

RAID

Hardware RAID is configured using device drivers for subsystem controllers or host-based adapters (HBAs). The vendor
provides utilities to create and resolve problems with the RAID volumes. The advantage is lower host overhead, since RAID
parity calculations are offloaded to the controller. But the RAID volumes are limited to the set of disks attached to the
controller.

Software-based RAID volumes are created using special logical volume management (LVM) software installed on the host
operating system. The problem with software RAID is the additional load that it places on the host CPUs and memory.

RAID 0 - Disk Striping is a configuration where data is written in sequential sections across two or more drives. RAID 0 is
easy to implement, and it can dramatically
improve performance. Several drives can be accessed at once, minimizing the overall
"seek" time of larger files. This configuration has no data redundancy and therefore no
protection against data loss, however, so it should not be used for business-critical
applications.
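The striping idea can be sketched in a few lines (a simplified model assuming one logical block per stripe unit, not any particular vendor's layout):

```python
# Minimal sketch of RAID 0 striping: logical blocks are laid out round-robin
# across the drives, so consecutive blocks land on different spindles and can
# be read in parallel.
def raid0_locate(block: int, num_drives: int) -> tuple[int, int]:
    """Map a logical block number to (drive index, block offset on that drive)."""
    return block % num_drives, block // num_drives

# Logical blocks 0..5 on a 3-drive stripe set:
layout = [raid0_locate(b, 3) for b in range(6)]
print(layout)  # [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (2, 1)]
```

Note that losing any one drive destroys every third block, which is why the text warns against RAID 0 for business-critical data.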

RAID 1 – Mirroring, also known as "drive mirroring", simultaneously copies data to a second drive. The mirroring method
offers data protection and good performance even when one of the mirrored drives fails. RAID 1 is the simplest RAID
configuration, requiring only a minimum of two drives with equal capacity, and also that the drives be added in pairs. The main
disadvantage of RAID 1 is that it uses 100% drive overhead (the highest of all RAID levels), which can be considered an
inefficient use of drive capacity.

RAID 3 - Striping Plus Parity stripes data across multiple drives, with an additional drive dedicated to parity for error
correction/recovery. This configuration offers very high data transfer rates and requires only a small ratio of ECC (parity)
to data drives. However, RAID 3 requires a complicated controller design and the configuration may be difficult to rebuild after
a drive failure.

RAID 5 - Independent Striping Plus Distributed Parity, each block of data is written on a data drive and parity
information is then striped across all drives. RAID 5 is the most popular of the RAID levels because it delivers data protection
and good performance with a small overhead for parity. RAID 5 offers the most efficient use of drive capacity of all the
redundant RAID levels. This configuration requires at least three drives of equal size, which can be added one at a time.
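The parity mechanism behind RAID 5 can be illustrated with a small XOR sketch (a toy model with one parity block per stripe; real controllers rotate the parity position across drives, as the name "distributed parity" implies):

```python
# Sketch of RAID parity: the parity block is the XOR of the data blocks in the
# stripe, so any single lost block can be rebuilt by XOR-ing together
# everything that survives.
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, group) for group in zip(*blocks))

stripe = [b"AAAA", b"BBBB", b"CCCC"]   # data blocks on three drives
parity = xor_blocks(stripe)            # stored on a fourth drive

# Drive 1 fails; rebuild its block from the survivors plus parity:
rebuilt = xor_blocks([stripe[0], stripe[2], parity])
print(rebuilt == stripe[1])  # True
```

This also shows why parity overhead is only one drive per stripe, regardless of how many data drives the stripe spans.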

RAID 0+1 - Combination of RAID 1 and RAID 0 (Mirrored Stripes) is made up of two or more
striped (RAID-0) sets that are mirrored together. In the figure (right), the first stripe set and the second stripe set are each made
up of three disks. The two sets are mirrored to make the RAID 0+1 volume.
If a disk in a striped set is lost, the entire stripe is disabled. Loss of disk A causes stripe set 1 to be off-lined. Now the entire
volume is nothing but a simple RAID-0 volume. The bad disk should be replaced as soon as possible. After disk replacement,
data must be synced to the striped set. If any disk in the surviving stripe set fails before the failed disk is replaced and the data
is resynced, all information in the volume will be lost.
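The failure behavior described above can be captured in a toy availability check (disk names and the stripe-set layout are hypothetical):

```python
# Toy model of RAID 0+1 availability: the volume is two mirrored stripe sets,
# and a stripe set is only usable if every one of its disks is healthy.
def volume_online(failed: set[str], stripe_sets: list[list[str]]) -> bool:
    """The mirrored volume survives as long as at least one full stripe set is intact."""
    return any(all(disk not in failed for disk in s) for s in stripe_sets)

sets = [["A", "B", "C"], ["D", "E", "F"]]   # two 3-disk stripe sets, mirrored

print(volume_online({"A"}, sets))        # True  - stripe set 2 is still intact
print(volume_online({"A", "E"}, sets))   # False - a disk lost in each stripe set
```

This makes the exposure window concrete: after the first disk failure, any failure in the other stripe set takes the whole volume down.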

Workshop Activity 1: Setting up a RAID SAN Controller


The DS4300 dual-controller storage subsystem (Model 60U/60X) includes two RAID controllers, two power supplies, and
two cooling units, and provides dual redundant controllers, redundant cooling, redundant power, and battery backup of the
RAID controller cache. The DS4300 storage subsystem is designed to provide maximum host- and drive-side redundancy.
Each RAID controller supports direct attachment of up to two hosts containing two fibre channel host adapters. (Figures:
controller front and back views.)

Hot-swap drive CRU: You can install up to 14 hot-swap drive customer replaceable units (CRUs) in the storage subsystem.
Each drive CRU consists of a hard disk drive and tray. A drive CRU is also referred to as a disk drive module (DDM).

RAID controller: Each RAID controller contains three ports for SFP
modules that connect to the fibre channel cables. Two of the ports,
which are labeled Host 1 and Host 2, are used to connect to host servers.
The third port, which is labeled Expansion and is available on the
DS4300 model 60U and 60X controllers only, is used to connect
additional expansion units to the storage subsystem.
Host ports: The host ports are used to connect a fibre channel cable from the host systems. You first insert an SFP into the port
and then connect a fibre channel cable. The two host ports in each controller are independent.

Ethernet port: The Ethernet port is for an RJ-45 10BASE-T or 100BASE-T Ethernet connection. Use the Ethernet connection
to directly manage storage subsystems.

Set up the Following Configuration and follow the steps given below:

(Figure: lab configuration diagram — NS-5200 SCSI network attached storage.)
Launch Storage Manager. It shows your DS4300 devices. Double-click your device. (If your device is not shown, you might
need to run Automatic Discovery: from the menu, choose Tools > Automatic Discovery.)

It should bring up a screen like this:

At the top, there are two tabs.  One says “Logical/Physical View”.  The other says “Mappings View”.  For now, let’s focus on
the “Logical/Physical View”.

The right side of the screen depicts your physical system. Each little rectangle represents an individual disk. The white-and-
gray rectangles are not-yet-assigned or "unconfigured" disks. The fully gray rectangles are assigned disks. The white-and-red
ones are hot spares. The white dotted-line rectangles indicate empty slots without disks. Each row of disks is an enclosure or
a shelf. Each enclosure has an ID number. If you look at your physical DS4300, the enclosure numbers are on the back of the
shelf. The left side of the screen depicts the logical assignment of your system. The double cylinder represents a logical array
(not to be confused with a physical enclosure). The single cylinder represents a logical drive.
1. Designating Hot Spares
1.1. Navigate to Drive > Hot Spare coverage in Storage Manager
1.2. Choose Automatically Assign Drives
1.3. Adjust the layout/assignment of hot spares based on your preferences, if you want

When a physical disk fails, a hot spare will automatically jump in and take the place of the failed disk.   The automatic
assignment of hot spares is a good starting point. Adjust from there depending on your risk tolerance and your planned RAID
type (e.g., mirrored arrays are "safer" than RAID 5 ones).

2. Assign Physical Disks to Logical Arrays


2.1. Right click on the Unconfigured Capacity icon on the left side of the screen, and select Create Logical Drive,
follow the wizard to the Specify Array screen.
2.2. Choose your RAID level
2.3. Choose your Array Capacity and select Automatic drive selection.
2.4. Create your first logical drive.  Decide on its size.  (It does not need to fill the entire array)
2.5. Name your logical drive
2.6. Choose Map Later using the Mappings View
2.7. Your new Logical Array should appear on the left side of the screen.  Click on it and examine your physical
mapping.  If you don’t like it, now is the time to make your changes.

IMPORTANT NOTE: For logical arrays:

 Each disk can only be in one logical array


 It’s preferable to have your arrays made of similar disks (i.e. all SATA or all FC, and all the same size)
 If you don’t like your array, it’s easier to delete and recreate it than to change its size — as long as the array is not used.
 When you click on your logical array, the blue/purple dots indicate the physical disks that are assigned to that
particular array.
 You can expand a logical array, but you cannot shrink it.

3. Split up your Logical Arrays into Logical Drives


3.1. If you have extra space left on a logical array, there should be an icon that says Free Capacity.
3.2. Right click on that icon and choose Create Logical Drive
3.3. Follow the wizard and create the drives, as you did in steps 2.4 through 2.6.

IMPORTANT NOTE: For logical drives:

 Each logical drive must be in exactly one array.  It cannot span multiple arrays, nor can it be larger than its parent array.
 You can expand a logical drive, but you cannot shrink it.
 It’s preferable to create the drive at the right size, rather than counting on expanding it

4. Map each Logical Drive to a LUN and Set Permissions


4.1. Click on your Mappings View tab
4.2. If the left side of your screen doesn’t show your host (the server that will connect to the SAN), create your host and
host groups.
4.3. Expand Undefined Mappings on the left side of your screen.  All of your unmapped logical drives should be here.
4.4. Right click on one of your logical drives and select Define Additional Mapping…
4.5. Select the host or host group that should have access to this logical drive
4.6. Assign a logical unit number (LUN) and press add.

Congratulations.  Your logical drive is now mapped to your selected host(s).

5. Scan for new disks from your server


5.1. Log into your server and scan for new hardware. In AIX:
5.2. AIX# cfgmgr
AIX# fget_config -Av

SAN Addressing
A WWN is a 64-bit address that is analogous to a MAC address. It starts with a hex 5 or 6 in the first half-byte, followed by
the vendor identifier in the next 3 bytes. The vendor-specific information is then contained in the following fields.

World Wide Name Addressing Scheme

It guarantees uniqueness within a large SAN fabric. There are two types of WWNs:
 Node WWNs—These are allocated to the entire adapter.
 Port WWNs—These are assigned to each port within an adapter. If an adapter has four FC ports, each port has its own
port WWN. Port WWNs are used for SAN zoning.
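The layout described above can be sketched with a few string slices. This follows the document's simplified scheme (first half-byte, then a 3-byte vendor identifier; real WWN formats vary by NAA type), and the sample WWN is borrowed from the zoning activity later in this document:

```python
def parse_wwn(wwn: str) -> tuple[int, str, str]:
    """Split a WWN per the simplified scheme above:
    first half-byte, 3-byte vendor identifier, vendor-specific remainder."""
    raw = wwn.replace(":", "")       # 16 hex digits = 64 bits
    naa = int(raw[0], 16)            # first half-byte: 5 or 6 in this scheme
    vendor_id = raw[1:7]             # next 3 bytes (6 hex digits)
    vendor_specific = raw[7:]        # remaining fields
    return naa, vendor_id, vendor_specific

naa, vid, rest = parse_wwn("50:00:0e:10:00:00:00:17")
print(naa, vid, rest)  # 5 0000e1 000000017
```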

Because of the potential impact on routing performance of using 64-bit addressing, another addressing scheme is used in
Fibre Channel networks. This scheme is used to address ports in the switched fabric. Each port in the switched fabric has its
own unique 24-bit address.

24-bit Port Addressing Scheme

The 24-bit address scheme removes the overhead of manual administration of addresses by allowing the topology itself to
assign addresses. This is not like WWN addressing, in which the addresses are assigned to the manufacturers by the IEEE
standards committee and built into the device at the time of manufacture. If the topology itself assigns the 24-bit addresses,
then something must be responsible for maintaining the mapping from WWN addresses to port addresses.

In the switched fabric environment, the switch itself is responsible for assigning and maintaining the port addresses. When the
device with its WWN logs into the switch on a specific port, the switch will assign the port address to that port and the switch
will also maintain the correlation between the port address and the WWN address of the device of that port. This function of the
switch is implemented by using the Name Server. The Name Server is a component of the fabric operating system, which runs
inside the switch. It is essentially a database of objects in which each fabric-attached device registers its values. Dynamic addressing
also removes the partial element of human error in addressing maintenance, and provides more flexibility in additions, moves,
and changes in the SAN. A 24-bit port address consists of three parts:
 Domain (from bits 23 to 16)
 Area (from bits 15 to 08)
 Port or Arbitrated Loop physical address: AL_PA (from bits 07 to 00)

The significance of the fields that make up the port address is as follows:

Domain : The most significant byte of the port address is the domain. This is the address of the switch itself. One byte allows
up to 256 possible addresses. Because some of these are reserved, such as the one used for broadcast, there are only 239 addresses
available. This means that you can theoretically have as many as 239 switches in your SAN environment. The domain number
allows each switch to have a unique identifier if you have multiple interconnected switches in your environment.

Area: The area field provides 256 addresses. This part of the address is used to identify a group of ports, for example, a card
with multiple ports on it. Each group of ports has a different area number, even if there is only one port in the
group.

Port: The final part of the address provides 256 addresses for identifying attached ports.
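The three-field split above can be expressed with a few bit operations (the example port ID 0x010400 is hypothetical, chosen to show domain 1, area 4, port 0):

```python
# Split a 24-bit fibre channel port address into its three fields:
# domain (bits 23-16), area (bits 15-8), port/AL_PA (bits 7-0).
def split_port_address(addr: int) -> tuple[int, int, int]:
    domain = (addr >> 16) & 0xFF   # address of the switch itself
    area = (addr >> 8) & 0xFF      # group of ports on the switch
    port = addr & 0xFF             # attached port or AL_PA
    return domain, area, port

print(split_port_address(0x010400))  # (1, 4, 0)
```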
Zoning
There are two ways to prevent hosts from accessing the disks:
you can either disconnect their HBAs from the SAN, or you can use
the zoning capabilities of your fabric switch to restrict access through the
switch. Zoning means grouping together ports on your switch so that
the devices connected to those ports form a virtual private storage
network. Ports that are members of a group or zone can
communicate with each other but are isolated from ports in other
zones.

SAN zoning is a process by which switch ports or node world-wide names (WWNs) are grouped together so that they can
communicate with each other. Zones allow a finer partitioning of the fabric and are commonly used to separate different
operating systems. A fabric can be zoned into functional sets, which simplifies implementation and enhances security.
Members within a zone can communicate only with other members within the same zone. (Figure: zoning.)

For example, it might be desirable to separate a Microsoft Windows NT environment from a UNIX environment. This is very
useful because of the manner in which Windows attempts to claim all available storage for itself. Because not all storage
devices are capable of protecting their resources from any host seeking available resources, it makes sound business sense to
protect the environment in another manner. We show an example of zoning in this figure where AIX is separated from NT by
creating Zone 1 and Zone 2. This diagram also shows how a device can be in more than one zone.

Hard versus Soft Zoning


One way to explain soft and hard zoning is to think in terms of an unlisted home phone number. It might not be in the phone
book, or the operator might tell you that it is unlisted. However, your phone will still ring if someone dials the correct number.
For hard zoning, think of caller ID blocking: even if someone does know the number, there is no access.

Hardware zoning is based on the physical fabric port number. The members of a zone are physical ports on the fabric
switch. In the example (right), port-based zoning is used to restrict Server A to see only the storage devices that are zoned to
port 1: ports 4 and 5. Server B is zoned so that it can only see from port 2 to port 6. Server C is zoned so that it can see both
ports 6 and 7, even though port 6 is also a member of another zone. A single port can also belong to multiple zones. In a
hardware-enforced zone, switch hardware, usually at the ASIC level, ensures that no data is transferred between unauthorized
zone members. However, devices can transfer data between ports within the same zone. Consequently, hard zoning provides
the highest level of security. The availability of hardware-enforced zoning and the methods used to create hardware-enforced
zones depend on the switch hardware. One disadvantage of hardware zoning is that devices have to be connected to a specific
port, and the whole zoning configuration could become unusable when a device is connected to a different port. (Figure: hard
zoning.)

Software zoning is implemented by the fabric operating systems within the fabric switches. It is almost always implemented
by a combination of the name server and the Fibre Channel Protocol: when a port contacts the name server, the name server
replies only with information about ports in the same zone as the requesting port. A soft zone, or software zone, is not
enforced by hardware. This means that if a frame is incorrectly delivered (addressed) to a port that it was not intended for, it
will still be delivered to that port, in contrast to hard zones. When using software zoning, the members of a zone can be
defined using their world wide names (node WWN and port WWN). Usually, zoning software also allows you to create
symbolic names for the zone members and for the zones themselves. Dealing with a symbolic name or alias for a device is
often easier than trying to use the WWN address.

With software zoning there is no need to worry about the physical connections to the switch. If you use WWNs for the zone
members, even when a device is connected to another physical port, it will still remain in the same zoning definition, because
the device’s WWN remains the same. The zone follows the WWN.

Shown at right is an example of WWN-based soft zoning. In this example, symbolic names are defined for each WWN in the
SAN to implement the zoning requirements.

 Zone_1 contains the aliases alex, ben, and sam, and is restricted to only these devices.
 Zone_2 contains the aliases robyn and ellen, and is restricted to only these devices.
 Zone_3 contains the aliases matthew, max, and ellen, and is restricted to only these devices.

Workshop Activity 2: Zoning a SAN Switch

Fabric OS is the firmware for Brocade's Fibre Channel switches and directors. The set of commands that will be used in this
activity is given below:
alicreate : Creates new zone aliases. Assigns a name to a list of alias members. Alias members can include port numbers (for
example, 1,2 indicates switch 1, port 2) or world wide names. If there are multiple alias members, all members must be
separated by semicolons within the list of alias members.

example : alicreate "df350_intfc_0", "50:00:0e:10:00:00:00:17"

zonecreate : Groups a list of zone members under a zone name. Zone members can include port numbers, world wide
names, or aliases created by the alicreate command. Multiple zone members must be separated by semicolons within the list
of zone members.

example : zonecreate "c2zone", "snowtop_c2; df350_intfc_1"

cfgcreate : Assigns a configuration name to a list of configuration members. The configuration members are all zones.
Multiple configuration members are separated by semicolons.

example : cfgcreate "c2config", "c2zone"

cfgenable : Checks a specified configuration for errors and makes it the active configuration of the switch. Note: No zoning is
applied to the switch until the cfgenable command completes successfully. If there is already an active configuration,
cfgenable will replace the active configuration with the specified configuration.

example : cfgenable "c2config"

cfgdisable : Deactivates a specified configuration

example : cfgdisable "c2config"

cfgsave : Saves zoning information to the switch's flash memory. The configuration will not survive a reboot of the switch
until it is saved.

cfgclear : Deactivates and removes all zoning information from active memory. To remove a configuration from flash
memory, run cfgsave after cfgclear.

zoneshow : Shows zoning configuration.

switchshow : Shows overall configuration of the switch. WWNs, login types (F-Port, L-Port, etc.), port status, and some
general switch information show up in switchshow.

6. Plug the FC connector into an open port on the switch.


7. Login to the server and verify the HBA connection. It should see the switch but not the storage device.
8. Login to the Brocade Switch GUI interface. You’ll need Java enabled on your browser. (see figure below)
9. Check the Brocade Switch Port.
9.1. On the visual depiction of the switch, click on the port where you plugged in the FC connector.
9.2. The Port Administration Services screen should pop up. You’ll need to enable the pop-up.
9.3. Verify that the Port Status is “Online”. Note the port number.
9.4. Close the Port Administration Services screen.
10. Find the WWN of your new device
10.1.Navigate back to the original GUI page.
10.2.Select Zone Admin, an icon on the bottom left of the screen. It looks like two squares and a rectangle.
10.3.Expand the Ports & Attaching Devices under the Member Selection List.

10.4.Expand the appropriate port number. Note the attached WWN.

All zoning in the Brocade environment is done within configurations. A configuration contains aliases and zones.

An alias is a tool to simplify repetitive port numbers or WWN entries into an easy-to-remember name. For instance: rather
than typing in the WWN "50:00:0e:10:00:00:00:17" in zoning operations, one could use an alias to identify the WWN as being
"DF350_intfc_0". Aliases can also be used to identify multiple ports or WWNs by a single name. All zoning commands accept
aliases as valid parameters.

Find the WWN (using Telnet)

1. login: admin
Password: Follow menu instructions
2. sw2400_68:admin> switchshow
switchName: sw2400_68
switchType: 3.2
switchState: Online
switchRole: Principal
switchDomain: 1
switchId: fffc01
switchWwn: 10:00:00:60:69:20:13:c0
port 0: sw No_Light
port 1: sw No_Light
port 2: sw No_Light
port 3: sw No_Light
port 4: sw Online F-Port 50:00:0e:10:00:00:00:5d
port 5: sw Online F-Port 50:00:0e:10:00:00:00:17
port 6: sw Online F-Port 20:00:00:e0:69:c0:08:23
port 7: sw Online F-Port 20:00:00:e0:69:c0:08:45

11. Create a new alias for this device (Using Web Interface)
11.1.Click New Alias button
11.2.Follow menu instructions

Create a new alias for this device (Using Telnet)

1. sw2400_68:admin> alicreate "df350_intfc_0", "50:00:0e:10:00:00:00:17"


sw2400_68:admin> alicreate "df350_intfc_1", "50:00:0e:10:00:00:00:5d"
sw2400_68:admin> alicreate "snowtop_c2", "20:00:00:e0:69:c0:08:45"
sw2400_68:admin> alicreate "snowtop_c3", "20:00:00:e0:69:c0:08:23"
sw2400_68:admin> zoneshow
Defined configuration:
alias: df350_intfc_0
50:00:0e:10:00:00:00:17
alias: df350_intfc_1
50:00:0e:10:00:00:00:5d
alias: snowtop_c2
20:00:00:e0:69:c0:08:45
alias: snowtop_c3
20:00:00:e0:69:c0:08:23
2. Effective configuration:
no configuration in effect

12. Add the appropriate WWN to the alias


1. Select your new device name from the Name drop down menu
2. Expand the WWNs under Member Selection List
3. Highlight the appropriate WWN
4. Select Add Member

Add the appropriate WWN to the alias (Using Telnet)


1. sw2400_68:admin> zonecreate "c2zone", "snowtop_c2; df350_intfc_1"
sw2400_68:admin> zonecreate "c3zone", "snowtop_c3; df350_intfc_1"
sw2400_68:admin> zoneshow
Defined configuration:
zone: c2zone snowtop_c2; df350_intfc_1
zone: c3zone snowtop_c3; df350_intfc_1
alias: df350_intfc_0
50:00:0e:10:00:00:00:17
alias: df350_intfc_1
50:00:0e:10:00:00:00:5d
alias: snowtop_c2
20:00:00:e0:69:c0:08:45
alias: snowtop_c3
20:00:00:e0:69:c0:08:23
2. Effective configuration:
no configuration in effect

13. Add the alias to the appropriate zone

1. Select the Zone tab


2. Select the appropriate zone from the Name drop down menu
3. Select the appropriate alias from the Member Selection List
4. Click Add Member

Add the alias to the appropriate zone (Using Telnet)

1. sw2400_68:admin> cfgcreate "c2config", "c2zone"


sw2400_68:admin> cfgcreate "c3config", "c3zone"
sw2400_68:admin> zoneshow
Defined configuration:
cfg: c2config
c2zone
cfg: c3config
c3zone
zone: c2zone snowtop_c2; df350_intfc_1
zone: c3zone snowtop_c3; df350_intfc_1
alias: df350_intfc_0
50:00:0e:10:00:00:00:17
alias: df350_intfc_1
50:00:0e:10:00:00:00:5d
alias: snowtop_c2
20:00:00:e0:69:c0:08:45
alias: snowtop_c3
20:00:00:e0:69:c0:08:23
2. Effective configuration:
no configuration in effect

14. Ensure that the zone is in Zone Config in the Zone Config tab

1. sw2400_68:admin> cfgenable "c2zone"


error: "c2zone" is not a configuration
sw2400_68:admin> cfgenable "c2config"
zone config "c2config" is in effect
sw2400_68:admin> zoneshow
Defined configuration:
cfg: c2config
c2zone
cfg: c3config
c3zone
zone: c2zone snowtop_c2; df350_intfc_1
zone: c3zone snowtop_c3; df350_intfc_1
alias: df350_intfc_0
50:00:0e:10:00:00:00:17
alias: df350_intfc_1
50:00:0e:10:00:00:00:5d
alias: snowtop_c2
20:00:00:e0:69:c0:08:45
alias: snowtop_c3
20:00:00:e0:69:c0:08:23
2. Effective configuration:
cfg: c2config
zone: c2zone 20:00:00:e0:69:c0:08:45
50:00:0e:10:00:00:00:5d

15. Save your changes by selecting ZoningActions -> Enable Config


16. Log back in to the server to verify. It should now see the storage devices.
17. How does the configuration look on the host?

After a reconfiguration boot, this is what the host sees when the "c2config" configuration is used on the switch.
# format
Searching for disks...done

c2t0d1: configured with capacity of 32.45GB


c2t0d2: configured with capacity of 15.51GB

AVAILABLE DISK SELECTIONS:


0. c0t0d0 <SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
/pci@1f,4000/scsi@3/sd@0,0
1. c2t0d0 <HITACHI-DF350F-0000 cyl 17 alt 2 hd 8 sec 65>
/pci@6,4000/fibre-channel@2/sd@0,0
2. c2t0d1 <HITACHI-DF350F-0000 cyl 65438 alt 2 hd 16 sec 65>
/pci@6,4000/fibre-channel@2/sd@0,1
3. c2t0d2 <HITACHI-DF350F-0000 cyl 62539 alt 2 hd 8 sec 65>
/pci@6,4000/fibre-channel@2/sd@0,2

18. How does a configuration change look on the host?

To change the configuration, simply log into the switch and activate the "c3config" configuration that we created earlier.
1. sw2400_68:admin> cfgenable "c3config"
zone config "c3config" is in effect
2. After a reconfiguration boot, the Sun Solaris host now sees the new configuration. The LUNs that used to be presented to
controller 2 are now presented to controller 3:
3. # format
Searching for disks...done
4. c3t0d1: configured with capacity of 32.45GB
c3t0d2: configured with capacity of 15.51GB
5. AVAILABLE DISK SELECTIONS:
0. c0t0d0 <SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
/pci@1f,4000/scsi@3/sd@0,0
1. c3t0d0 <HITACHI-DF350F-0000 cyl 17 alt 2 hd 8 sec 65>
/pci@6,4000/fibre-channel@3/sd@0,0
2. c3t0d1 <HITACHI-DF350F-0000 cyl 65438 alt 2 hd 16 sec 65>
/pci@6,4000/fibre-channel@3/sd@0,1
3. c3t0d2 <HITACHI-DF350F-0000 cyl 62539 alt 2 hd 8 sec 65>
/pci@6,4000/fibre-channel@3/sd@0,2
