
FIBRE CHANNEL STORAGE AREA NETWORKS

A SAN is a high-speed, dedicated network of servers and shared storage. Common SAN deployments are:
FC SAN
IP SAN
Fibre Channel: Overview
> The FC architecture forms the fundamental construct of the SAN infrastructure.
> Fibre Channel is a high-speed network technology that runs on high-speed optical fiber cables (preferred for front-end SAN connectivity) and serial copper cables (preferred for back-end disk connectivity).
> The FC technology was created to meet the demand for increased speeds of data transfer among computers, servers, and mass storage subsystems.

> High data transmission speed is an important feature of FC networking technology. The initial implementation offered a throughput of 200 MB/s (equivalent to a raw bit rate of 1 Gb/s), which was far greater than the speed of Ultra SCSI (20 MB/s) commonly used in DAS environments.
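As a rough check on those figures (a sketch; the 8b/10b encoding overhead and the full-duplex accounting are standard FC facts not stated in these notes, and the actual 1GFC line rate is 1.0625 Gbaud, rounded to 1 Gb/s above):

raw_bits_per_sec = 1e9                       # the rounded 1 Gb/s raw rate quoted above
bytes_per_sec = raw_bits_per_sec / 10        # 8b/10b encoding: 10 line bits per data byte
per_direction = bytes_per_sec / 1e6          # 100 MB/s in each direction
full_duplex = 2 * per_direction              # 200 MB/s aggregate, the quoted throughput
print(per_direction, full_duplex)            # 100.0 200.0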

> The FC architecture is highly scalable; theoretically, a single FC network can accommodate approximately 15 million devices.
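The 15 million figure follows from FC's 24-bit port addressing (an assumption based on standard FC fabric addressing, which these notes do not spell out):

# A switched FC fabric identifies each device by a 24-bit address.
total_addresses = 2 ** 24      # 16,777,216 theoretical addresses
print(total_addresses)
# A range of addresses is reserved for fabric services (exact count assumed
# here), leaving roughly 15 million usable device addresses.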
The SAN and Its Evolution
> A SAN carries data between servers (or hosts) and storage devices through a Fibre Channel network (see figure below).
> A SAN enables storage consolidation and enables storage to be shared across multiple servers.
> This improves the utilization of storage resources compared to direct-attached storage architecture and reduces the total amount of storage an organization needs to purchase and manage.
> A SAN also enables organizations to connect geographically dispersed servers and storage.


[Figure: FC SAN implementation. Servers, including a hypervisor-based server, connect through an FC SAN to storage arrays.]

> In its earliest implementation, the FC SAN was a simple grouping of hosts and storage devices connected to a network using an FC hub as a connectivity device.
> This configuration of an FC SAN is known as a Fibre Channel Arbitrated Loop (FC-AL).
> Use of hubs resulted in isolated FC-AL SAN islands because hubs provide limited connectivity and bandwidth.
> The inherent limitations associated with hubs gave way to high-performance FC switches.
> Use of switches in a SAN improved connectivity and performance and enabled FC SANs to be highly scalable. This enhanced data accessibility to applications across the enterprise.
> Now, FC-AL has been almost abandoned for FC SANs due to its limitations but still survives as a back-end connectivity option to disk drives.
The figure below illustrates the FC SAN evolution from FC-AL to enterprise SANs.
Components of FCoE Network
The key components of FCoE are:
> Converged Network Adapters (CNA)
> Cables
> FCoE Switches

Converged Network Adapters (CNA)

> A CNA provides the functionality of both a standard NIC and an FC HBA in a single adapter and consolidates both types of traffic. A CNA eliminates the need to deploy separate adapters and cables for FC and Ethernet communications, thereby reducing the required number of server slots and switch ports.


> As shown in Fig below, a CNA contains separate modules for 10 Gigabit Ethernet, Fibre
Channel, and FCoE Application Specific Integrated Circuits (ASICs). The FCoE ASIC
encapsulates FC frames into Ethernet frames. One end of this ASIC is connected to 10GbE and
FC ASICs for server connectivity, while the other end provides a 10GbE interface to connect to
an FCoE switch.
[Figure: Converged Network Adapter. 10GbE and FC ASICs connect through an FCoE ASIC to a 10GbE/FCoE uplink; the adapter attaches to the server over a PCIe bus.]
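A minimal sketch of the encapsulation step the FCoE ASIC performs, assuming the standard FCoE Ethertype 0x8906 (the frame layout is simplified; real FCoE adds an FCoE header, SOF/EOF delimiters, padding, and an FCS):

import struct

FCOE_ETHERTYPE = 0x8906  # Ethertype registered for FCoE

def encapsulate_fc_frame(fc_frame: bytes, src_mac: bytes, dst_mac: bytes) -> bytes:
    """Wrap a raw FC frame in an Ethernet frame (simplified sketch)."""
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    return eth_header + fc_frame

# Hypothetical usage: 6-byte MACs for each side and a dummy FC frame.
frame = encapsulate_fc_frame(b"\x00" * 36, b"\xaa" * 6, b"\xbb" * 6)
print(len(frame))  # 14-byte Ethernet header + FC payload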


Cables

> There are two options available for FCoE cabling:

1. Copper-based Twinax
2. Standard fiber optic cables

> A Twinax cable is composed of two pairs of copper cables covered with a shielded casing. A Twinax cable can transmit data at a speed of 10 Gbps over shorter distances, typically up to 10 meters. Twinax cables require less power and are less expensive than fiber optic cables.
> The Small Form Factor Pluggable Plus (SFP+) connector is the primary connector used for FCoE links and can be used with both Twinax and fiber optic cables.
FCoE Switches

> An FCoE switch has both Ethernet switch and Fibre Channel switch functionalities.
> As shown in the figure below, an FCoE switch consists of:

1. Fibre Channel Forwarder (FCF)
2. Ethernet Bridge
3. A set of Ethernet ports
4. Optional FC ports

> The function of the FCF is to encapsulate FC frames received from the FC ports into FCoE frames, and to de-encapsulate FCoE frames received from the Ethernet Bridge back into FC frames.

> Upon receiving incoming traffic, the FCoE switch inspects the Ethertype (used to indicate which protocol is encapsulated in the payload of an Ethernet frame) of the incoming frames and uses it to determine the destination.
> If the Ethertype of the frame is FCoE, the switch recognizes that the frame contains an FC payload and forwards it to the FCF. There, the FC frame is extracted from the FCoE frame and transmitted to the FC SAN over the FC ports.
> If the Ethertype is not FCoE, the switch handles the traffic as usual Ethernet traffic and forwards it over the Ethernet ports.


[Figure: FCoE switch generic architecture. FC ports connect to the Fibre Channel Forwarder (FCF), which sits above the Ethernet Bridge and its Ethernet ports.]
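A minimal sketch of that Ethertype-based dispatch, again assuming the standard FCoE Ethertype 0x8906 (the port destinations and the de-encapsulation are simplified stand-ins):

import struct

FCOE_ETHERTYPE = 0x8906

def dispatch(eth_frame: bytes) -> str:
    """Decide where an incoming frame goes, mirroring the switch logic above."""
    ethertype = struct.unpack("!H", eth_frame[12:14])[0]
    if ethertype == FCOE_ETHERTYPE:
        fc_frame = eth_frame[14:]  # de-encapsulate (simplified: strip Ethernet header only)
        return f"FCF -> FC port ({len(fc_frame)} FC bytes)"
    return "Ethernet Bridge -> Ethernet port"

# Hypothetical usage, reusing the frame shape from the CNA sketch above:
print(dispatch(b"\xbb" * 6 + b"\xaa" * 6 + struct.pack("!H", 0x8906) + b"\x00" * 36))
print(dispatch(b"\xbb" * 6 + b"\xaa" * 6 + struct.pack("!H", 0x0800) + b"payload"))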


RAID Levels
> RAID level selection is determined by the following factors:

Application performance
Data availability requirements
Cost

> RAID levels are defined on the basis of:

Striping
Mirroring
Parity techniques

> Some RAID levels use a single technique, whereas others use a combination of techniques.
> The table below shows the commonly used RAID levels.
Table: RAID Levels

LEVELS  BRIEF DESCRIPTION
RAID 0  Striped set with no fault tolerance
RAID 1  Disk mirroring
Nested  Combinations of RAID levels. Example: RAID 1 + RAID 0
RAID 3  Striped set with parallel access and a dedicated parity disk
RAID 4  Striped set with independent disk access and a dedicated parity disk
RAID 5  Striped set with independent disk access and distributed parity
RAID 6  Striped set with independent disk access and dual distributed parity
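As a sketch of the parity technique several of these levels rely on (byte-wise XOR parity is the standard construction; the four data strips here are just an illustration):

from functools import reduce

def xor_parity(strips: list[bytes]) -> bytes:
    """Compute the parity strip as the byte-wise XOR of the data strips."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*strips))

data = [b"\x01\x02", b"\x10\x20", b"\x0f\x0f", b"\x55\xaa"]  # four data strips
parity = xor_parity(data)

# If one data strip is lost, XOR-ing the survivors with the parity rebuilds it.
rebuilt = xor_parity([data[0], data[2], data[3], parity])
assert rebuilt == data[1]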

RAID 0

> A RAID 0 configuration uses data striping techniques, where data is striped across all the disks within a RAID set. It therefore utilizes the full storage capacity of the RAID set.
> To read data, all the strips are put back together by the controller.
> The figure below shows RAID 0 in an array in which data is striped across five disks.


[Figure: RAID 0. Data from the host is striped by the RAID controller across five disks.]
> When the number of drives in the RAID set increases, performance improves because more data can be read or written simultaneously.
> RAID 0 is a good option for applications that need high I/O throughput.
> However, if these applications require high availability during drive failures, RAID 0 does not provide data protection and availability.
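A minimal sketch of how a controller might map a logical offset to a disk in RAID 0 (the 64 KiB strip size is an illustrative assumption; the five disks match the figure above):

STRIP_SIZE = 64 * 1024   # strip size (illustrative)
NUM_DISKS = 5            # matches the five-disk figure above

def locate(logical_offset: int) -> tuple[int, int]:
    """Map a logical byte offset to (disk index, offset within that disk)."""
    strip = logical_offset // STRIP_SIZE
    disk = strip % NUM_DISKS                      # strips rotate across disks
    disk_offset = (strip // NUM_DISKS) * STRIP_SIZE + logical_offset % STRIP_SIZE
    return disk, disk_offset

print(locate(0))              # (0, 0): first strip lands on disk 0
print(locate(5 * 64 * 1024))  # wraps back to disk 0, one stripe down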
RAID 1

> RAID 1 is based on the mirroring technique.
> In this RAID configuration, data is mirrored to provide fault tolerance (see figure below).
> A RAID 1 set consists of two disk drives, and every write is written to both disks.
> The mirroring is transparent to the host.
> During disk failure, the impact on data recovery in RAID 1 is the least among all RAID implementations. This is because the RAID controller uses the mirror drive for data recovery.
> RAID 1 is suitable for applications that require high availability and where cost is not a constraint.


[Figure: RAID 1. Data from the host is written by the RAID controller to both disks of a mirror set.]
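A minimal sketch of the mirrored write and failover read just described (the in-memory dictionaries are stand-ins for real disks):

# Two "disks" of a mirror set, modeled as block maps (illustrative).
mirror_set = [dict(), dict()]

def write(block: int, data: bytes) -> None:
    """Every write goes to both disks; mirroring is invisible to the caller."""
    for disk in mirror_set:
        disk[block] = data

def read(block: int, failed_disk: int | None = None) -> bytes:
    """On disk failure, the controller simply reads from the surviving mirror."""
    for i, disk in enumerate(mirror_set):
        if i != failed_disk:
            return disk[block]
    raise IOError("both mirrors failed")

write(0, b"payload")
assert read(0, failed_disk=0) == b"payload"  # recovery via the mirror drive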
Benefits of NAS
NAS offers the following benefits:

Comprehensive access to information: Enables efficient file sharing and supports many-to-one and one-to-many configurations. The many-to-one configuration enables a NAS device to serve many clients simultaneously. The one-to-many configuration enables one client to connect with many NAS devices simultaneously.
Improved efficiency: NAS delivers better performance compared to a general-purpose file server because NAS uses an operating system specialized for file serving.
Improved flexibility: Compatible with clients on both UNIX and Windows platforms using industry-standard protocols. NAS is flexible and can serve requests from different types of clients from the same source.
Centralized storage: Centralizes data storage to minimize data duplication on client workstations and ensure greater data protection.
Simplified management: Provides a centralized console that makes it possible to manage file systems efficiently.
Scalability: Scales well with different utilization profiles and types of business applications because of its high-performance and low-latency design.
High availability: Offers efficient replication and recovery options, enabling high data availability. NAS uses redundant components that provide maximum connectivity options. A NAS device supports clustering technology for failover.
Security: Ensures security, user authentication, and file locking with industry-standard security schemas.
Low cost: NAS uses commonly available and inexpensive Ethernet components.
Ease of deployment: Configuration at the client is minimal, because the clients have the required NAS connection software built in.
Types of Intelligent Storage Systems
> An intelligent storage system is divided into the following two categories:
1. High-end storage systems
2. Midrange storage systems
> High-end storage systems have been implemented with an active-active configuration, whereas midrange storage systems have been implemented with an active-passive configuration.
> The distinctions between these two implementations are becoming increasingly insignificant.
High-end Storage Systems
> High-end storage systems, referred to as active-active arrays, are generally aimed at large enterprises for centralizing corporate data. These arrays are designed with a large number of controllers and cache memory.
> An active-active array implies that the host can perform I/Os to its LUNs across any of the available paths (see figure below).


[Figure: Active-active configuration. Both paths from the host to the LUN on the storage array are active.]

Advantages of high-end storage:

> Large storage capacity
> Large amounts of cache to service host I/Os optimally
> Fault tolerance architecture to improve data availability
> Connectivity to mainframe computers and open systems hosts
> Availability of multiple front-end ports and interface protocols to serve a large number of hosts
> Availability of multiple back-end Fibre Channel or SCSI RAID controllers to manage disk processing
> Scalability to support increased connectivity, performance, and storage capacity requirements
> Ability to handle large amounts of concurrent I/Os from a number of servers and applications
> Support for array-based local and remote replication
Midrange Storage System
> Midrange storage systems are also referred to as Active-Passive Arrays and they are best
suited for small- and medium-sized enterprises.
They also provide optimal storage solutions at a lower cost.
> In an active-passive array, a host can perform /Os to a LUN only through the paths to the
owning controller of that LUN. These paths are called Active Paths. The other paths are
passive with respect to this LUN,



[Figure: Active-passive configuration. The path from the host to controller A, the owner of the LUN, is active; the path to controller B is passive.]

> As shown in the figure, the host can perform reads or writes to the LUN only through the path to controller A, as controller A is the owner of that LUN.
> The path to controller B remains passive, and no I/O activity is performed through this path.
> Midrange storage systems are typically designed with two controllers, each of which contains host interfaces, cache, RAID controllers, and disk drive interfaces.
> Midrange arrays are designed to meet the requirements of small and medium enterprise applications; therefore, they host less storage capacity and cache than high-end storage arrays.
> There are also fewer front-end ports for connection to hosts.
> But they ensure high redundancy and high performance for applications with predictable workloads.
> They also support array-based local and remote replication.
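A minimal sketch of that active-passive path selection (the controller names and the LUN-ownership map are illustrative assumptions; real arrays negotiate this via SCSI ALUA or vendor multipathing software):

# LUN ownership map: each LUN is owned by exactly one controller (illustrative).
lun_owner = {"LUN0": "A", "LUN1": "B"}

# Paths from the host, keyed by the controller they lead to.
paths = {"A": "path-to-controller-A", "B": "path-to-controller-B"}

def select_path(lun: str) -> str:
    """I/O to a LUN may use only the path to its owning controller (the
    active path); paths to the other controller are passive for this LUN."""
    return paths[lun_owner[lun]]

print(select_path("LUN0"))  # path-to-controller-A: active for LUN0
print(select_path("LUN1"))  # path-to-controller-B: active for LUN1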
RAID Comparison

Table: Comparison of common RAID levels

RAID 0: Minimum 2 disks; storage efficiency 100%; very good read performance for both random and sequential I/O; very good write performance; no write penalty; no data protection.
RAID 1: Minimum 2 disks; storage efficiency 50%; reads better than a single disk; writes slower than a single disk, because every write must be committed to both disks; moderate write penalty; protection by mirroring.
RAID 3: Minimum 3 disks; storage efficiency (n-1)*100/n %; fair random reads and good sequential reads; poor to fair for small random writes, fair for large sequential writes; high write penalty; protection by parity.
RAID 5: Minimum 3 disks; storage efficiency (n-1)*100/n %; very good random reads and good sequential reads; fair random and sequential writes; high write penalty; protection by parity.
RAID 6: Minimum 4 disks; storage efficiency (n-2)*100/n %; very good random reads and good sequential reads; poor to fair write performance; very high write penalty; parity protection that can withstand two disk failures.