
SAN Foundations

EMC CLARiiON Administration


Course Objectives

Upon completion of this program you will be able to:


 Differentiate between SAN, NAS, and DAS
 Understand and describe SAN components
 Administer a CLARiiON array as an L1+ resource

2
SAN Foundations
Network

 A network is a defined area in which hosts and clients communicate with each other.

 Components of a Network.


4
Local Area Network

Diagram: a testing server, an Exchange server, and several Windows clients connected through a switch / hub / router.

5
RAID0 - Striped set without parity or Striping.
RAID1 - Mirrored set without parity or Mirroring.
RAID3 - Striped set with dedicated byte-level parity.
RAID4 - Striped set with dedicated block-level parity.
RAID5 - Striped set with distributed parity or interleave parity.
RAID6 - Striped set with dual distributed parity.
Nested (hybrid) RAID

 RAID 0+1: striped sets in a mirrored set (minimum four disks; even number of
disks) provides fault tolerance and improved performance but increases complexity.
The key difference from RAID 1+0 is that RAID 0+1 creates a second striped set to
mirror a primary striped set. The array continues to operate with one or more drives
failed in the same mirror set, but if drives fail on both sides of the mirror the data on
the RAID system is lost.
 RAID 1+0: mirrored sets in a striped set (minimum two disks but more commonly
four disks to take advantage of speed benefits; even number of disks) provides fault
tolerance and improved performance but increases complexity.

 The key difference from RAID 0+1 is that RAID 1+0 creates a
striped set from a series of mirrored drives. In a failed disk
situation, RAID 1+0 performs better because all the remaining
disks continue to be used. The array can sustain multiple drive
losses so long as no mirror loses all its drives.
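
As a quick illustrative comparison (the drive count and size here are assumptions, not from the slides): with eight 300 GB drives, RAID 0+1 and RAID 1+0 both provide 8 x 300 GB / 2 = 1.2 TB of usable capacity, so the difference is purely in failure behaviour. In RAID 1+0 (four mirrored pairs striped together), after one drive fails, data is lost only if its mirror partner also fails, i.e. 1 of the 7 remaining drives. In RAID 0+1 (two four-drive stripe sets mirrored), the first failure takes its whole four-drive stripe set offline, so a second failure on any of the 4 drives of the surviving stripe set loses data.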
Storage Area Network

 A Storage Area Network (SAN) is a network of hosts and storage arrays, most often connected over Fibre Channel fabrics.

 Components of SAN.

1. Host
2. FC Cables
3. HBA – Host Bus Adapter
4. FC Switch
5. Storage Array
6. FCP – Fibre Channel Protocol

10
Fibre Channel

Fibre Channel is a gigabit-speed network technology primarily


used for Storage Networking.

Fibre Channel Protocol (FCP) is the transport protocol that carries SCSI commands over Fibre Channel.

A fabric is formed when a host is connected to a storage array through at least one FC switch.

11
SAN Fabric

Diagram: a source (FC initiator – the host's HBA) connected through an FC switch to a target (FC responder – CLARiiON SP ports or HP EVA controller ports).
12
SAN

Diagram: hosts and a storage array connected through a switch / hub / router.

13
Connectors

 The Lucent Connector (LC) has an RJ-style latch in a connector body and is half the size of the conventional SC.

 The Standard Connector (SC) is a push-pull connector.

14
HBA (Host Bus Adapter)

 An HBA refers to a Fibre Channel interface card.

 An HBA is an I/O adapter that sits between the


host computer's bus and the Fibre Channel loop,
and manages the transfer of information
between the two channels.
 Fail-over
 Load balancing
 Major HBA manufacturers are Emulex, QLogic,
JNI, LSI, and ATTO Technology.

15
Fibre Channel

Fibre Channel is a serial data transfer interface intended for


connecting high-speed storage devices to computers

16
World Wide Name

 A World Wide Name (WWN) is a unique 64-bit address used to identify elements of a SAN.

 A WWN is assigned to a Host Bus Adapter or switch port by the vendor at the time of manufacture.

A WWN is to an HBA what a MAC address is to a NIC.

 Every Fibre Channel manufacturer registers itself with SNIA (a storage governing body, similar in role to the IEEE).

17
World Wide Name

10:00:00:00:C9:20:CD:40
Example: Emulex HBA’s World Wide Name

20:00:00:20:37:E2:88:BE
Example: QLogic HBA’s World Wide Name

50:06:01:60:00:60:01:B2
Example: EMC CLARiiON SP port’s World Wide Name

50:06:0B:00:00:C2:62:02
Example: HP EVA controller port’s World Wide Name

The leading digit identifies the IEEE Network Address Authority format (1 and 2 for the IEEE formats, 5 for IEEE Registered), and the embedded 24-bit OUI identifies the vendor – 00:00:C9 in the Emulex example above.

18
Switches

 A Fibre Channel switch is a device that routes data between


host bus adapters and Fibre Channel ports on storage systems

Cisco MDS 9020


Brocade 4900

19
Switch Ports

The following types of ports are defined by Fibre Channel:

 E_port is the connection between two fibre channel switches. Also


known as an Expansion port. When E_ports between two switches
form a link, that link is referred to as an InterSwitch Link or ISL.
 F_port is a fabric connection in a switched fabric topology. Also known
as Fabric port. An F_port is not loop capable.
 N_port is the node connection pertaining to hosts or storage devices in
a Point-to-Point or switched fabric topology. Also known as Node port
 TE_port is a term used for multiple E_ports trunked together to create
high bandwidth between switches. Also known as Trunking Expansion
port.

20
Directors

 Directors are considered to be more highly available than switches

Cisco MDS 9509 Brocade 48000 Director

21
Fibre Channel SAN Switches and Directors

Switches:
 Redundant fans and power supplies
 High availability through redundant deployment
 Departmental and data-center deployment
 Lower number of ports
 High performance
 Web-based management features

Directors:
 “Redundant everything” provides optimal serviceability and highest availability
 Data-center deployment
 Maximum scalability
 Large fabrics
 Highest port count
 Highest performance
 Web-based and/or console-based management features
22
Storage

Storage can be internal or external:

 Internal storage — Internal storage consists of disks located within the host server, managed by a basic RAID controller. The disks themselves are, in most cases, the same as those used in external storage shelves, using SCSI and Fibre Channel technologies.

23
External Storage Array

External storage connects to a physically separate storage cabinet or shelf. The interface is through an HBA located in the host server, normally using a Fibre Channel or SCSI interface. (Examples pictured: EMC CLARiiON, HP EVA.)

24
Physical and Logical Topologies

 The Fibre Channel environment consists of physical


topology and logical topology

Diagram: a Windows server and an Exchange server connected to storage through a Fibre Channel switch; the cabling between each device and the switch forms the physical topology, while the end-to-end server-to-storage connections form the logical topology.

25
Physical Topology

 SANs are scalable from two to 14 million ports in one system, with multiple topology choices such
as:
 Point-to-point — A dedicated and direct connection exists between two SAN devices.
 Arbitrated loop — SAN devices are connected in the form of a ring.
 Switched fabric — SAN devices are connected using a fabric switch. This enables a SAN device to
connect and communicate with multiple SAN devices simultaneously.

26
Zoning

 Partitions a Fibre Channel switched Fabric into subsets of


logical devices
 Zones contain a set of members that are permitted to
access each other
 A member can be identified by its Source ID (SID), its
World Wide Name (WWN), or a combination of both

27
WWN Zoning

Diagram: a host HBA (WWPN 10:00:00:00:C9:20:DC:40) attaches to an FC switch with WWPN 10:00:00:60:69:40:8E:41 (Domain ID 21, Port 1); a second FC switch in the same fabric, WWPN 10:00:00:60:69:40:DD:A1 (Domain ID 25, Port 3), connects the storage port (WWPN 50:06:04:82:E8:91:2B:9E).

WWN Zone 1 = 10:00:00:00:C9:20:DC:40; 50:06:04:82:E8:91:2B:9E

28
Port Zoning

Diagram: the same fabric as on the previous slide – the host HBA attaches to the switch with Domain ID 21 on Port 1, and the storage port attaches to the switch with Domain ID 25 on Port 3.

Port Zone 1 = 21,1; 25,3

29
RAID

 RAID 0 – Data striping


 No parity protection, least-expensive storage
 Applications using read-only data that require quick access, such as
data downloading
 RAID 1 – Mirroring between two disks
 Excellent availability, but expensive storage
 Transaction, logging or record keeping applications
 RAID 1/0 – Data striping with mirroring
 Excellent availability, but expensive storage
 Provides the best balance of performance and availability
 RAID 3 – Data striping with dedicated parity disk
 RAID 5 – Data striping/parity spread across all drives
 Very good availability and inexpensive storage
 Support mixed types of RAID in the same chassis

30
LUN

 A logical unit (LUN) is a grouping of one or more disks into one


span of disk storage space. A LUN looks like an individual disk to
the server’s OS. It has a RAID type and properties that define it.

5 disk RAID-5 group 4 disk RAID-1/0 group

31
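
A quick worked example of usable capacity (drive size assumed for illustration only): a 5-disk RAID-5 group of 146 GB drives provides roughly (5 − 1) x 146 GB ≈ 584 GB of usable space, since one drive's worth of capacity holds parity, while a 4-disk RAID-1/0 group of the same drives provides 4 x 146 GB / 2 = 292 GB. LUNs bound on those RAID groups carve their capacity out of these totals.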
SAN

Diagram: hosts and a storage array connected through a switch / hub / router.

32
SAN Vendors

Note: Inrange was acquired by CNT, which was in turn acquired by McData

(source: www.byteandswitch.com)
33
Data Storage Solutions

 Direct-attached storage (DAS)


 Network-attached storage (NAS)
 Storage Area Network (SAN)

34
DAS – Direct Attach Storage

DAS is storage connected to a server. The storage itself can be external to the server
connected by a cable to a controller with an external port, or the storage can be
internal to the server. Some internal storage devices add high-availability features such as redundant components.

 The DAS configuration starts with a server.


 An HBA is installed in the server, so the server can communicate
with the external storage.
 A storage disk drive is installed into a storage subsystem.
 The server and storage are connected with cables.

35
NAS – Network Attached Storage

 NAS is storage that resides on the LAN behind the servers.


NAS storage devices require special storage cabinets providing specialized file access, security, and network connectivity.
 Requires network connectivity.
 Requires a network interface card (NIC) on the server to access the storage.
 Provides client access at the file level using network protocols.
 Does not require the server to have a SCSI HBA and cable for storage access.
 Supports FAT, NTFS, and NFS file systems.

36
SAN – Storage Area Network

SAN is a high-speed network with heterogeneous (mixed vendor or platforms) servers


accessing a common or shared pool of heterogeneous storage devices. SAN
environments provide any-to-any communication between servers and storage resources,
including multiple paths.

37
SAN Benefits

SAN benefits provide high return on investment (ROI) and reduce the total cost
of ownership (TCO) by increasing performance, manageability, and scalability.

Some key benefits of SANs are:


 Reduced data center rack and floor space — Because you do not need to buy big
servers with room for many disks, you can buy fewer, smaller servers, which takes
less room in the data center.
 Disaster recovery capabilities — SAN devices can mirror the data on the disk to
another location.
 Increased I/O performance — SANs operate faster than internal drives or devices
attached to a LAN.
 Modular Scalability — enabling changes to the infrastructure as business needs
evolve
 Consolidated Storage — reduces the cost of storage management, and better
utilization of available resources.

38
Storage Solution Comparison Table

Category | DAS | NAS | SAN
Applications | Any | File serving | Storage for application servers
Server and operating systems | General purpose | Optimized | General purpose
Storage devices | Internal or external, direct-attached | External, dedicated | External, shared
Management | Labor intensive | Centralized | Centralized
Data centers | Workgroup or departmental | Workgroup or departmental | Small workgroup to enterprise data centers
Performance | Network traffic | Increased network performance | Higher bandwidth
Distance | None | Limited distances | Greater distances
Speed | Bottlenecks | Improved bottlenecks | Greater speeds
High availability | Limited | Limited | No-single-point-of-failure storage and data path protection
Cost | Low cost | Affordable | Higher cost, but greater benefits

39
Course Summary

Key points covered in this course:


 SAN
 Fibre Channel Layer
 World Wide Name
 SAN Topologies
 Zoning

40
Course Summary

Key points covered in this course:


 SAN Topologies
 FC-AL
 FC-SW

 Zoning
 Single Initiator
 Port
 WWN

41
CLARiiON
CLARiiON Foundations

CLARiiON RANGE

43
CLARiiON Foundations

44
CLARiiON Timeline

Pre-1997: SCSI CLARiiONs
1997: FC5500
1998: FC5700
1999: FC5300
2000: FC4500
2001: FC4700
2002: CX200, CX400, CX600
2003: CX300, CX500, CX700
2005: CX300i, CX500i
2006: CX3-20, CX3-40, CX3-80

45
High-End Storage: The New Definition

High-End Then:
 Simple redundancy
 Automated fail-over
 Benchmark performance (IOPs and MB/s)
 Single and/or simple workloads
 Basic local and remote data replication
– Disaster recovery
 Scalability
– Capacity
 Manage the storage system
– Easy configuration, simple operation, minimal tuning

High-End Today:
 Non-disruptive everything
– Upgrades, operation, and service
 Predictable performance… unpredictable world
 Complex, dynamic workloads
 Replicate any amount, any time, anywhere
– Replicate any amount of data, across any distance, without impact to service levels
 Flexibility
– Capacity, performance, multi-protocol connectivity, workloads, etc.
 Manage service levels
 Centralized management of the storage environment

46
Flexible, High Availability Design

 Fully redundant architecture
– Power, cooling, data paths, SPS
– No single points of failure, modular architecture
 Non-stop operation
– Online software upgrades
– Online hardware changes
– Continuous diagnostics
 Data and system integrity
– CLARalert Phone Home
– Dual I/O paths with non-disruptive failover
– Leader in data integrity
– Mirrored write cache
– SNiiFF Verify
– Background Verify per RAID Group
 SnapView and MirrorView replication software
 SAN Copy
 Fibre Channel and ATA RAID
– From 5 to 240 disks
 Flexibility
– Individual disk, RAID levels 0, 1, 1/0, 3, 5
– Mix drive types, mix RAID levels
– Up to 16 GB of memory (8 GB per Storage Processor)
– Configurable read and write cache size
47
CLARiiON CX Series

 Sixth generation, full Fibre Channel networked


storage running FLARE Operating Environment
 Flexible connectivity and bandwidth
 Up to 8 FC-AL or FC-SW host ports
 1 Gb/2 Gb Fibre Channel host connections

 Scalable processing power


 Dual or Quad-processors supporting advanced storage-based
functionality

 Industry-leading performance and availability


 1 GB, 2 GB, 4 GB, 8 GB or 16 GB memory and dual or quad
redundant 2 Gb back-end storage connections

 Cross-generational software support


 Non-disruptive hardware replacement and software
upgrades

48
CLARiiON Foundations

CLARiiON COMPONENTS

49
Modular Building Blocks
 Disk Array Enclosure (DAE)
 CX family uses DAE2 with up to (15) 2Gb
FC Drives
 FC family uses DAE with up to (10) 1Gb
FC drives
 DAE2-ATA contains up to 15 ATA drives
(Advanced Technology Attachment)

 Disk Array Enclosure ( DAE2P)


 CX family only on 300, 500, and 700
series
 Replacement for DAE2 with code upgrade
 Houses up to 15 2 Gb FC drives

 Disk Array Enclosure ( DAE3P)


 CX 3 series
 Replacement for DAE2 with code upgrade
 Houses up to 15 2 Gb or 4 Gb FC drives

 Disk Processor Enclosure (DPE)


 Some CX series use DPE2 that contains
Storage Processors and up to (15) 2Gb FC
Drives
 FC family uses DPE that contains Storage
Processors and up to (10) 1Gb FC Drives

 Storage Processor Enclosure (SPE)


 Contains two dual CPU Storage
Processors and no drives

 Standby Power Supply (SPS)


 Provides battery backup protection
50
CLARiiON ATA (Advanced Technology Attachment)

 Lower $/MB for backup or bulk storage
 Alternative to Fibre Channel HDAs
 Same software capability as FC
 Uses FC interconnect
 Mix FC and ATA enclosures
 First shelf must be Fibre Channel
 Full HA features
– Dual-ported access
– Redundant power and LCCs
– Hot swap capability

51
CX600 Architecture

Diagram: two Storage Processors linked by the CLARiiON Messaging Interface (CMI), with a 2 Gb Fibre Channel front end toward the hosts and a 2 Gb Fibre Channel back end to the disk enclosures through redundant LCCs (Link Control Cards).

 Storage Processor based architecture
 Modular design
52
CX3 Architecture

Diagram: two UltraScale Storage Processors, each with dual CPUs, mirrored cache, and four Fibre Channel front-end ports at 1/2/4 Gb/s, linked by the CLARiiON Messaging Interface (CMI) over a multi-lane PCI-Express bridge link; redundant power supplies, fans, and SPS units; 2/4 Gb/s Fibre Channel back-end loops through 4 Gb LCCs.

Up to 480 drives maximum per storage system (CX3-80)

53


Storage Processor Introduction

 Storage Processors are configured in pairs for maximum availability
 One or two processors per Storage Processor board
 Two or four Fibre Channel front-end ports for host connectivity
– 1 Gb, 2 Gb, or 4 Gb
– Arbitrated loop or switched fabric
 Dual-ported Fibre Channel disk drives at the back end
– Two or four arbitrated loop connections
 Maximum of 8 GB of memory per SP
 Write cache is mirrored between Storage Processors for availability using the CMI (CLARiiON Messaging Interface)
 Write caching accelerates host writes
 Ethernet connection for management

Diagram: a Storage Processor with four Fibre Channel ports, dual CPUs, mirrored cache, the CMI link, and FC-AL back-end connections through the LCCs.
54
Persistent Storage Manager (PSM)

 PSM is a hidden LUN that records configuration information


 Both SPs access a single PSM so environmental records are
in sync
 If one SP receives new configuration info, that info is written
to the PSM and the other SP instantaneously updates itself

 If one SP needs to be replaced, the new one can easily find


the unique environmental information on the PSM
 Enables SPs to be completely field-replaceable

55
Data Units on a Disk
Diagram: 512 bytes of user data per sector; 128 sectors (e.0–e.127) make up a stripe element, and data elements plus a parity element (s.0–s.5 in the example) are striped across the disks of a RAID group.

 Sector
– 520 bytes
– 512 bytes of user data
– 8 bytes of administrative data

56
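
A small arithmetic consequence of the 520-byte sector format (not from the original slide): 8 of every 520 bytes are administrative data, an overhead of 8 / 520 ≈ 1.5 %, so only about 98.5 % of each drive's raw sector capacity is available for user data before any RAID overhead is applied.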
CLARiiON Foundations

DATA AVAILABILITY
DATA PROTECTION

57
Mirrored Write Caching

 Write cache size is user configurable and is allocated in pages
 How much write cache is used by each SP is dynamically adjusted based on workload
 All write requests to a given SP are copied to the other SP
 Data integrity ensured through hardware failure events
 CLARiiON Messaging Interface (CMI) used to communicate between SPs

Diagram: SP-A and SP-B each hold their own write cache, a mirror of the peer SP's write cache, and a read cache; the two write caches are kept in sync over the CMI.

58
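
A minimal CLI sketch for checking the cache configuration described above (illustrative; <SP_IP> is a placeholder for a Storage Processor management address, and output details vary by FLARE release):

navicli -h <SP_IP> getcache : reports read/write cache state, sizes, and page size for both SPs

Cache sizes and enable/disable settings are changed with the corresponding setcache command or through Navisphere Manager.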
Advanced Availability: LUN Ownership
Model
 Only one Storage Processor “owns” a LUN at any point in time
 Assigned when LUN is created but can also be changed using Navisphere
Manager or CLI

 If the Storage Processor, Host Bus Adapter, cable, or any component


in the I/O path fails, ownership of the LUN can be moved to the
surviving SP
 Process is called LUN Trespassing
 CLARiiON originally used host based ATF (Application Transparent Failover)
to automate path-failover. Today, EMC PowerPath provides this function

 For maximum availability, careful design of I/O path for no Single-


Point-Of-Failure is required

59
Write Cache Protected by “Vault”

 The “vault” is a reserved area found on specific protected disks


 At the first sign of an event which could potentially compromise
the integrity of the data in write cache, cache data is dumped to
the vault area
 After the data is dumped to the vault, it will be migrated to the
LUNs where it belongs
 When power is restored data is migrated from the vault back to
cache (if an SPS has a charged battery)

 Other failures such as a SP or Cache failure will disable write


cache

60
Host Connectivity Redundancy: PowerPath Failover Software

 Host-resident program for automatic detection and management of failed paths
 Host will typically be configured with multiple paths to a LUN
 If an HBA, cable, or switch fails, PowerPath will redirect I/O over a surviving path
 If a Storage Processor fails, PowerPath will “trespass” the LUN to the surviving Storage Processor and redirect I/O
 Dynamic load balancing across HBAs and fabrics – not across Storage Processors

Diagram: application I/O passes through PowerPath and down two HBAs, through two FC switches, to ports 0 and 1 on SP A and SP B.

61
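
A brief PowerPath CLI sketch (illustrative; run on the host, and output formats vary by PowerPath version):

powermt display dev=all : shows each CLARiiON LUN with its paths, owning SP, and path states
powermt restore : tests dead paths and returns recovered paths to service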
Course Summary

Key points covered in this course:


 The basic architecture of a CLARiiON Disk Array
 The architectures of the various CLARiiON models
 The data protection options available on the CLARiiON Storage Processor
 The relationship between CLARiiON physical disk drives and
LUNs
 High availability features of the CLARiiON and how this
potentially impacts data availability

62
CLARiiON Foundations

SOFTWARE AND MANAGEMENT ENVIRONMENT

63
FLARE Operating Environment

 FLARE Operating Environment is the “Base Software” that runs in the CLARiiON Storage Processor
– I/O handling, RAID algorithms
– End-to-end data protection
– Cache implementation
 Access Logix provides LUN masking that allows sharing of the storage system
 Navisphere middleware provides a common interface for managing the CLARiiON
 CLARiiON optional software including
– MirrorView, SnapView, SAN Copy
 EMC ControlCenter provides end-to-end management of a CLARiiON

Diagram: the software stack, from bottom to top – CLARiiON Hardware, FLARE Operating Environment, Navisphere, CLARiiON-based applications, EMC ControlCenter.

64
FLARE Versions

 Generation 1: CX200, CX400, CX600


 Generation 2: CX300, CX500, CX700 including the iSCSI flavors
 Generation 3: CX3-10, CX3-20, CX3-40, CX3-80
 Generation 4: CX4-120, CX4-240, CX4-480, CX4-960
 FLARE Code is broken down as follows:
 1.14.600.5.022 (32 Bit)
 2.16.700.5.031 (32 Bit)
 2.24.700.5.031 (32 Bit)
 3.26.020.5.011 (32 Bit)
 4.28.480.5.010 (64 Bit)
 The first digit (1, 2, 3, or 4) indicates the generation of machine this code level can be installed on. These numbers will always increase as new generations of CLARiiON machines are added.
 The next two digits are the release number, 28 being the latest FLARE code version.
 The next 3 digits are the model number of the CLARiiON, like the CX600, CX700, CX3-20, and CX4-480.
 The 5 here is unknown.
 The last 3 digits are the patch level of the FLARE environment.
 For example, 2.24.700.5.031 is release 24 of the generation-2 code for the CX700 at patch level 031.
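
To see which FLARE revision an array is actually running, one illustrative check (with <SP_IP> as a placeholder for a Storage Processor address) is:

navicli -h <SP_IP> getagent : reports agent and FLARE revision information for the targeted Storage Processor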
CLARiiON Management Options

 There are two CLARiiON management interfaces


 CLI (Command Line Interface)
 navicli commands can be entered from the command line and can
perform all management functions

 GUI (Graphical User Interface)


 Navisphere Manager is the graphical interface for all management
functions to the CLARiiON array

66
EMC Navisphere Management Software

 Centralized management for


CLARiiON storage throughout
the enterprise
 Centralized management
means a more effective staff
 Allows user to quickly adapt
to business changes
 Keeps business-critical
applications available
 Key features
 Java based interface has
familiar look and feel
 Multiple server support
 EMC ControlCenter integration
 Management framework integration
 Navisphere Software Suite
– Navisphere Manager
– Navisphere Analyzer
– Navisphere CLI

67
Navisphere Manager

 Discover
 Discovers all managed CLARiiON
systems
 Monitor
 Show status of storage systems,
Storage Processors, disks,
snapshots, remote mirrors, and
other components
 Centralized alerting
 Apply and provision
 Configure volumes and assign
storage to hosts
 Configure snapshots and remote
mirrors
 Set system parameters
 Customize views via Navisphere
Organizer
 Report
 Provide extensive performance
statistics via Navisphere
Analyzer

68
Storage Configuration and Provisioning
Step 0 – Planning: understanding application and server requirements and planning the configuration is critical!
Step 1 – Create RAID Groups: a RAID Group is a collection of physical disks; the RAID protection level is assigned to all disks within the RAID Group.
Step 2 – Bind LUNs: binding LUNs is the creation of Logical Units from space within a RAID Group.
Step 3 – Create Storage Groups: Storage Groups are collections of LUNs that a host or group of hosts have access to.
Step 4 – Add LUNs to Storage Groups.
Step 5 – Connect Hosts with Storage Groups.

(A NaviCLI sketch of these five steps follows below.)
69
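
The same five steps can also be driven from NaviCLI. The sketch below is illustrative only – the SP address, disk positions, LUN numbers, and names are made-up examples, and exact options vary between FLARE/Navisphere releases:

navicli -h <SP_IP> createrg 10 1_0_5 1_0_6 1_0_7 1_0_8 1_0_9 : Step 1 – create RAID Group 10 from five disks (bus_enclosure_disk)
navicli -h <SP_IP> bind r5 20 -rg 10 : Step 2 – bind LUN 20 as RAID 5 on RAID Group 10
navicli -h <SP_IP> storagegroup -create -gname Exchange_SG : Step 3 – create a Storage Group
navicli -h <SP_IP> storagegroup -addhlu -gname Exchange_SG -hlu 0 -alu 20 : Step 4 – present array LUN 20 to the group as host LUN 0
navicli -h <SP_IP> storagegroup -connecthost -host exch01 -gname Exchange_SG : Step 5 – connect a registered host to the Storage Group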
CLARiiON RAID Options

 Disk Step 0 - Planning


 No protection
 JBOD

 RAID-0: Stripe
 No protection
 Performance

 RAID-1: Mirroring
 Some performance gain by splitting read operations
 Protection against single disk failure
 Minimum performance hit during failure

70
CLARiiON RAID Options

 RAID-1/0: Striped Mirrors Step 0 - Planning


 Performance of stripes combined with split read operations
 Protection against single disk failure
 Minimum performance hit in failure mode

 RAID-3: Striped Elements – Sequential IOPS


 Each data element striped across disks - parity kept on the last disk in the RAID
group
 Extremely fast read access from the disk
 Used for streaming media
 Parity protection against single disk failure
 Performance penalty during failure

71
CLARiiON RAID Options

Step 0 - Planning
 RAID-5: Striping with Parity – Random IOPS
 Performance of striping
 Protection from single disk failure
 Parity distributed across member drives within the RAID Group
 Write performance penalty
 Performance impact if a disk fails in RAID Group
 Hot Spare
 Takes the place of failed disk within a RAID group
 Must have equal or greater capacity than the disk it replaces
 Can be located anywhere except on Vault disks
 When the failed disk is replaced, the hot spare restores the data
to the replacement disk and returns to the hot spare pool

72
Which RAID Level Is Right

Step 0 - Planning
 RAID 0 – Data striping
 No parity protection, least-expensive storage
 Applications using read-only data that require quick access, such as
data downloading
 RAID 1 – Mirroring between two disks
 Excellent availability, but expensive storage
 Transaction, logging or record keeping applications
 RAID 1/0 – Data striping with mirroring
 Excellent availability, but expensive storage
 Provides the best balance of performance and availability
 RAID 3 – Data striping with dedicated parity disk
 RAID 5 – Data striping/parity spread across all drives
 Very good availability and inexpensive storage
 Support mixed types of RAID in the same chassis

73
Creating RAID Groups

Step 1 – Create
 RAID protection levels are set
RAID Groups
through a RAID group
 Physical disks part of one RAID group only
 Drive types cannot be mixed in the RAID Group
 May include disks from any enclosure
 RAID types may be mixed in an array
 RAID groups may be expanded
 Users do not access RAID groups directly
5 disk RAID-5 group 4 disk RAID-1/0 group

74
Creating a RAID Group

Step 1 – Create
RAID Groups

75
Binding a LUN

Step 2 – Bind LUNs


 Binding is the process of building
LUNs onto RAID Groups
 May be up to 128 LUNs in a RAID Group
 May be up to 2048 LUNs per CLARiiON array
 LUNs are assigned to one SP at a time
 The SP owns the LUN
 The SP manages the RAID protection of the LUN
 The SP manages access to the LUN

 The LUN uses part of each disk in the RAID Group


 Same sectors on each disk

76
Bind Operation - Setting Parameters

Step 2 – Bind LUNs


 Fixed Bind parameters
 Disk numbers, RAID type, LUN #, element size
 Can’t be changed without unbinding and rebinding

 Variable parameters
 Cache enable, rebuild time, verify time, auto assignment
 Can change without unbinding

 Bind Operation
 Fastbind is the almost instantaneous bind achieved on a factory
system

77
Binding a LUN

Step 2 – Bind LUNs

78
LUN Properties - General

Step 2 – Bind LUNs

79
metaLUNs
 A metaLUN is created by combining LUNs
 Dynamically increase LUN capacity
 Can be done on-line while host I/O is in progress
 A LUN can be expanded to create a metaLUN and a metaLUN can be
further expanded by adding additional LUNs
 Striped or concatenated
 Data is restriped when a striped metaLUN is created
 Appears to host as a single LUN
 Added to storage group like any other LUN
 Can be used with MirrorView, SnapView, or SAN Copy
 Supported only on CX family with Navisphere 6.5+

metaLUN

+ + =

80
Storage Groups

 Storage Groups are a feature of Step 3 – Create


Storage Groups
Access Logix and used to
implement LUN Masking
 Storage Groups define the LUNs each host can access
 A Storage Group contains a subset of LUNs grouped for access by one or more
hosts and inaccessible to other hosts

 Without Storage Groups, all hosts can access all LUNs


 Storage groups can be viewed as a “bucket” that has dedicated and/or shared
LUNs accessible by a server or servers
 Access Logix controls which hosts have access to a storage group
 Hosts access the array and provide information through the Initiator
Registration Record process

 Storage Group planning is required

82
Creating a Storage Group

Step 3 – Create
Storage Groups

83
Storage Group Properties - LUNs

Step 4 – Add LUNs to


Storage Groups

84
Storage Group Properties - Hosts

Step 5 – Connect Hosts


to Storage Groups

85
LUN Migration

 Migration moves data from one LUN to another LUN (see the CLI sketch below)
– CX series CLARiiONs only
– Any RAID type to any RAID type, FC to ATA or ATA to FC
 Neither LUN may be a private LUN or Hot Spare
 Neither LUN may be binding, expanding, or migrating
 Either or both may be metaLUNs
 Destination LUN may not be in a Storage Group
 Destination LUN may not be part of SnapView or MirrorView operations
 Destination LUN may be larger than the Source LUN
 Data is copied from Source LUN to Destination LUN
– Source stays online and accepts I/O
 Destination assumes the identity of the Source when the copy completes
– LUN ID, WWN
– Storage Group membership
 Source LUN is unbound after the copy completes
 The migration process is non-disruptive
– There may be a performance impact
 LUN Migration may be cancelled at any point
– Storage system returns to its previous state
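
An illustrative NaviCLI sketch of starting and monitoring a migration (LUN numbers are examples, and exact options depend on the FLARE/Navisphere release):

navicli -h <SP_IP> migrate -start -source 20 -dest 35 -rate medium : start migrating LUN 20 to LUN 35 at medium priority
navicli -h <SP_IP> migrate -list : list migration sessions and their progress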
LUN Migration Limits
Course Summary

Key points covered in this course:


 The Operating Environment of a CLARiiON Disk Array
 Storage Configuration and Provisioning
 Storage Administration
 Troubleshooting and Diagnosing Problems

89
Brocade Zoning

90
Switch Zoning

Step 0 – Planning: understanding requirements and planning a resilient architecture is critical!
Step 1 – Create Alias, Add Member: an alias is a recognizable name for the WWN of a host or storage port.
Step 2 – Create Zone, Add Member: a zone maps initiator and target aliases.
Step 3 – Add Zones to Config: the config is the configuration file which holds the zoning information.
Step 4 – Re-Check and Enable Config.

91
Switch Zoning

zoneshow : displays the cfg, zone & alias info.


zonecreate : create zone
switchshow : displays the switch and port status, including fabric logins (FLOGIs)
alishow : displays alias
alicreate : create alias
configupload : upload the switch configuration to a host file.
switchdisable : disable switch
cfgclear : clear the current configuration
configdownload : download the switch configuration file from host
switchenable : enable switch
cfgdisable : disable cfg
cfgenable : enable cfg

92
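
Putting the steps and commands together, a worked example of zoning the host HBA and storage port from slides 28–29 on a Brocade switch (alias, zone, and config names are invented; syntax follows standard Fabric OS usage):

alicreate "HOST1_HBA0", "10:00:00:00:C9:20:DC:40"
alicreate "CX_SPA_PORT0", "50:06:04:82:E8:91:2B:9E"
zonecreate "z_HOST1_CX_SPA0", "HOST1_HBA0; CX_SPA_PORT0"
cfgcreate "PROD_CFG", "z_HOST1_CX_SPA0"
cfgsave
cfgenable "PROD_CFG"

cfgsave commits the defined configuration on the switch; cfgenable activates it on the fabric. To add a zone to an existing configuration, cfgadd is used instead of cfgcreate.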
BUSINESS CONTINUITY

93
Data Copy

94
SnapView

Optional array-based software

Creates an instant point-in-time copy of a LUN


 Copy and pointer-based design
 Utilizes “copy-on-first-write” technique
 Operates only on the array, no host cycles are expended
 8 Snapshot sessions per LUN

SnapView session resides on SP


 Session can contain multiple snapshots
 Can have multiple sessions on one source LUN

95
SnapView

Diagram: the production host has primary access to the source LUN; a snap of that LUN gives a second host access to the point-in-time snapshot.
96
SnapView

• Access Logix required
• Separate server for snapshot visibility
• Snap is at LUN level
• Make a snapshot with:
– Navisphere Manager GUI
– NaviCLI
– admsnap

Diagram: a backup host accesses the snapshot of the source LUN and writes it to a backup unit.

97
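
A rough sketch of the snapshot flow in CLI terms (illustrative only – the LUN number, session and snapshot names are invented, and the exact snapview/admsnap options differ between releases, so treat this as the shape of the procedure rather than exact syntax):

navicli -h <SP_IP> snapview -createsnapshot 20 -snapshotname snap_lun20 : create a snapshot of source LUN 20
navicli -h <SP_IP> snapview -startsession backup_session -lun 20 : start a copy-on-first-write session on the LUN
navicli -h <SP_IP> snapview -activatesnapshot backup_session -snapshotname snap_lun20 : tie the snapshot to the session
admsnap activate -s backup_session : run on the backup host to make the snapshot visible there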
SnapView

98
SnapView

99
Summary

SnapView
 Point-in-time view

 Reduces time that application data is unavailable to users

100
Disaster Recovery

101
MirrorView

 Optional array-based software

 Allows array-to-array mirroring


 Focused on disaster recovery

 MirrorView integration
 Off-site backup
 Application testing

102
MirrorView Configuration

 Two Fibre Channel connections


 required between associated storage systems

 MirrorView runs in synchronous mode


 This is to ensure the highest in data integrity
 Byte-for-byte image copy

 Total number of mirrors per array


 Supports 50 primary images

103
MirrorView Configuration

• MirrorView setup
–MirrorView software
–Secondary LUN must be the same size as primary LUN
–Can be different RAID type
• Navisphere
–Provides ease of management
–GUI and CLI interface supports all operations

104
Site A / Site B

Diagram: a production host at Site A with the primary array, and a standby host at Site B with the secondary array, joined by a synchronous, bi-directional mirror. Because mirroring is bi-directional, each array can hold production images that are mirrored to the other (Production A / Mirror A, Production B / Mirror B). Direct connections use LongWave GBICs for 300 m up to 60 km; extenders (DWDM such as Optera 5200 or ADVA FSP2000, and CNT UltraNet) or FC over IP extend the distance beyond 60 km. Check the EMC Support Matrix for supported extenders.
105
Navisphere View

106
Summary

MirrorView

 Focus on disaster recovery

 Direct or extended connection

107
Data Migration


108
SAN Copy

 Optional array-based software

 Allows array-to-array Data Migration


 Focused on Off Loading Host Traffic
 Content Distribution

109
Off-Load Traffic on the Host

SAN Copy
- storage-system based data-mover application
- uses the SAN to copy data between storage systems
- data migration takes place on the SAN
- host not involved in the copy process
- eliminates the need to move data to/from attached hosts
- reserves host processing resources for users and applications

110
Content Distribution

In today’s business environment, it is common for a company to have multiple data centers in different regions. Customers frequently need to distribute data from headquarters to regional offices and to collect data from local offices back to headquarters. Such applications are defined as content distribution and are supported by EMC SAN Copy. Web content distribution is also in this category; it involves distributing content to multiple servers on an internal or external website.

Diagram: two SANs at different sites linked by FC-over-IP extenders, over which SAN Copy distributes the data.

111
Types of data migration

 CLARiiON to CLARiiON
 Symmetrix to CLARiiON
 Internally within a CLARiiON
 Compaq StorageWorks to CLARiiON

There are four different migration types. The most likely scenario is CLARiiON to
CLARiiON, Symm to CLARiiON and internally within a CLARiiON

Internally – covered in SAN Copy Administrator’s guide.


Symm to CLARiiON – procedure for setup in release notes and manual
Compaq Storage Works – done only by PS – escalations go to Engineering.

Check the EMC Support Matrix or eLab Navigator for the latest supported
configurations
112
Simultaneous Sessions

Storage System Type | Maximum concurrent sessions per system | Maximum destination logical units per session
CX400 | 8 | 50
CX600 | 16 | 100
FC4700 | 16 | 100

See latest eLab Navigator or EMC Support Matrix for info regarding newer model arrays.

San Copy lets you have more than one session active at the same time. The number of
supported concurrent active sessions and the number of logical units per session depends on
the storage system type.

113
SAN Copy Features
 Concurrent Copy sessions
- allows multiple source LUNs to simultaneously transfer data to multiple destination LUNs.
 Queued Copy sessions
- queued sessions are sessions that have been created but are not active or paused.
 Create/Modify Copy Sessions
- management tools allow full control to create and modify sessions as seen fit.
 Multiple Destinations
- each source LUN may have multiple destinations
- up to 50 per session in a CX400 and 100 per session in the CX600 and FC4700.
- see eLab Navigator or EMC Support Matrix for newer model arrays.
 Pause/Resume/Abort
- control over an active session is in the hands of the administrator.
- possible to pause and later resume a session or abort a session before completion.
 Throttle
- resources used by SAN Copy sessions can be controlled through use of a throttle value.
 Checkpoint/Restart
- allows admin-defined time interval that lets SAN Copy resume an interrupted session from
the last checkpoint, rather than having to start the session over.

114
SAN Copy Operation

 SP becomes an initiator (looks like a host to SAN)


 Source is Read-only during copy
 Read from Source, Write to destination(s)
 Start n Reads from Source (n = # Buffers)
 When any Read completes, write to Destination
 When any Write completes, start another Read
For SAN Copy to operate the SP port must become an initiator and register with the non-SAN
Copy storage.

While a session is operational the source LUN is put into read-only mode. If this is
unacceptable, a SnapCopy, Clone or BCV in the case of Symmetrix can be created from the
source LUN and used as the source for the SAN Copy session.

Data is read from the source and written to the destinations. SAN Copy will initiate a number
of reads equal to the number of buffers allocated for the session. When any read to the buffer
is complete, SAN Copy will write the data to the target LUN.

When write is complete and buffer empty, SAN Copy will refill buffer with another read
115
SAN Copy Create Session Process Flow

 Make SAN Copy Connections


 Not needed for local copy
 Create Session
 Designate Source LUN(s)
 Designate Target LUN(s)
 Set Parameters
 Session Name
 Throttle Value

SAN Copy session can be set to copy data between (2) LUNs in a single array, between arrays
and between a CLARiiON array and a Symmetrix array. While there are many similarities
when setting up different sessions, there are also some differences. In the interest of clarity
each of these session types will be covered in full. The creation of a SAN Copy session
involves a number of steps.

If the source and destination lun(s) are located in different arrays, the source array must be
connected to the destination array(s) as an initiator. Source lun and destination lun are easily
selected. Destination must be at least as large as the source.

Each session requires unique name and priority of copy traffic can be set with throttle value.
116
Local SAN Copy

A local SAN Copy copies the data from one LUN in an array (the source LUN) to one or more LUNs in the same array (the target LUNs).

Because this transfer is entirely self-contained, connecting and verifying connections between this SAN Copy array and a remote array need not be performed.
117
Course Summary

Key points covered in this course:


The Layered Applications
 SnapView : Data Copy
 MirrorView : Disaster Recovery
 SAN Copy : Data Migration

118
Thank You

119
