SAN EMC CLARiiON Foundations
SAN Foundations
Network
Components of a Network
Local Area Network
[Diagram: a testing server and an Exchange server connected to a Local Area Network through a switch / hub / router]
RAID 0 – Striped set without parity (striping).
RAID 1 – Mirrored set without parity (mirroring).
RAID 3 – Striped set with dedicated parity (byte-level parity).
RAID 4 – Striped set with dedicated parity (block-level parity).
RAID 5 – Striped set with distributed (interleaved) parity.
RAID 6 – Striped set with dual distributed parity.
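To make the parity idea concrete: RAID 5 computes each stripe's parity as the bitwise XOR of the data blocks, so any single lost block can be rebuilt from the survivors. A small worked example with illustrative 4-bit values:

P  = D0 XOR D1 XOR D2 XOR D3 = 1010 XOR 0110 XOR 1100 XOR 0001 = 0001
D1 = P  XOR D0 XOR D2 XOR D3 = 0001 XOR 1010 XOR 1100 XOR 0001 = 0110   (rebuild after losing the disk holding D1)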
Nested (hybrid) RAID
RAID 0+1: striped sets in a mirrored set (minimum four disks; even number of
disks) provides fault tolerance and improved performance but increases complexity.
The key difference from RAID 1+0 is that RAID 0+1 creates a second striped set to
mirror the primary striped set. The array continues to operate with one or more
failed drives on the same side of the mirror, but if drives fail on both sides, the
data on the array is lost.
RAID 1+0: mirrored sets in a striped set (minimum two disks but more commonly
four disks to take advantage of speed benefits; even number of disks) provides fault
tolerance and improved performance but increases complexity.
The key difference from RAID 0+1 is that RAID 1+0 creates a
striped set from a series of mirrored drives. In a failed disk
situation, RAID 1+0 performs better because all the remaining
disks continue to be used. The array can sustain multiple drive
losses so long as no mirror loses all its drives.
Storage Area Network
Components of a SAN
1. Host
2. FC cables
3. HBA – Host Bus Adapter
4. FC switch
5. Storage array
6. FCP – Fibre Channel Protocol
Fibre Channel
SAN Fabric
FC Initiator: the source of I/O on the fabric, typically a host HBA.
FC Responder: the target, typically CLARiiON SP ports or HP EVA controller ports.
[Diagram: source (HBA) connected through an FC switch to the target ports]
SAN
Connectors
HBA (Host Bus Adapter)
Fibre Channel
World Wide Name
10:00:00:00:C9:20:CD:40
Example: Emulex HBA's World Wide Name
20:00:00:20:37:E2:88:BE
Example: QLogic HBA's World Wide Name
50:06:01:60:00:60:01:B2
Example: EMC CLARiiON SP port's World Wide Name
50:06:0B:00:00:C2:62:02
Example: HP EVA controller port's World Wide Name
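On a Linux host, for example, the WWPN that an HBA presents to the fabric can usually be read from sysfs (a hedged sketch: the fc_host number depends on the system):

# Show the WWPN of the first FC HBA
cat /sys/class/fc_host/host0/port_name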
Switches
Switch Ports
Directors
Fibre Channel SAN Switches and Directors
Switches: redundant fans and power supplies.
Directors: "redundant everything" provides deployment availability.
Storage
Internal storage — Internal storage consists of disks located within the host server,
attached to a basic RAID controller. The disks themselves are, in most cases, the same
as those used in external storage shelves, using SCSI and Fibre Channel technologies.
External Storage Array
External storage — External storage connects the host to a physically separate
storage cabinet or shelf, such as an EMC CLARiiON or HP EVA. The interface is
through an HBA located in the host server, normally using a Fibre Channel or
SCSI interface.
Physical and Logical Topologies
[Diagram: a Windows server and an Exchange server attached to storage through a Fibre Channel switch, shown as both a physical topology and a logical topology]
Physical Topology
SANs are scalable from two ports to roughly 14 million ports in one system, with multiple topology choices such as:
Point-to-point — A dedicated, direct connection exists between two SAN devices.
Arbitrated loop — SAN devices are connected in the form of a ring.
Switched fabric — SAN devices are connected using a fabric switch. This enables a SAN device to connect and communicate with multiple SAN devices simultaneously.
Zoning
WWN Zoning
[Diagram: a host HBA (WWPN 10:00:00:00:C9:20:DC:40) connects to an FC switch (Domain ID 21, port 1); across the fabric, a second FC switch (Domain ID 25, port 3) connects to a storage port (WWPN 50:06:04:82:E8:91:2B:9E). In WWN zoning the zone members are the device WWPNs, so the zone follows the devices regardless of which switch ports they plug into.]
Port Zoning
[Diagram: the same fabric as above. In port zoning the zone members are Domain ID/port pairs, here (21,1) for the host connection and (25,3) for the storage connection, so the zone is tied to physical switch ports rather than to device WWPNs.]
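As a sketch of how the two styles differ on a Brocade switch (zone and configuration names are illustrative; syntax follows Brocade Fabric OS conventions, so check your FOS release):

# WWN (soft) zoning: members are device WWPNs
zonecreate "exch_host_storage", "10:00:00:00:c9:20:dc:40; 50:06:04:82:e8:91:2b:9e"
# Port (hard) zoning: members are Domain,Port pairs
zonecreate "exch_host_storage_ports", "21,1; 25,3"
# Add the zones to a configuration and activate it
cfgcreate "prod_cfg", "exch_host_storage; exch_host_storage_ports"
cfgenable "prod_cfg"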
RAID
LUN
SAN
SAN Vendors
(source: www.byteandswitch.com)
Data Storage Solutions
DAS – Direct Attached Storage
DAS is storage connected directly to a server. The storage can be external to the
server, connected by a cable to a controller with an external port, or internal to the
server. Some internal storage devices add high-availability features such as
redundant components.
NAS – Network Attached Storage
SAN – Storage Area Network
SAN Benefits
SANs provide a high return on investment (ROI) and reduce total cost of
ownership (TCO) by increasing performance, manageability, and scalability.
Storage Solution Comparison Table

                               DAS                                NAS                          SAN
Server and operating systems   General purpose                    Optimized                    General purpose
Storage devices                Internal or external, dedicated    External, direct-attached    External, shared
Management                     Labor intensive                    Centralized                  Centralized
Course Summary
Zoning
- Single initiator
- Port
- WWN
CLARiiON
CLARiiON Foundations
CLARiiON RANGE
CLARiiON Timeline
Pre-1997: SCSI CLARiiONs
1997: FC5500
1998: FC5700
1999: FC5300
2000: FC4500
2001: FC4700
2002: CX200, CX400, CX600
2003: CX300, CX500, CX700
2005: CX300i, CX500i
2006: CX3-20, CX3-40, CX3-80
High-End Storage: The New Definition
Flexible, High Availability Design
Fully redundant architecture
- Power, cooling, data paths, SPS
- No single points of failure; modular architecture
Non-stop operation
- Online software upgrades
- Online hardware changes
- Continuous diagnostics
- Dual I/O paths with non-disruptive failover
Data and system integrity
- Leader in data integrity
- Mirrored write cache
- SNiiFF Verify
- Background Verify per RAID Group
- CLARalert Phone Home
Replication software
- SnapView and MirrorView replication software
- SAN Copy
Flexibility
- Fibre Channel and ATA RAID
- From 5 to 240 disks
- Individual disk
- RAID levels 0, 1, 1/0, 3, 5
- Mix drive types; mix RAID levels
- Up to 16 GB of memory (8 GB per Storage Processor)
- Configurable read and write cache size
CLARiiON CX Series
CLARiiON Foundations
CLARiiON COMPONENTS
Modular Building Blocks
Disk Array Enclosure (DAE)
- CX family uses the DAE2, with up to fifteen 2 Gb FC drives
- FC family uses the DAE, with up to ten 1 Gb FC drives
- DAE2-ATA contains up to fifteen ATA (Advanced Technology Attachment) drives
CX600 Architecture
[Diagram: stacked DAEs, each with redundant Link Control Cards (LCCs), cabled to dual Storage Processors]
Persistent Storage Manager (PSM)
Data Units on a Disk
[Diagram: a stripe of data elements s.0 through s.5 plus a parity element; each element is made up of sectors e.0 through e.127, and each sector holds 512 bytes of user data]
Sector: 520 bytes = 512 bytes of user data + 8 bytes of administrative data
CLARiiON Foundations
DATA AVAILABILITY
DATA PROTECTION
Mirrored Write Caching
[Diagram: a storage system in which each write is cached on one Storage Processor and mirrored in the peer SP's cache]
Advanced Availability: LUN Ownership Model
Only one Storage Processor "owns" a LUN at any point in time.
Ownership is assigned when the LUN is created, but can also be changed using
Navisphere Manager or the CLI.
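A hedged Navisphere CLI sketch (SP hostname and LUN number are illustrative): at runtime a LUN can be moved to the peer SP with the trespass command:

# Transfer ownership of LUN 5 to the SP this command is addressed to
navicli -h spa_hostname trespass lun 5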
Write Cache Protected by “Vault”
Host Connectivity Redundancy: PowerPath Failover Software
Course Summary
CLARiiON Foundations
FLARE Operating Environment
CLARiiON Hardware
FLARE Versions
EMC Navisphere Management Software
Navisphere Manager
Discover
- Discovers all managed CLARiiON systems
Monitor
- Shows status of storage systems, Storage Processors, disks, snapshots, remote mirrors, and other components
- Centralized alerting
Apply and provision
- Configure volumes and assign storage to hosts
- Configure snapshots and remote mirrors
- Set system parameters
- Customize views via Navisphere Organizer
Report
- Provides extensive performance statistics via Navisphere Analyzer
Storage Configuration and Provisioning
Step 0 – Planning
Understanding application and server requirements and planning the configuration is critical!
Step 1 – Create RAID Groups
A RAID Group is a collection of physical disks; the RAID protection level is assigned to all disks within the RAID Group.
RAID-0: Stripe
- No protection
- Performance
RAID-1: Mirroring
- Some performance gain by splitting read operations
- Protection against a single disk failure
- Minimal performance hit during a failure
CLARiiON RAID Options
Step 0 – Planning
RAID-5: Striping with Parity – Random IOPS
- Performance of striping
- Protection from a single disk failure
- Parity distributed across member drives within the RAID Group
- Write performance penalty
- Performance impact if a disk fails in the RAID Group
Hot Spare
- Takes the place of a failed disk within a RAID Group
- Must have equal or greater capacity than the disk it replaces
- Can be located anywhere except on the Vault disks
- When the failed disk is replaced, the hot spare restores the data to the replacement disk and returns to the hot spare pool
Which RAID Level Is Right?
Step 0 – Planning
RAID 0 – Data striping
- No parity protection; least expensive storage
- Applications using read-only data that require quick access, such as data downloading
RAID 1 – Mirroring between two disks
- Excellent availability, but expensive storage
- Transaction, logging, or record-keeping applications
RAID 1/0 – Data striping with mirroring
- Excellent availability, but expensive storage
- Provides the best balance of performance and availability
RAID 3 – Data striping with a dedicated parity disk
RAID 5 – Data striping with parity spread across all drives
- Very good availability and inexpensive storage
Mixed RAID types are supported in the same chassis.
Creating RAID Groups
Step 1 – Create RAID Groups
RAID protection levels are set through a RAID Group.
- Physical disks are part of one RAID Group only
- Drive types cannot be mixed in a RAID Group
- May include disks from any enclosure
- RAID types may be mixed in an array
- RAID Groups may be expanded
- Users do not access RAID Groups directly
[Diagram: a 5-disk RAID-5 group and a 4-disk RAID-1/0 group]
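A hedged Navisphere CLI sketch of Step 1 (SP hostname, RAID Group ID, and disk IDs are illustrative; disks are addressed as Bus_Enclosure_Disk):

# Create RAID Group 0 from five disks in enclosure 0 on bus 0
navicli -h spa_hostname createrg 0 0_0_0 0_0_1 0_0_2 0_0_3 0_0_4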
Creating a RAID Group (Step 1 – Create RAID Groups)
Binding a LUN
Bind Operation – Setting Parameters
Variable parameters
- Cache enable, rebuild time, verify time, auto assignment
- Can be changed without unbinding
Bind Operation
- Fastbind is the almost instantaneous bind achieved on a factory system
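A hedged CLI sketch of a bind (values are illustrative; exact switches vary by FLARE/Navisphere release):

# Bind LUN 10 as RAID 5 on RAID Group 0, 100 GB, with read and write cache enabled
navicli -h spa_hostname bind r5 10 -rg 0 -cap 100 -sq gb -rc 1 -wc 1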
Binding a LUN
LUN Properties - General
metaLUNs
A metaLUN is created by combining LUNs.
- Dynamically increases LUN capacity
- Expansion can be done online while host I/O is in progress
- A LUN can be expanded to create a metaLUN, and a metaLUN can be further expanded by adding additional LUNs
- Striped or concatenated; data is restriped when a striped metaLUN is created
A metaLUN appears to the host as a single LUN.
- Added to a Storage Group like any other LUN
- Can be used with MirrorView, SnapView, or SAN Copy
- Supported only on the CX family with Navisphere 6.5+
[Diagram: LUN + LUN + LUN = metaLUN]
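A hedged CLI sketch of a stripe expansion (LUN numbers are illustrative, and switch names vary by Navisphere release):

# Expand base LUN 10 into a striped metaLUN using component LUNs 11 and 12
naviseccli -h spa_hostname metalun -expand -base 10 -lus 11 12 -type S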
Storage Groups
Creating a Storage Group (Step 3 – Create Storage Groups)
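A hedged CLI sketch of Step 3 (group, host, and LUN identifiers are illustrative):

# Create a Storage Group, connect a host, and present array LUN 10 as host LUN 0
naviseccli -h spa_hostname storagegroup -create -gname ExchangeSG
naviseccli -h spa_hostname storagegroup -connecthost -host exch01 -gname ExchangeSG
naviseccli -h spa_hostname storagegroup -addhlu -gname ExchangeSG -hlu 0 -alu 10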
Storage Group Properties - LUNs
Storage Group Properties - Hosts
LUN Migration
Brocade Zoning
Switch Zoning
BUSINESS CONTINUITY
Data Copy
SnapView
SnapView
[Diagram: the production host retains primary access to the source LUN, while a snapshot (Snap) of the LUN provides separate access to a point-in-time copy]
SnapView
• Make snapshot
  – Navisphere Manager GUI
  – admsnap
[Diagram: a snapshot (SNAP) of the LUN is presented to a backup host as a backup unit]
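A hedged admsnap sketch (the session name and device path are illustrative):

# On the production host: start a SnapView session against the source device
admsnap start -s nightly_backup -o /dev/sdb
# On the backup host: activate the session to surface the snapshot device
admsnap activate -s nightly_backup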
Summary
SnapView: point-in-time view
Disaster Recovery
MirrorView
- MirrorView integration
- Off-site backup
- Application testing
MirrorView Configuration
• MirrorView setup
  – MirrorView software
  – Secondary LUN must be the same size as the primary LUN
  – Can be a different RAID type
• Navisphere
  – Provides ease of management
  – GUI and CLI interfaces support all operations
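A hedged CLI sketch of a MirrorView/S setup (mirror name, LUN numbers, and SP addresses are illustrative; switches vary by release):

# Create a synchronous mirror of local LUN 10, then add a secondary image on the remote array
naviseccli -h local_spa mirror -sync -create -name exch_mirror -lun 10
naviseccli -h local_spa mirror -sync -addimage -name exch_mirror -arrayhost remote_spa -lun 20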
[Diagram: Site A (production host) and Site B (standby host) connected by synchronous, bi-directional mirroring; Production A at Site A has its Mirror A at Site B, and Production B at Site B has its Mirror B at Site A. Connectivity options include direct LongWave GBICs, extenders (e.g. CNT Ultranet), and DWDM (e.g. Optera 5200, ADVA FSP2000); check the EMC Support Matrix for supported extenders.]
Summary
MirrorView
Data Migration
SAN Copy
Off-Load Traffic from the Host
SAN Copy
- Storage-system-based data-mover application
- Uses the SAN to copy data between storage systems
- Data migration takes place on the SAN
- The host is not involved in the copy process
Types of Data Migration
- CLARiiON to CLARiiON
- Symmetrix to CLARiiON
- Internally within a CLARiiON
- Compaq StorageWorks to CLARiiON
There are four different migration types. The most likely scenarios are CLARiiON to CLARiiON, Symmetrix to CLARiiON, and internally within a CLARiiON.
Check the EMC Support Matrix or eNavigator for the latest supported configurations.
Simultaneous Sessions

Storage system   Concurrent sessions   Logical units per session
CX400            8                     50
CX600            16                    100
FC4700           16                    100

See the latest eLab Navigator or EMC Support Matrix for information regarding newer model arrays.
SAN Copy lets you have more than one session active at the same time. The number of supported concurrent active sessions and the number of logical units per session depend on the storage system type.
SAN Copy Features
Concurrent copy sessions
- Allows multiple source LUNs to simultaneously transfer data to multiple destination LUNs.
Queued copy sessions
- Queued sessions are sessions that have been created but are not active or paused.
Create/modify copy sessions
- Management tools allow full control to create and modify sessions as seen fit.
Multiple destinations
- Each source LUN may have multiple destinations: up to 50 per session on a CX400 and 100 per session on the CX600 and FC4700.
- See eLab Navigator or the EMC Support Matrix for newer model arrays.
Pause/Resume/Abort
- Control over an active session is in the hands of the administrator.
- It is possible to pause and later resume a session, or abort a session before completion.
Throttle
- Resources used by SAN Copy sessions can be controlled through a throttle value.
Checkpoint/Restart
- An admin-defined time interval lets SAN Copy resume an interrupted session from the last checkpoint, rather than having to start the session over.
SAN Copy Operation
While a session is operational, the source LUN is put into read-only mode. If this is unacceptable, a snapshot, clone, or BCV (in the case of Symmetrix) can be created from the source LUN and used as the source for the SAN Copy session.
Data is read from the source and written to the destinations. SAN Copy initiates a number of reads equal to the number of buffers allocated for the session. When any read into a buffer completes, SAN Copy writes the data to the target LUN. When the write is complete and the buffer is empty, SAN Copy refills the buffer with another read.
SAN Copy Create Session Process Flow
A SAN Copy session can be set up to copy data between two LUNs in a single array, between arrays, or between a CLARiiON array and a Symmetrix array. While there are many similarities when setting up the different session types, there are also some differences. The creation of a SAN Copy session involves a number of steps.
If the source and destination LUN(s) are located in different arrays, the source array must be connected to the destination array(s) as an initiator. The source and destination LUNs are then selected; the destination must be at least as large as the source.
Each session requires a unique name, and the priority of copy traffic can be set with a throttle value.
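A hedged CLI sketch of creating and starting a session (names and numbers are illustrative, the destination placeholder stands for the destination LUN's WWN, and switches vary by release):

# Create a full SAN Copy session from local LUN 10, throttled, then start it
naviseccli -h spa_hostname sancopy -create -full -name mig01 -srclun 10 -destwwn <dest-LUN-WWN> -throttle 8
naviseccli -h spa_hostname sancopy -start -name mig01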
Local SAN Copy
A local SAN Copy copies the data from one LUN in an array to one or more LUNs in the same array.
[Diagram: source LUN copied to target LUN(s) within a single array]
Thank You