Dell Storage Center SCv2000 and SCv2020 Storage System Deployment Guide
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you
how to avoid the problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
Copyright © 2015 Dell Inc. All rights reserved. This product is protected by U.S. and international copyright and
intellectual property laws. Dell™ and the Dell logo are trademarks of Dell Inc. in the United States and/or other
jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies.
2015 - 04
Rev. A00
Contents
General Safety Precautions...........................................................................................................30
Install the Storage System in a Rack...................................................................................................30
3 Front-End Cabling.............................................................................................. 32
Types of Redundancy for Front-End Connections........................................................................... 32
Port Redundancy...........................................................................................................................32
Storage Controller Redundancy................................................................................................... 33
Multipath IO................................................................................................................................... 33
Cabling SAN-Attached Host Servers.................................................................................................. 34
Connecting to Fibre Channel Host Servers..................................................................................34
Connecting to iSCSI Host Servers................................................................................................ 46
Cabling Direct-Attached Host Servers............................................................................................... 55
Preparing Host Servers..................................................................................................................55
SAS Virtual Port Mode................................................................................................................... 56
Two Servers Connected to Dual 12 Gb 4–Port SAS Storage Controllers...................................56
Four Servers Connected to Dual 12 Gb 4–Port SAS Storage Controllers...................................57
Two Servers Connected to a Single 12 Gb 4–Port SAS Storage Controller............................... 59
Labeling the Front-End Cables.....................................................................................................60
Cabling the Ethernet Management Port.............................................................................................61
Labeling the Ethernet Management Cables................................................................................. 61
Cabling the Embedded Ports for iSCSI Replication........................................................................... 62
Cabling the Replication Port for iSCSI Replication...................................................................... 62
Cabling the Management Port and Replication Port for iSCSI Replication................................ 63
Cabling the Embedded Ports for iSCSI Host Connectivity................................................................64
Two iSCSI Networks with Dual Storage Controllers and Embedded Ethernet Ports.................64
One iSCSI Network with Dual Storage Controllers and Embedded Ethernet Ports...................65
Additional Storage Center Information........................................................................................ 75
Fibre Channel Zoning Information............................................................................................... 76
Locating Your Service Tag.............................................................................................................77
Supported Operating Systems for Storage Center Automated Setup ............................................. 77
Install and Use the Dell Storage Client............................................................................................... 77
Discover and Select an Uninitialized Storage Center........................................................................ 77
Set System Information.......................................................................................................................78
Set Administrator Information............................................................................................................ 78
Configure iSCSI Fault Domains...........................................................................................................79
Confirm the Storage Center Configuration....................................................................................... 79
Initialize the Storage Center............................................................................................................... 79
Review Fibre Channel Front-End Configuration............................................................................... 80
Review SAS Front-End Configuration................................................................................................ 80
Configure Time Settings.....................................................................................................................80
Configure SMTP Server Settings........................................................................................................ 80
Configure Key Management Server Settings......................................................................................81
Review the SupportAssist Data Collection and Storage Agreement.................................................81
Advantages and Benefits of Dell SupportAssist............................................................................ 81
Provide Contact Information..............................................................................................................82
Update Storage Center....................................................................................................................... 82
Complete Configuration and Perform Next Steps............................................................................ 82
Set Up a localhost or VMware Host................................................................................................... 83
Set Up a localhost from Initial Setup............................................................................................ 83
Set Up a VMware vSphere Host from Initial Setup.......................................................................83
Set Up a VMware vCenter Host from Initial Setup.......................................................................84
Configure Embedded iSCSI Ports.......................................................................................................85
Check the Current Disk Count before Adding Expansion Enclosures....................................... 90
Add the Expansion Enclosures to the A-Side Chain....................................................................90
Add the Expansion Enclosures to the B-Side Chain.................................................................... 91
Label the Back-End Cables...........................................................................................................93
Adding a Single Expansion Enclosure to a Chain Currently in Service.............................................94
Check the Disk Count before Adding an Expansion Enclosure.................................................. 95
Add an Expansion Enclosure to the A-side Chain....................................................................... 95
Add an Expansion Enclosure to the B-side Chain....................................................................... 97
Label the Back-End Cables...........................................................................................................99
Removing an Expansion Enclosure from a Chain Currently in Service..........................................100
Release the Disks in the Expansion Enclosure........................................................................... 101
Disconnect the A-Side Chain from the SC100/SC120 Expansion Enclosure...........................102
Disconnect the B-Side Chain from the SC100/SC120 Expansion Enclosure...........................103
Preface
About this Guide
This guide describes how to install and configure an SCv2000/SCv2020 storage system.
Revision History
Document Number: 8X7FK
Audience
The information provided in this Deployment Guide is intended for storage or network administrators and
deployment personnel.
Contacting Dell
Dell provides several online and telephone-based support and service options. Availability varies by
country and product, and some services may not be available in your area.
To contact Dell for sales, technical support, or customer service issues go to www.dell.com/support.
• For customized support, enter your system Service Tag on the support page and click Submit.
• For general support, browse the product list on the support page and select your product.
Related Publications
The following documentation is available for the SCv2000/SCv2020 storage system.
• Dell Storage Center SCv2000 and SCv2020 Storage System Getting Started Guide
Provides information about an SCv2000/SCv2020 storage system, such as installation instructions
and technical specifications.
• Dell Storage Center SCv2000 and SCv2020 Storage System Owner’s Manual
Provides information about an SCv2000/SCv2020 storage system, such as hardware features,
replacing customer replaceable components, and technical specifications.
• Dell Storage Center SCv2000 Series Virtual Media Update Instructions
Describes how to install Storage Center software on an SCv2000/SCv2020 storage system using
virtual media. Installing Storage Center software using the Storage Center Virtual Media option is
intended for use only by sites that cannot update Storage Center using standard methods.
• Dell Storage Center Release Notes
Contains information about new features and known and resolved issues for the Storage Center
software.
• Dell Storage Client Administrator’s Guide
Provides information about the Dell Storage Client and how it can be used to manage a Storage
Center.
• Dell Storage Center Software Update Guide
Describes how to upgrade Storage Center software from an earlier version to the current version.
• Dell Storage Center Command Utility Reference Guide
Provides instructions for using the Storage Center Command Utility. The Command Utility provides a
command-line interface (CLI) to enable management of Storage Center functionality on Windows,
Linux, Solaris, and AIX platforms.
• Dell Storage Center Command Set for Windows PowerShell
Provides instructions for getting started with Windows PowerShell cmdlets and scripting objects that
interact with the Storage Center via the PowerShell interactive shell, scripts, and PowerShell hosting
applications. Help for individual cmdlets is available online.
• Dell TechCenter
Provides technical white papers, best practice guides, and frequently asked questions about Dell
Storage products. Go to: http://en.community.dell.com/techcenter/storage/.
1 About the SCv2000/SCv2020 Storage System
The SCv2000/SCv2020 storage system provides the central processing capabilities for the Storage
Center Operating System (OS) and management of RAID storage.
The SCv2000/SCv2020 storage system holds the physical disks that provide storage for the Storage
Center. If additional storage is needed, the SCv2000/SCv2020 also supports SC100/SC120 expansion
enclosures.
NOTE: The cabling between the storage system, switches, and host servers is referred to as front-end connectivity. The cabling between the storage system and expansion enclosures is referred to as back-end connectivity.
Switches
Dell offers enterprise-class switches as part of the total Storage Center solution.
The SCv2000/SCv2020 supports Fibre Channel (FC) and Ethernet switches, which provide robust
connectivity to servers and allow for the use of redundant transport paths. Fibre Channel (FC) or Ethernet
switches can provide connectivity to a remote Storage Center to allow for replication of data. In addition,
Ethernet switches provide connectivity to a management network to allow configuration, administration,
and management of the Storage Center.
• The SCv2000 supports up to thirteen SC100 expansion enclosures, up to six SC120 expansion enclosures, or any combination of SC100/SC120 expansion enclosures as long as the total drive count of the system does not exceed 168 (for example, the 12 internal drives of an SCv2000 plus thirteen 12-drive SC100 expansion enclosures total 168 drives).
• The SCv2020 supports up to twelve SC100 expansion enclosures, up to six SC120 expansion enclosures, or any combination of SC100/SC120 expansion enclosures as long as the total drive count of the system does not exceed 168 (for example, the 24 internal drives of an SCv2020 plus twelve 12-drive SC100 expansion enclosures total 168 drives).
• For more information on installing an Enterprise Manager Data Collector, see the Dell Enterprise
Manager Installation Guide.
• For more information on managing the Data Collector and setting up replications, see the Dell
Enterprise Manager Administrator’s Guide.
Front-End Connectivity
Front-end connectivity provides IO paths from servers to a storage system and replication paths from
one Storage Center to another Storage Center. The SCv2000/SCv2020 provides three types of front-end
connectivity:
• Fibre Channel: Hosts, servers, or Network Attached Storage (NAS) appliances access storage by
connecting to the storage system Fibre Channel ports through one or more Fibre Channel switches.
Connecting host servers directly to the storage system without using Fibre Channel switches is not
supported.
When replication is licensed, the SCv2000/SCv2020 can use the Fibre Channel ports to replicate data
to another Storage Center.
• iSCSI: Hosts, servers, or Network Attached Storage (NAS) appliances access storage by connecting to
the storage system iSCSI ports through one or more Ethernet switches. Connecting host servers
directly to the storage system without using Ethernet switches is not supported.
When replication is licensed, the SCv2000/SCv2020 can use the iSCSI ports to replicate data to another Storage Center.
NOTE: If replication is licensed, the SCv2000/SCv2020 can use the embedded REPL port to
perform iSCSI replication to another SCv2000 series Storage Center.
If replication is licensed and the Flex Port license is installed, the SCv2000/SCv2020 can use the
embedded MGMT port to perform iSCSI replication to another SCv2000 series Storage Center.
In addition, the SCv2000/SCv2020 can use the embedded MGMT and REPL ports as front-end
iSCSI ports for connectivity to host servers when the Flex Port license is installed.
• SAS: Hosts or servers access storage by connecting directly to the storage system SAS ports.
NOTE: The front-end connectivity ports are located on the back of the storage system, but are
designated as front-end ports.
Back-End Connectivity
Back-end connectivity is strictly between the storage system and expansion enclosures, which hold the
physical drives that provide back-end expansion storage.
The SCv2000/SCv2020 supports SAS connectivity to multiple SC100/SC120 expansion enclosures.
NOTE: The baseboard management controller (BMC) does not have a separate physical port on the
SCv2000/SCv2020. The BMC is accessed through the same Ethernet management port that is used
for Storage Center configuration, administration, and management.
2 Status indicator — Lights when at least one power supply is supplying power to the storage system.
• Off: No power
• On steady blue: Power is on and firmware is running
• Blinking blue: Storage system is busy booting or updating
6 AC power status indicator (2) —
• Off: AC power is off, or the power is on but the module is not in a controller, or it may indicate a hardware fault
• Steady green: AC power is on
• Blinking green: AC power is on and the PSU is in standby mode
3 MGMT port — 10 Mbps, 100 Mbps, or 1 Gbps Ethernet/iSCSI port used for
storage system management and access to the BMC
NOTE: To use the MGMT port as an iSCSI port for
replication to another Storage Center, a Flex Port license
and replication license are required. To use the MGMT port
as a front-end connection to host servers, a Flex Port
license is required.
4 REPL port — 10 Mbps, 100 Mbps, or 1 Gbps Ethernet/iSCSI port used for
replication to another Storage Center (requires a replication
license)
NOTE: To use the REPL port as a front-end connection to host servers, a Flex Port license is required.
9 Recessed reset button — Reboots the storage controller, forcing it to restart at the POST process
12 Diagnostic LEDs (8) — • Green LEDs 0–3: Low byte hex POST code
• Green LEDs 4–7: High byte hex POST code
Figure 10. SCv2000/SCv2020 Storage Controller with Four 1 GbE iSCSI Front-End Ports
3 MGMT port — 10 Mbps, 100 Mbps, or 1 Gbps Ethernet port used for storage
system management and access to the BMC
NOTE: To use the MGMT port as an iSCSI port for
replication to another Storage Center, a Flex Port license
and replication license are required. To use the MGMT port
as a front-end connection to host servers, a Flex Port
license is required.
4 REPL port — 10 Mbps, 100 Mbps, or 1 Gbps Ethernet/iSCSI port used for
replication to another Storage Center
NOTE: To use the REPL port as a front-end connection to host servers, a Flex Port license is required.
9 Recessed reset button — Reboots the storage controller, forcing it to restart at the POST process
10 Identification LED • Off: Identification disabled
• Blinking blue (for 15 sec.): Identification is enabled
12 Diagnostic LEDs (8) — • Green LEDs 0–3: Low byte hex POST code
• Green LEDs 4–7: High byte hex POST code
Figure 12. SCv2000/SCv2020 Storage Controller with Four 12 Gb SAS Front-End Ports
3 MGMT port — 10 Mbps, 100 Mbps, or 1 Gbps Ethernet port used for storage
system management and access to the BMC
4 REPL port — 10 Mbps, 100 Mbps, or 1 Gbps Ethernet/iSCSI port used for
replication to another Storage Center
NOTE: To use the REPL port as a front-end connection to host servers, a Flex Port license is required.
9 Recessed reset button — Reboots the storage controller, forcing it to restart at the POST process
10 Identification LED • Off: Identification disabled
• Blinking blue (for 15 sec.): Identification is enabled
• Blinking blue (continuously): Storage controller shut down
to the Advanced Configuration and Power Interface (ACPI)
S5 state
12 Diagnostic LEDs (8) — • Green LEDs 0–3: Low byte hex POST code
• Green LEDs 4–7: High byte hex POST code
• An SCv2000 holds up to 12 drives, which are numbered left to right in rows starting from 0 at the top-
left drive.
2 Power supply status indicator — Lights when at least one power supply is supplying power to the expansion enclosure.
• Off: Both power supplies are off.
• On steady green: At least one power supply is providing power to the expansion enclosure
• An SC100 holds up to 12 drives, which are numbered in rows starting from 0 at the top-left drive.
• An SC120 holds up to 24 drives, which are numbered left to right starting from 0.
• Rack Space: There must be sufficient space in the rack to accommodate the storage system chassis,
expansion enclosures, and switches.
• Power: Power must be available in the rack, and the power delivery system must meet the
requirements of the Storage Center.
• Connectivity: The rack must be wired for connectivity to the management network and any networks
that carry front-end IO from the Storage Center to servers.
Safety Precautions
Always follow these safety precautions to avoid injury and damage to Storage Center equipment.
If equipment described in the document is used in a manner not specified by Dell, the protection
provided by the equipment may be impaired. For your safety and protection, observe the rules described
in the following sections.
• Dell recommends that only individuals with rack-mounting experience install an SCv2000/SCv2020
storage system in a rack.
• Make sure the storage system is fully grounded at all times to prevent damage from electrostatic
discharge.
• When handling the storage system hardware, you should use an electrostatic wrist guard (not
included) or a similar form of protection.
The storage system chassis MUST be mounted in a rack; the following safety requirements must be
considered when doing so:
• The rack construction must be capable of supporting the total weight of the installed chassis and the
design should incorporate stabilizing features suitable to prevent the rack tipping or being pushed
over during installation or in normal use.
• To avoid danger of the rack toppling over, do not slide more than one chassis out of the rack at a
time.
• The rack design should take into consideration the maximum operating ambient temperature for the
unit, which is 57°C.
• Provide a suitable power source with electrical overload protection. All Storage Center components
must be grounded before applying power. Make sure that there is a safe electrical earth connection to
power supply cords. Check the grounding before applying power.
• The plugs on the power supply cords are used as the main disconnect device. Make sure that the
socket outlets are located near the equipment and are easily accessible.
• Know the locations of the equipment power switches and the room's emergency power-off switch,
disconnection switch, or electrical outlet.
• Do not work alone when working with high-voltage components.
• Use rubber mats specifically designed as electrical insulators.
• Do not remove covers from the power supply unit. Disconnect the power connection before
removing a power supply from the storage system.
• Do not remove a faulty power supply unless you have a replacement of the correct type ready for insertion. A faulty power supply must be replaced with a fully operational power supply module within 24 hours.
• Unplug the storage system chassis before you move it or if you think it has become damaged in any
way. When powered by multiple AC sources, disconnect all supply power for complete isolation.
• Dell recommends that you always use a static mat and static strap while working on components in
the interior of the storage system chassis.
• Observe all conventional ESD precautions when handling plug-in modules and components.
• Use a suitable ESD wrist or ankle strap.
• Avoid contact with backplane components and module connectors.
• Keep all components and printed circuit boards (PCBs) in their antistatic bags until ready for use.
• Keep the area around the storage system chassis clean and free of clutter.
• Place any system components that have been removed away from the storage system chassis or on a
table so that they are not in the way of foot traffic.
• While working on the storage system chassis, do not wear loose clothing such as neckties and
unbuttoned shirt sleeves, which can come into contact with electrical circuits or be pulled into a
cooling fan.
• Remove any jewelry or metal objects from your body because they are excellent metal conductors
that can create short circuits and harm you if they come into contact with printed circuit boards or
areas where power is present.
• Do not lift a storage system chassis by the handles of the power supply units (PSUs). They are not
designed to hold the weight of the entire chassis, and the chassis cover may become bent.
• Before moving a storage system chassis, remove the PSUs to minimize weight.
• Do not remove drives until you are ready to replace them.
NOTE: To ensure proper storage system cooling, hard drive blanks must be installed in any hard
drive slot that is not occupied.
Steps
1. Secure the rails that are pre-attached to both sides of the storage system chassis.
a. Lift the locking tab on the rail.
b. Push the rail towards the back of the chassis until it locks in place.
2. Determine where to mount the storage system and mark the location at the front and rear of the
rack.
8. Secure the storage system chassis to the rack using the mounting screws within each chassis ear.
a. Lift the latch on each chassis ear to access the screws.
b. Tighten the screws to secure the chassis into the rack.
c. Close the latch on each chassis ear.
9. If the Storage Center system includes expansion enclosures, mount the expansion enclosures in the
rack. See the instructions included with the expansion enclosure for detailed steps.
Port Redundancy
To allow for port redundancy, two front-end ports on a storage controller must be connected to the
same switch or server.
Fault domains group front-end ports that are connected to the same network. Ports that belong to the
same fault domain can fail over to each other because they have the same connectivity.
If a port becomes unavailable because it is disconnected or because of a hardware failure, the port fails over to another port in the same fault domain.
Storage Controller Redundancy
To allow for storage controller redundancy, a front-end port on each storage controller must be
connected to the same switch or server.
If a storage controller becomes unavailable, the front-end ports on the offline storage controller move over to the ports (in the same fault domain) on the available storage controller.
Multipath IO
MPIO allows a server to use multiple paths for IO if they are available.
MPIO software offers redundancy at the path level. MPIO typically operates in a round-robin manner by
sending packets first down one path and then the other. If a path becomes unavailable, MPIO software
continues to send packets down the functioning path. MPIO is required to enable redundancy for servers connected to a Storage Center with SAS front-end connectivity.
NOTE: MPIO is operating-system specific; it either loads as a driver on the server or is part of the server operating system.
MPIO Behavior
The server must have at least two FC, iSCSI, or SAS ports to use MPIO.
When MPIO is configured, a server can send IO to multiple ports on the same storage controller.
NOTE: Compare the host server settings applied by the Dell Storage Client wizard against the latest
Dell Storage Center Best Practices document located on the Dell TechCenter (http://
en.community.dell.com/techcenter/storage/).
Table 2. MPIO Configuration Documents
Operating System — Document
VMware vSphere 5.x — Dell Compellent Storage Center Best Practices with vSphere 5.x
Windows Server 2008, 2008 R2, 2012, and 2012 R2 — Dell Compellent Storage Center Microsoft Multipath IO (MPIO) Best Practices Guide
To manually configure MPIO on a host server, see the Dell Best Practices document that corresponds to
the server operating system. Depending on the operating system, you may need to install MPIO software
or configure server options.
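For example, on a Windows Server host the MPIO feature can be enabled and Storage Center volumes claimed with a few PowerShell commands. The following is a minimal sketch only; the vendor and product identifiers and the load-balance policy are assumptions, so confirm them against the Dell Storage Center Best Practices document for your operating system before use.

    # Minimal sketch for a Windows Server host (values are assumptions, not Dell-verified).
    Install-WindowsFeature -Name Multipath-IO                               # install the MPIO feature
    New-MSDSMSupportedHW -VendorId "COMPELNT" -ProductId "Compellent Vol"   # claim Storage Center devices (identifier assumed)
    Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR                      # use round-robin across available paths
    Restart-Computer                                                        # a restart completes the MPIO installation

After the restart, rerun the host access wizard or consult the Best Practices document to verify the path count and load-balance policy for each mapped volume.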
Cabling SAN-Attached Host Servers
An SCv2000/SCv2020 storage system with Fibre Channel or iSCSI front-end ports connects to host
servers through Fibre Channel or Ethernet switches.
• A storage system with Fibre Channel front-end ports connects to one or more FC switches, which
connect to one or more host servers.
• A storage system with iSCSI front-end ports connects to one or more Ethernet switches, which
connect to one or more host servers.
Steps
1. Install Fibre Channel HBAs in the host servers.
NOTE: Do not install Fibre Channel HBAs from different vendors in the same server.
2. Install supported drivers for the HBAs and make sure that the HBAs have the latest supported firmware installed.
3. Use the Fibre Channel cabling diagrams to cable the host servers to the switches. Connecting host
servers directly to the storage system without using Fibre Channel switches is not supported.
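If the host servers run Windows Server, the HBA worldwide port names (WWPNs) needed for switch zoning can be gathered with PowerShell once the HBAs and drivers are installed. This is a sketch under the assumption that the host runs Windows Server 2012 or later; on other operating systems, use the HBA vendor utilities to record the WWPNs instead.

    # List the Fibre Channel initiator ports on the host so their WWPNs can be zoned to the storage system.
    Get-InitiatorPort |
        Where-Object { $_.ConnectionType -eq "Fibre Channel" } |
        Select-Object NodeAddress, PortAddress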
• If a physical port or FC switch becomes unavailable, the storage system is accessed from the switch in
the other fault domain.
• If a storage controller becomes unavailable, the virtual ports on the offline storage controller move to
the physical ports on the other storage controller.
Steps
1. Connect each server to both FC fabrics.
2. Connect fault domain 1 (shown in orange) to fabric 1.
• Storage controller 1: port 1 to FC switch 1
• Storage controller 2: port 1 to FC switch 1
3. Connect fault domain 2 (shown in blue) to fabric 2.
• Storage controller 1: port 2 to FC switch 2
• Storage controller 2: port 2 to FC switch 2
Example
Figure 26. Storage System with Dual 16 Gb Storage Controllers and Two FC Switches
1. Server 1 2. Server 2
3. FC switch 1 (Fault Domain 1) 4. FC switch 2 (Fault Domain 2)
5. Storage system 6. Storage controller 1
7. Storage controller 2
Next steps
Install or enable MPIO on the host servers.
NOTE: After the Storage Center configuration is complete, run the host access wizard to configure
host server access and apply MPIO best practices. For the latest best practices, see the Dell Storage
Center Best Practices document located on the Dell TechCenter (http://en.community.dell.com/
techcenter/storage/).
• If a physical port becomes unavailable, the virtual port moves to another physical port in the same
fault domain on the same storage controller.
• If an FC switch becomes unavailable, the storage system is accessed from the switch in the other fault
domain.
• If a storage controller becomes unavailable, the virtual ports on the offline storage controller move to
the physical ports on the other storage controller.
Steps
1. Connect each server to both FC fabrics.
2. Connect fault domain 1 (shown in orange) to fabric 1.
• Storage controller 1: port 1 to FC switch 1
• Storage controller 1: port 3 to FC switch 1
• Storage controller 2: port 1 to FC switch 1
• Storage controller 2: port 3 to FC switch 1
3. Connect fault domain 2 (shown in blue) to fabric 2.
• Storage controller 1: port 2 to FC switch 2
• Storage controller 1: port 4 to FC switch 2
• Storage controller 2: port 2 to FC switch 2
• Storage controller 2: port 4 to FC switch 2
Example
Figure 27. Storage System with Dual 8 Gb Storage Controllers and Two FC Switches
1. Server 1 2. Server 2
3. FC switch 1 (Fault domain 1) 4. FC switch 2 (Fault domain 2)
5. Storage system 6. Storage controller 1
7. Storage controller 2
Next steps
Install or enable MPIO on the host servers.
NOTE: After the Storage Center configuration is complete, run the host access wizard to configure
host server access and apply MPIO best practices. For the latest best practices, see the Dell Storage
Center Best Practices document located on the Dell TechCenter (http://en.community.dell.com/
techcenter/storage/).
One Fibre Channel Fabric with Dual 16 Gb 2–Port Storage Controllers
Use one Fibre Channel (FC) fabric to prevent an unavailable port or storage controller from causing a loss
of connectivity between the host servers and a storage system with dual 16 Gb 2–port storage
controllers.
About this task
In this configuration, there are two fault domains, one fabric, and one FC switch. Each storage controller
connects to the FC switch using two FC connections.
• If a physical port becomes unavailable, the storage system is accessed from another port on the FC
switch.
• If a storage controller becomes unavailable, the virtual ports on the offline storage controller move to
physical ports on the other storage controller.
NOTE: This configuration is vulnerable to switch unavailability, which results in a loss of connectivity
between the host servers and storage system.
Steps
1. Connect each server to the FC fabric.
2. Connect fault domain 1 (shown in orange) to the fabric.
• Storage controller 1: port 1 to the FC switch
• Storage controller 2: port 1 to the FC switch
3. Connect fault domain 2 (shown in blue) to the fabric.
• Storage controller 1: port 2 to the FC switch
• Storage controller 2: port 2 to the FC switch
Example
Figure 28. Storage System with Dual 16 Gb Storage Controllers and One FC Switch
1. Server 1 2. Server 2
3. FC switch (Fault domain 1 and fault domain 2) 4. Storage system
5. Storage controller 1 6. Storage controller 2
Next steps
Install or enable MPIO on the host servers.
NOTE: After the Storage Center configuration is complete, run the host access wizard to configure
host server access and apply MPIO best practices. For the latest best practices, see the Dell Storage
Center Best Practices document located on the Dell TechCenter (http://en.community.dell.com/
techcenter/storage/).
• If a physical port becomes unavailable, the virtual port moves to another physical port in the same
fault domain on the same storage controller.
• If a storage controller becomes unavailable, the virtual ports on the offline storage controller move to
the physical ports on the other storage controller.
NOTE: This configuration is vulnerable to switch unavailability, which results in a loss of connectivity
between the host servers and storage system.
Steps
1. Connect each server to the FC fabric.
2. Connect fault domain 1 (shown in orange) to the fabric.
• Storage controller 1: port 1 to the FC switch
• Storage controller 1: port 3 to the FC switch
• Storage controller 2: port 1 to the FC switch
• Storage controller 2: port 3 to the FC switch
3. Connect fault domain 2 (shown in blue) to the fabric.
• Storage controller 1: port 2 to the FC switch
• Storage controller 1: port 4 to the FC switch
• Storage controller 2: port 2 to the FC switch
• Storage controller 2: port 4 to the FC switch
Example
Figure 29. Storage System with Dual 8 Gb Storage Controllers and One FC Switch
1. Server 1 2. Server 2
3. FC switch (Fault domain 1 and fault domain 2) 4. Storage system
5. Storage controller 1 6. Storage controller 2
Next steps
Install or enable MPIO on the host servers.
NOTE: After the Storage Center configuration is complete, run the host access wizard to configure
host server access and apply MPIO best practices. For the latest best practices, see the Dell Storage
Center Best Practices document located on the Dell TechCenter (http://en.community.dell.com/
techcenter/storage/).
NOTE: This configuration is vulnerable to storage controller unavailability, which results in a loss of
connectivity between the host servers and storage system.
Steps
1. Connect each server to both FC fabrics.
2. Connect fault domain 1 (shown in orange) to fabric 1.
Storage controller: port 1 to FC switch 1.
3. Connect fault domain 2 (shown in blue) to fabric 2.
Storage controller: port 2 to FC switch 2.
Example
Figure 30. Storage System with a Single 16 Gb Storage Controller and Two FC Switches
1. Server 1 2. Server 2
3. FC switch 1 (Fault domain 1) 4. FC switch 2 (Fault domain 2)
5. Storage system 6. Storage controller
Next steps
Install or enable MPIO on the host servers.
NOTE: After the Storage Center configuration is complete, run the host access wizard to configure
host server access and apply MPIO best practices. For the latest best practices, see the Dell Storage
Center Best Practices document located on the Dell TechCenter (http://en.community.dell.com/
techcenter/storage/).
• If a physical port becomes unavailable, the virtual port moves to another physical port in the same
fault domain on the storage controller.
• If an FC switch becomes unavailable, the storage system is accessed from the switch in the other fault
domain.
NOTE: This configuration is vulnerable to storage controller unavailability, which results in a loss of
connectivity between the host servers and storage system.
Steps
1. Connect each server to both FC fabrics.
2. Connect fault domain 1 (shown in orange) to fabric 1.
• Storage controller 1: port 1 to FC switch 1
• Storage controller 1: port 3 to FC switch 1
3. Connect fault domain 2 (shown in blue) to fabric 2.
• Storage controller 1: port 2 to FC switch 2
• Storage controller 1: port 4 to FC switch 2
Example
Figure 31. Storage System with a Single 8 Gb Storage Controller and Two FC Switches
1. Server 1 2. Server 2
3. FC switch 1 (Fault domain 1) 4. FC switch 2 (Fault domain 2)
5. Storage system 6. Storage controller 1
Next steps
Install or enable MPIO on the host servers.
NOTE: After the Storage Center configuration is complete, run the host access wizard to configure
host server access and apply MPIO best practices. For the latest best practices, see the Dell Storage
Center Best Practices document located on the Dell TechCenter (http://en.community.dell.com/
techcenter/storage/).
Using SFP+ Transceiver Modules
An SCv2000/SCv2020 storage system with 16 Gb Fibre Channel storage controllers comes with short
range small-form-factor pluggable (SFP+) transceiver modules.
The SFP+ transceiver modules are installed into the ports of the SCv2000/SCv2020 storage controller.
Fiber-optic cables are connected from the SFP+ transceiver modules in the SCv2000/SCv2020 to SFP+
transceiver modules in Fibre Channel switches.
CAUTION: When handling static-sensitive devices, take precautions to avoid damaging the
product from static electricity.
• Use only Dell supported SFP+ transceiver modules with the SCv2000/SCv2020. Other generic SFP+
transceiver modules are not supported and may not work with the SCv2000/SCv2020.
• The SFP+ transceiver module housing has an integral guide key that is designed to prevent you from
inserting the transceiver module incorrectly.
• Use minimal pressure when inserting an SFP+ transceiver module into an FC port. Forcing the SFP+
transceiver module into a port may damage the transceiver module or the port.
• The SFP+ transceiver module must be installed into a port before you connect the fiber-optic cable.
• The fiber-optic cable must be removed from the SFP+ transceiver module before you remove the
transceiver module from the port.
WARNING: To reduce the risk of injury from laser radiation or damage to the equipment, observe
the following precautions:
• Do not open any panels, operate controls, make adjustments, or perform procedures to a laser
device other than those specified herein.
• Do not stare into the laser beam.
CAUTION: Transceiver modules can be damaged by electrostatic discharge (ESD). To prevent ESD
damage to the transceiver module, take the following precautions:
2. Insert the transceiver module into the port until it is firmly seated and the latching mechanism clicks.
The transceiver modules are keyed so that they can only be inserted with the correct orientation. If a
transceiver module does not slide in easily, ensure that it is correctly oriented.
CAUTION: To reduce the risk of damage to the equipment, do not use excessive force when
inserting the transceiver module.
3. Position the fiber-optic cable so that the key (the ridge on one side of the cable connector) is aligned
with the slot in the transceiver module.
CAUTION: Touching the end of a fiber-optic cable damages the cable. Whenever a fiber-
optic cable is not connected, replace the protective covers on the ends of the cable.
4. Insert the fiber-optic cable into the transceiver module until the latching mechanism clicks.
5. Insert the other end of the fiber-optic cable into the SFP+ transceiver module of a Fibre Channel
switch.
About this task
Read the following cautions and information before beginning removal or replacement procedures.
WARNING: To reduce the risk of injury from laser radiation or damage to the equipment, observe
the following precautions:
• Do not open any panels, operate controls, make adjustments, or perform procedures to a laser
device other than those specified herein.
• Do not stare into the laser beam.
CAUTION: Transceiver modules can be damaged by electrostatic discharge (ESD). To prevent ESD
damage to the transceiver module, take the following precautions:
Dell recommends creating zones using a single initiator host port and multiple Storage Center ports.
– Include all Storage Center physical WWNs from Storage Center system A and Storage Center
system B in a single zone.
– Include all Storage Center physical WWNs of Storage Center system A and the virtual WWNs of
Storage Center system B on the particular fabric.
– Include all Storage Center physical WWNs of Storage Center system B and the virtual WWNs of
Storage Center system A on the particular fabric.
NOTE: Some ports may not be used or may be dedicated for replication; however, the ports that are used must be included in these zones.
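As an illustration of the zoning guidance above, the replication zone membership on one fabric could be grouped as follows. The zone names and WWNs are hypothetical placeholders; obtain the actual WWNs from each Storage Center and create the zones with your Fibre Channel switch management tools.

    # Hypothetical grouping only; replace the placeholder WWNs with values reported by each system.
    $fabric1ReplicationZones = @{
        "SCA_SCB_physical"     = @("5000D31000AA0001", "5000D31000BB0001")  # physical WWNs of systems A and B
        "SCA_phys_SCB_virtual" = @("5000D31000AA0001", "5000D31000BB0101")  # system A physical and system B virtual WWNs
        "SCB_phys_SCA_virtual" = @("5000D31000BB0001", "5000D31000AA0101")  # system B physical and system A virtual WWNs
    }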
Figure 35. Attach Label to Cable
2. Wrap the label around the cable until it fully encircles the cable. The bottom of each label is clear so
that it does not obscure the text.
• If the host server is a Windows or Linux host:
a. Install the iSCSI HBAs or network adapters dedicated for iSCSI traffic in the host servers.
NOTE: Do not install iSCSI HBAs or network adapters from different vendors in the same
server.
b. Install supported drivers for the HBAs or network adapters and make sure that the HBAs or network adapters have the latest supported firmware installed.
c. Use the host operating system to assign IP addresses for each iSCSI port. The IP addresses must match the subnets for each fault domain (see the example sketch after this list).
CAUTION: Correctly assign IP addresses to the HBAs or network adapters. Assigning IP
addresses to the wrong ports can cause connectivity issues.
NOTE: If using jumbo frames, they must be enabled and configured on all devices in the data path: adapter ports, switches, and the storage system.
d. Use the iSCSI cabling diagrams to cable the host servers to the switches. Connecting host servers
directly to the storage system without using Ethernet switches is not supported.
• If the host server is a vSphere host:
a. Install the iSCSI HBAs or network adapters dedicated for iSCSI traffic in the host servers.
b. Install supported drivers for the HBAs or network adapters and make sure that the HBAs or network adapters have the latest supported firmware installed.
c. If the host uses network adapters for iSCSI traffic, create a VMkernel port for each network
adapter. (1 VMkernel per vSwitch)
d. Use the host operating system to assign IP addresses for each iSCSI port. The IP addresses must match the subnets for each fault domain.
CAUTION: Correctly assign IP addresses to the HBAs or network adapters. Assigning IP
addresses to the wrong ports can cause connectivity issues.
NOTE: If using jumbo frames, they must be enabled and configured on all devices in the data path: adapter ports, switches, and the storage system.
e. If the host uses network adapters for iSCSI traffic, add the VMkernel ports to the iSCSI software
initiator.
f. Use the iSCSI cabling diagrams to cable the host servers to the switches. Connecting host servers
directly to the storage system without using Ethernet switches is not supported.
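For a Windows host, the IP address assignment, jumbo frame, and iSCSI login steps above can be scripted in PowerShell, as shown in the following sketch. The adapter name, IP addresses, and subnet are hypothetical and must be replaced with values that match the fault domain subnets of your storage system.

    # Minimal sketch for a Windows host; names and addresses are placeholders.
    New-NetIPAddress -InterfaceAlias "iSCSI1" -IPAddress 10.10.1.101 -PrefixLength 24                    # address on the fault domain 1 subnet
    Set-NetAdapterAdvancedProperty -Name "iSCSI1" -RegistryKeyword "*JumboPacket" -RegistryValue 9014    # jumbo frames (also required on switches and storage system)
    New-IscsiTargetPortal -TargetPortalAddress 10.10.1.50                                                # Storage Center fault domain 1 portal
    Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true                  # log in with MPIO enabled

Repeat the address assignment and portal login for the adapter in the second fault domain so that each server has a path in both fault domains.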
• If a physical port or Ethernet switch becomes unavailable, the storage system is accessed from the
switch in the other fault domain.
• If a storage controller becomes unavailable, the virtual ports on the offline storage controller move to
the physical ports on the other storage controller.
Steps
1. Connect each server to both iSCSI networks.
2. Connect fault domain 1 (shown in orange) to iSCSI network 1.
• Storage controller 1: port 1 to Ethernet switch 1
• Storage controller 2: port 1 to Ethernet switch 1
3. Connect fault domain 2 (shown in blue) to iSCSI network 2.
• Storage controller 1: port 2 to Ethernet switch 2
• Storage controller 2: port 2 to Ethernet switch 2
Example
Figure 37. Storage System with Dual 10 GbE Storage Controllers and Two Ethernet Switches
1. Server 1 2. Server 2
3. Ethernet switch 1 (Fault domain 1) 4. Ethernet switch 2 (Fault domain 2)
5. Storage system 6. Storage controller 1
7. Storage controller 2
Next steps
Install or enable MPIO on the host servers.
NOTE: After the Storage Center configuration is complete, run the host access wizard to configure
host server access and apply MPIO best practices. For the latest best practices, see the Dell Storage
Center Best Practices document located on the Dell TechCenter (http://en.community.dell.com/
techcenter/storage/).
• If a physical port becomes unavailable, the virtual port moves to another physical port in the same
fault domain on the same storage controller.
• If an Ethernet switch becomes unavailable, the storage system is accessed from the switch in the
other fault domain.
• If a storage controller becomes unavailable, the virtual ports on the offline storage controller move to
physical ports on the other storage controller.
Steps
1. Connect each server to both iSCSI networks.
2. Connect fault domain 1 (shown in orange) to iSCSI network 1.
• Storage controller 1: port 1 to Ethernet switch 1
• Storage controller 2: port 1 to Ethernet switch 1
• Storage controller 1: port 3 to Ethernet switch 1
• Storage controller 2: port 3 to Ethernet switch 1
3. Connect fault domain 2 (shown in blue) to iSCSI network 2.
• Storage controller 1: port 2 to Ethernet switch 2
• Storage controller 2: port 2 to Ethernet switch 2
• Storage controller 1: port 4 to Ethernet switch 2
• Storage controller 2: port 4 to Ethernet switch 2
Example
Figure 38. Storage System with Dual 1 GbE Storage Controllers and Two Ethernet Switches
1. Server 1 2. Server 2
3. Ethernet switch 1 (Fault domain 1) 4. Ethernet switch 2 (Fault domain 2)
5. Storage system 6. Storage controller 1
7. Storage controller 2
Next steps
Install or enable MPIO on the host servers.
NOTE: After the Storage Center configuration is complete, run the host access wizard to configure
host server access and apply MPIO best practices. For the latest best practices, see the Dell Storage
Center Best Practices document located on the Dell TechCenter (http://en.community.dell.com/
techcenter/storage/).
One iSCSI Network with Dual 10 GbE 2–Port Storage Controllers
Use one iSCSI network to prevent an unavailable port or storage controller from causing a loss of
connectivity between the host servers and a storage system with dual 10 GbE 2–port storage controllers.
About this task
In this configuration, there are two fault domains, one iSCSI network, and one Ethernet switch. Each
storage controller connects to the Ethernet switch using two iSCSI connections.
• If a physical port becomes unavailable, the storage system is accessed from another port on the
Ethernet switch.
• If a storage controller becomes unavailable, the virtual ports on the offline storage controller move to
the physical ports on the other storage controller.
NOTE: This configuration is vulnerable to switch unavailability, which results in a loss of connectivity
between the host servers and storage system.
Steps
1. Connect each server to the iSCSI network.
2. Connect fault domain 1 (shown in orange) to the iSCSI network.
• Storage controller 1: port 1 to the Ethernet switch
• Storage controller 2: port 1 to the Ethernet switch
3. Connect fault domain 2 (shown in blue) to the iSCSI network.
• Storage controller 1: port 2 to the Ethernet switch
• Storage controller 2: port 2 to the Ethernet switch
Example
Figure 39. Storage System with Dual 10 GbE Storage Controllers and One Ethernet Switch
1. Server 1 2. Server 2
3. Ethernet switch (Fault domain 1 and fault 4. Storage system
domain 2)
5. Storage controller 1 6. Storage controller 2
Next steps
Install or enable MPIO on the host servers.
NOTE: After the Storage Center configuration is complete, run the host access wizard to configure
host server access and apply MPIO best practices. For the latest best practices, see the Dell Storage
Center Best Practices document located on the Dell TechCenter (http://en.community.dell.com/
techcenter/storage/).
• If a physical port becomes unavailable, the virtual port moves to another physical port in the same
fault domain on the same storage controller.
• If a storage controller becomes unavailable, the virtual ports on the offline storage controller move to
the physical ports on the other storage controller.
NOTE: This configuration is vulnerable to switch unavailability, which results in a loss of connectivity
between the host servers and storage system.
Steps
1. Connect each server to the iSCSI network.
2. Connect fault domain 1 (shown in orange) to the iSCSI network.
• Storage controller 1: port 1 to the Ethernet switch
• Storage controller 1: port 3 to the Ethernet switch
• Storage controller 2: port 1 to the Ethernet switch
• Storage controller 2: port 3 to the Ethernet switch
3. Connect fault domain 2 (shown in blue) to the iSCSI network.
• Storage controller 1: port 2 to the Ethernet switch
• Storage controller 1: port 4 to the Ethernet switch
• Storage controller 2: port 2 to the Ethernet switch
• Storage controller 2: port 4 to the Ethernet switch
Example
Figure 40. Storage System with Dual 1 GbE Storage Controllers and One Ethernet Switch
1. Server 1 2. Server 2
3. Ethernet switch (Fault domain 1 and fault 4. Storage system
domain 2)
5. Storage controller 1 6. Storage controller 2
Next steps
Install or enable MPIO on the host servers.
NOTE: After the Storage Center configuration is complete, run the host access wizard to configure
host server access and apply MPIO best practices. For the latest best practices, see the Dell Storage
Center Best Practices document located on the Dell TechCenter (http://en.community.dell.com/
techcenter/storage/).
NOTE: This configuration is vulnerable to storage controller unavailability, which results in a loss of
connectivity between the host servers and storage system.
Steps
1. Connect each server to both iSCSI networks.
2. Connect fault domain 1 (shown in orange) to iSCSI network 1.
Storage controller: port 1 to Ethernet switch 1
3. Connect fault domain 2 (shown in blue) to iSCSI network 2.
Storage controller: port 2 to Ethernet switch 2
Example
Figure 41. Storage System with One 10 GbE Storage Controller and Two Ethernet Switches
1. Server 1 2. Server 2
3. Ethernet switch 1 (Fault domain 1) 4. Ethernet switch 2 (Fault domain 2)
5. Storage system 6. Storage controller
Next steps
Install or enable MPIO on the host servers.
NOTE: After the Storage Center configuration is complete, run the host access wizard to configure
host server access and apply MPIO best practices. For the latest best practices, see the Dell Storage
Center Best Practices document located on the Dell TechCenter (http://en.community.dell.com/
techcenter/storage/).
• If a physical port becomes unavailable, the virtual port moves to another physical port in the same
fault domain on the storage controller.
• If an Ethernet switch becomes unavailable, the storage system is accessed from the switch in the
other fault domain.
NOTE: This configuration is vulnerable to storage controller unavailability, which results in a loss of
connectivity between the host servers and storage system.
Steps
1. Connect each server to both iSCSI networks.
2. Connect fault domain 1 (shown in orange) to iSCSI network 1.
• Storage controller 1: port 1 to Ethernet switch 1
• Storage controller 1: port 3 to Ethernet switch 1
3. Connect fault domain 2 (shown in blue) to iSCSI network 2.
• Storage controller 1: port 2 to Ethernet switch 2
• Storage controller 1: port 4 to Ethernet switch 2
Example
Figure 42. Storage System with One 1 GbE Storage Controller and Two Ethernet Switches
1. Server 1 2. Server 2
3. Ethernet switch 1 (Fault domain 1) 4. Ethernet switch 2 (Fault domain 2)
5. Storage system 6. Storage controller
Next steps
Install or enable MPIO on the host servers.
NOTE: After the Storage Center configuration is complete, run the host access wizard to configure
host server access and apply MPIO best practices. For the latest best practices, see the Dell Storage
Center Best Practices document located on the Dell TechCenter (http://en.community.dell.com/
techcenter/storage/).
Figure 43. Attach Label to Cable
2. Wrap the label around the cable until it fully encircles the cable. The bottom of each label is clear so
that it does not obscure the text.
Steps
1. Install the SAS HBAs in the host servers.
NOTE: Do not install SAS HBAs from different vendors in the same server.
2. Install supported drivers for the HBAs and make sure that the HBAs have the latest supported
firmware installed.
3. Use the SAS cabling diagram to cable the host servers directly to the storage system.
NOTE: If deploying vSphere hosts, configure only one host at a time.
If a storage controller becomes unavailable, the volume becomes active on the other storage controller. The state of the paths on the available storage controller is set to Active/Optimized and the state of the paths on the other storage controller is set to Standby. When the storage controller becomes available again and the ports are rebalanced, the volume moves back to its preferred storage controller and the ALUA states are updated.
If a SAS path becomes unavailable, the Active/Optimized volumes on that path become active on the
other storage controller. The state of the failed path for those volumes is set to Standby and the state of
the active path for those volumes is set to Active/Optimized.
NOTE: Failover in SAS virtual port mode occurs within a single fault domain. Therefore, a server
must have both connections in the same fault domain. For example, if a server is connected to SAS
port 2 on one storage controller, it must be connected to SAS port 2 on the other storage
controller. If a server is not cabled correctly when a storage controller or SAS path becomes
unavailable, access to the volume is lost.
If a storage controller becomes unavailable, all of the standby paths on the other storage controller
become active.
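On a Windows host with MPIO installed, the ALUA path states described above can be verified after cabling. This sketch assumes the built-in mpclaim utility is available; when both cables in a fault domain are connected, each Storage Center volume should report one Active/Optimized path and one Standby path.

    # Show the MPIO disks presented to the host and their path states (Windows host).
    mpclaim.exe -s -d
    # Show the detailed path states for a specific MPIO disk number, for example disk 0.
    mpclaim.exe -s -d 0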
Steps
1. Connect fault domain 1 (shown in orange) to host server 1.
a. Connect a SAS cable from storage controller 1: port 1 to host server 1.
b. Connect a SAS cable from storage controller 2: port 1 to host server 1.
2. Connect fault domain 2 (shown in blue) to host server 1.
a. Connect a SAS cable from storage controller 1: port 2 to host server 1.
b. Connect a SAS cable from storage controller 2: port 2 to host server 1.
3. Connect fault domain 3 (shown in gray) to host server 2.
a. Connect a SAS cable from storage controller 1: port 3 to host server 2.
b. Connect a SAS cable from storage controller 2: port 3 to host server 2.
4. Connect fault domain 4 (shown in red) to host server 2.
a. Connect a SAS cable from storage controller 1: port 4 to host server 2.
b. Connect a SAS cable from storage controller 2: port 4 to host server 2.
Example
Figure 45. Storage System with Dual 12 Gb SAS Storage Controllers Connected to Two Host Servers
1. Server 1 2. Server 2
3. Storage system 4. Storage controller 1
5. Storage controller 2
Next steps
Install or enable MPIO on the host servers.
NOTE: After the Storage Center configuration is complete, run the host access wizard to configure
host server access and apply MPIO best practices. For the latest best practices, see the Dell Storage
Center Best Practices document located on the Dell TechCenter (http://en.community.dell.com/
techcenter/storage/).
If a storage controller becomes unavailable, all of the standby paths on the other storage controller
become active.
Steps
1. Connect fault domain 1 (shown in orange) to host server 1.
a. Connect a SAS cable from storage controller 1: port 1 to host server 1.
b. Connect a SAS cable from storage controller 2: port 1 to host server 1.
2. Connect fault domain 2 (shown in blue) to host server 2.
a. Connect a SAS cable from storage controller 1: port 2 to host server 2.
b. Connect a SAS cable from storage controller 2: port 2 to host server 2.
3. Connect fault domain 3 (shown in gray) to host server 3.
a. Connect a SAS cable from storage controller 1: port 3 to host server 3.
b. Connect a SAS cable from storage controller 2: port 3 to host server 3.
4. Connect fault domain 4 (shown in red) to host server 4.
a. Connect a SAS cable from storage controller 1: port 4 to host server 4.
b. Connect a SAS cable from storage controller 2: port 4 to host server 4.
Example
Figure 46. Storage System with Dual 12 Gb SAS Storage Controllers Connected to Four Host Servers
1. Server 1 2. Server 2
3. Server 3 4. Server 4
5. Storage system 6. Storage controller 1
7. Storage controller 2
Next steps
Install or enable MPIO on the host servers.
NOTE: After the Storage Center configuration is complete, run the host access wizard to configure
host server access and apply MPIO best practices. For the latest best practices, see the Dell Storage
Center Best Practices document located on the Dell TechCenter (http://en.community.dell.com/
techcenter/storage/).
NOTE: This configuration is vulnerable to storage controller unavailability, which results in a loss of
connectivity between the host servers and storage system.
Steps
1. Connect fault domain 1 to host server 1 by connecting a SAS cable from storage controller 1:
port 1 to host server 1.
2. Connect fault domain 2 to host server 1 by connecting a SAS cable from storage controller 1:
port 2 to host server 1.
3. Connect fault domain 3 to host server 2 by connecting a SAS cable from storage controller 1:
port 3 to host server 2.
4. Connect fault domain 4 to host server 2 by connecting a SAS cable from storage controller 1:
port 4 to host server 2.
Example
Figure 47. Storage System with One 12 Gb SAS Storage Controller Connected to Two Host Servers
1. Server 1 2. Server 2
3. Storage system 4. Storage controller
Next steps
Install or enable MPIO on the host servers.
NOTE: After the Storage Center configuration is complete, run the host access wizard to configure
host server access and apply MPIO best practices. For the latest best practices, see the Dell Storage
Center Best Practices document located on the Dell TechCenter (http://en.community.dell.com/
techcenter/storage/).
2. Wrap the label around the cable until it fully encircles the cable. The bottom of each label is clear so
that it does not obscure the text.
Cabling the Ethernet Management Port
To manage Storage Center, the Ethernet management (MGMT) port of each storage controller must be
connected to an Ethernet switch that is part of the management network.
About this task
The management port provides access to the storage system through the Dell Storage Client software
and is used to send emails, alerts, SNMP traps, and SupportAssist diagnostic data. The management port
also provides access to the baseboard management controller (BMC) software.
NOTE: If the Flex Port license is installed, the management port becomes a shared iSCSI port. To
use the management port as an iSCSI port, it must be cabled to a network switch dedicated to iSCSI
traffic. Special considerations must be taken into account when sharing the management port.
Steps
1. Connect the Ethernet management port on storage controller 1 to the Ethernet switch.
2. Connect the Ethernet management port on storage controller 2 to the Ethernet switch.
Figure 51. Attach Label to Cable
2. Wrap the label around the cable until it fully encircles the cable. The bottom of each label is clear so
that it does not obscure the text.
Steps
1. Connect the replication port on storage controller 1 to Ethernet switch 2.
2. Connect the replication port on storage controller 2 to Ethernet switch 2.
NOTE: The management port on each storage controller is connected to an Ethernet switch on
the management network.
Related Links
Configure Embedded iSCSI Ports
Cabling the Management Port and Replication Port for iSCSI Replication
If replication is licensed and the Flex Port license is installed, both the management (MGMT) and
replication (REPL) ports can be used to replicate data to another Storage Center.
About this task
Connect the management port and replication port on each storage controller to an Ethernet switch
through which the Storage Center can perform replication.
Steps
1. Connect Flex Port Domain 1 (shown in orange) to the iSCSI network.
a. Connect the management port on storage controller 1 to the Ethernet switch.
b. Connect the management port on storage controller 2 to the Ethernet switch.
2. Connect iSCSI Embedded Domain 2 (shown in blue) to the iSCSI network.
a. Connect the replication port on storage controller 1 to the Ethernet switch.
b. Connect the replication port on storage controller 2 to the Ethernet switch.
Figure 54. Management and Replication Ports Connected to an iSCSI Network
Related Links
Configure Embedded iSCSI Ports
Two iSCSI Networks with Dual Storage Controllers and Embedded Ethernet Ports
Use two iSCSI networks to prevent an unavailable port, switch, or storage controller from causing a loss
of connectivity between the host servers and a storage system with dual storage controllers.
About this task
In this configuration, there are two fault domains, two iSCSI networks, and two Ethernet switches. Each
storage controller connects to each Ethernet switch using one iSCSI connection.
• If a physical port or Ethernet switch becomes unavailable, the storage system is accessed from the
switch in the other fault domain.
• If a storage controller becomes unavailable, the virtual ports on the offline storage controller move to
the physical ports on the other storage controller.
Steps
1. Connect each server to both iSCSI networks.
2. Connect embedded fault domain 1 (shown in orange) to iSCSI network 1.
a. Connect the management port on storage controller 1 to Ethernet switch 1.
b. Connect the management port on storage controller 2 to Ethernet switch 1.
3. Connect embedded fault domain 2 (shown in blue) to iSCSI network 2.
a. Connect the replication port on storage controller 1 to Ethernet switch 2.
b. Connect the replication port on storage controller 2 to Ethernet switch 2.
Figure 55. Storage System with Dual Storage Controllers and Two Ethernet Switches
1. Server 1 2. Server 2
3. Ethernet switch 1 (Fault domain 1) 4. Ethernet switch 2 (Fault domain 2)
5. Storage system 6. Storage controller 1
7. Storage controller 2
4. To configure the fault domains and ports, click the Configure Embedded iSCSI Ports link on the
Configuration Complete page of the Discover and Configure Uninitialized SCv2000 Series Storage
Centers wizard.
Next steps
Install or enable MPIO on the host servers.
NOTE: After the Storage Center configuration is complete, run the host access wizard to configure
host server access and apply MPIO best practices. For the latest best practices, see the Dell Storage
Center Best Practices documents located on the Dell TechCenter (http://en.community.dell.com/
techcenter/storage/).
Related Links
Configure Embedded iSCSI Ports
One iSCSI Network with Dual Storage Controllers and Embedded Ethernet Ports
Use one iSCSI network to prevent an unavailable port or storage controller from causing a loss of
connectivity between the host servers and a storage system with dual storage controllers.
About this task
In this configuration, there are two fault domains, one iSCSI network, and one Ethernet switch. Each
storage controller connects to the Ethernet switch using two iSCSI connections.
• If a physical port becomes unavailable, the storage system is accessed from another port on the
Ethernet switch.
• If a storage controller becomes unavailable, the virtual ports on the offline storage controller move to
the physical ports on the other storage controller.
NOTE: This configuration is vulnerable to switch unavailability, which results in a loss of connectivity
between the host servers and storage system.
Steps
1. Connect each server to the iSCSI network.
2. Connect embedded fault domain 1 (shown in orange) to the iSCSI network.
a. Connect the management port on storage controller 1 to the Ethernet switch.
b. Connect the management port on storage controller 2 to the Ethernet switch.
3. Connect embedded fault domain 2 (shown in blue) to the iSCSI network.
a. Connect the replication port on storage controller 1 to the Ethernet switch.
b. Connect the replication port on storage controller 2 to the Ethernet switch.
Figure 56. Storage System with Dual Storage Controllers and One Ethernet Switch
1. Server 1 2. Server 2
3. Ethernet switch (Fault domain 1 and fault domain 2) 4. Storage system
5. Storage controller 1 6. Storage controller 2
4. To configure the fault domains and ports, click the Configure Embedded iSCSI Ports link on the
Configuration Complete page of the Discover and Configure Uninitialized SCv2000 Series Storage
Centers wizard.
Next steps
Install or enable MPIO on the host servers.
NOTE: After the Storage Center configuration is complete, run the host access wizard to configure
host server access and apply MPIO best practices. For the latest best practices, see the Dell Storage
Center Best Practices documents located on the Dell TechCenter (http://en.community.dell.com/
techcenter/storage/).
Related Links
Configure Embedded iSCSI Ports
4
Back-End Cabling and Connecting Power
Back-end cabling refers to the connections between the storage system and expansion enclosures. After
the back-end cabling is complete, connect power cables to the storage system components and turn on
the hardware.
An SCv2000/SCv2020 storage system can be deployed with or without expansion enclosures.
• When an SCv2000/SCv2020 is deployed without expansion enclosures, the storage controllers must
be interconnected using SAS cables. This connection enables SAS path redundancy between the
storage controllers and the internal disks.
• When an SCv2000/SCv2020 is deployed with expansion enclosures, the expansion enclosures
connect to the SAS ports on the storage controllers.
SAS Redundancy
Use redundant SAS cabling to make sure that an unavailable IO port or storage controller does not cause
a Storage Center outage.
If an IO port or storage controller becomes unavailable, the Storage Center IO moves to the redundant
path.
Example
• Side A (Orange): Expansion enclosures are connected from port B to port A, using the top EMMs.
• Side B (Blue): Expansion enclosures are connected from port A to port B, using the bottom EMMs.
Path Connections
Chain 1: A Side (Orange)
1. Storage controller 1: port A to the expansion enclosure: top EMM, port A.
2. Expansion enclosure: top EMM, port B to storage controller 2: port B.
To connect additional expansion enclosures, cable the expansion enclosures in series. Connect the top
EMM, port B of the last enclosure in the chain to the top EMM, port A of the enclosure being added. Then
connect the bottom EMM, port B of the last enclosure in the chain to the bottom EMM, port A of the
enclosure being added.
Path Connections
Chain 1: A Side (Orange)
1. Storage controller 1: port A to expansion enclosure 1: top EMM, port A.
2. Expansion enclosure 1: top EMM, port B to expansion enclosure 2: top EMM, port A.
3. Expansion enclosure 2: top EMM, port B to storage controller 2: port B.
Chain 1: B Side (Blue)
1. Storage controller 2: port A to expansion enclosure 1: bottom EMM, port A.
2. Expansion enclosure 1: bottom EMM, port B to expansion enclosure 2: bottom EMM, port A.
3. Expansion enclosure 2: bottom EMM, port B to storage controller 1: port B.
2. Wrap the label around the cable until it fully encircles the cable. The bottom of each label is clear so
that it does not obscure the text.
• If the storage system is installed without expansion enclosures, connect power cables to the storage
system chassis and turn on the storage system.
• If the storage system is installed with expansion enclosures, connect power cables to the expansion
enclosure chassis and turn on the expansion enclosures as described in the Dell Storage Center
SC100/SC120 Expansion Enclosure Getting Started Guide. After the expansion enclosures are
powered on, connect power to the storage system chassis and turn on the storage system.
Steps
1. Ensure that the power switches are in the OFF position before connecting the power cables.
2. Connect the power cables to both power supply/cooling fan modules in the storage system chassis
and secure the power cables firmly to the brackets using the straps provided.
3. Plug the other end of the power cables into a grounded electrical outlet or a separate power source
such as an uninterruptible power supply (UPS) or a power distribution unit (PDU).
4. Press both power switches on the rear of the storage system chassis to turn on the storage system.
When the SCv2000/SCv2020 storage system is powered on, there is a delay while the storage
system prepares to start up. During the first minute, the only indication that the storage system is
powered on are the LEDs on the storage controllers.
The storage system hardware must be installed and cabled before the Storage Center can be configured.
Management IPv4 address (Storage Center management address) ___ . ___ . ___ . ___
Top Controller IPv4 address (Controller 1 MGMT port) ___ . ___ . ___ . ___
Bottom Controller IPv4 address (Controller 2 MGMT port) ___ . ___ . ___ . ___
NOTE: For a storage system deployed with two Ethernet switches, Dell recommends setting up
each fault domain on separate subnets.
Table 7. iSCSI Fault Domain 1
IPv4 address for storage controller module 1: port 1 ___ . ___ . ___ . ___
IPv4 address for storage controller module 2: port 1 ___ . ___ . ___ . ___
(Four port I/O card only) IPv4 address for storage controller module 1: port 3 ___ . ___ . ___ . ___
(Four port I/O card only) IPv4 address for storage controller module 2: port 3 ___ . ___ . ___ . ___
IPv4 address for storage controller module 1: port 2 ___ . ___ . ___ . ___
IPv4 address for storage controller module 2: port 2 ___ . ___ . ___ . ___
(Four port I/O card only) IPv4 address for storage controller module 1: port 4 ___ . ___ . ___ . ___
(Four port I/O card only) IPv4 address for storage controller module 2: port 4 ___ . ___ . ___ . ___
(Four port I/O card only) Physical WWN of storage controller 1: port 3 ________________
(Four port I/O card only) Physical WWN of storage controller 2: port 3 ________________
(Four port I/O card only) Virtual WWN of storage controller 1: port 3 ________________
(Four port I/O card only) Virtual WWN of storage controller 2: port 3 ________________
(Four port I/O card only) Physical WWN of storage controller 1: port 4 ________________
(Four port I/O card only) Physical WWN of storage controller 2: port 4 ________________
(Four port I/O card only) Virtual WWN of storage controller 1: port 4 ________________
(Four port I/O card only) Virtual WWN of storage controller 2: port 4 ________________
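The worksheet and the notes that follow require all addresses within an iSCSI fault domain to share one subnet, the management and controller addresses to share one subnet, and, for a deployment with two Ethernet switches, the two fault domains to use separate subnets. A filled-in worksheet can be checked against these rules before running the wizard; the sketch below is a minimal example using Python's ipaddress module, and all addresses, the subnet mask, and the grouping of ports into fault domains shown are placeholders, not recommended values.

```python
# Sketch: validate a filled-in deployment worksheet before running the wizard.
# All addresses below are placeholders for illustration.
import ipaddress

subnet_mask = "255.255.255.0"

management = ["10.10.1.10", "10.10.1.11", "10.10.1.12"]   # SC mgmt, controller 1, controller 2
fault_domain_1 = ["10.20.1.11", "10.20.1.21"]             # controller 1/2: port 1 (and port 3 if present)
fault_domain_2 = ["10.20.2.11", "10.20.2.21"]             # controller 1/2: port 2 (and port 4 if present)

def network_of(addresses):
    """Return the common subnet of a group, or None if the group spans subnets."""
    networks = {
        ipaddress.ip_network(f"{addr}/{subnet_mask}", strict=False)
        for addr in addresses
    }
    return networks.pop() if len(networks) == 1 else None

for name, group in [("management", management),
                    ("iSCSI fault domain 1", fault_domain_1),
                    ("iSCSI fault domain 2", fault_domain_2)]:
    net = network_of(group)
    print(f"{name}: {'OK, ' + str(net) if net else 'addresses span multiple subnets'}")

# With two Ethernet switches, the fault domains should be on separate subnets.
if network_of(fault_domain_1) == network_of(fault_domain_2):
    print("Warning: both fault domains share a subnet; use separate subnets for two switches.")
```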
Steps
1. Make sure that you have the required information that is listed on the first page of the wizard. This
information is needed to configure the Storage Center.
2. Click Next. The Select a Storage Center to Initialize page appears and lists the uninitialized Storage
Centers discovered by the wizard.
1. Enter a descriptive name for the Storage Center in the Storage Center Name field.
2. Enter the system management IPv4 address for the Storage Center in the Management IPv4 Address
field. The Management IPv4 Address is the IP address used to manage the Storage Center and is
different from the controller IPv4 addresses.
3. Enter an IPv4 address for the management port of each controller.
NOTE: The controller IPv4 addresses and Management IPv4 Address must be within the same
subnet.
4. Enter the subnet mask of the management network in the Subnet Mask field.
5. Enter the gateway address of the management network in the Gateway IPv4 Address field.
6. Enter the domain name of the management network in the Domain Name field.
7. Enter the DNS server addresses of the management network in the DNS Server and Secondary DNS
Server fields.
8. Click Next. The Set Administration Information page appears.
1. Enter a new password for the default Storage Center administrator user in the New Admin Password
and Confirm Password fields.
2. Enter the email address of the default Storage Center administrator user in the Admin Email Address
field.
3. Click Next.
• For a Fibre Channel or SAS storage system, the Confirm Configuration page appears.
• For an iSCSI storage system, the Configure iSCSI Fault Domains page appears.
1. (Optional) On the Configure iSCSI Fault Domains page, click More information about fault domains
or How to set up an iSCSI network to learn more about these topics.
2. Click Next.
NOTE: If there are down iSCSI ports, a dialog box appears that allows you to unconfigure down
iSCSI ports. Unconfiguring the down iSCSI ports will prevent unnecessary alerts.
3. On the Configure iSCSI HBA Fault Domain 1 page, enter network information for the fault domain
and its ports.
NOTE: Make sure that all the IP addresses for iSCSI Fault Domain 1 are in the same subnet.
4. Click Next.
5. On the Configure iSCSI HBA Fault Domain 2 page, enter network information for the fault domain
and its ports. Then click Next.
NOTE: Make sure that all the IP addresses for iSCSI Fault Domain 2 are in the same subnet.
6. Click Next.
NOTE: After the Apply Configuration button is clicked, the configuration cannot be changed
until after the Storage Center is fully configured.
1. The Storage Center performs system setup tasks. The Initialize Storage Center page displays the
status of the system setup tasks.
To learn more about the initialization process, click More information about Initialization.
• If one or more of the system setup tasks fails, click Troubleshoot Initialization Error to learn how
to resolve the issue.
• If the Configuring Disks task fails, click View Disks to see the status of the disks detected by the
Storage Center.
1. (Optional) On the Fault Domains page, click More information about fault domains to learn more
about fault domains.
2. Click Next.
3. On the Review Front-End Configuration page, make sure that the information about the fault
domains is correct.
4. Using the information provided on the Review Front-End Configuration page, configure Fibre
Channel zoning to create the physical and virtual zones described in Fibre Channel Zoning.
5. Click Next.
1. (Optional) On the Fault Domains page, click More information about fault domains to learn more
about fault domains.
2. Click Next.
3. On the Review Front-End Configuration page, make sure that the information about the fault
domains is correct.
4. Click Next.
1. From the Region and Time Zone drop-down menus, select the region and time zone used to set the
time.
2. Select Use NTP Server and enter the host name or IPv4 address of the NTP server, or select Set
Current Time and set the time and date manually.
3. Click Next.
1. To allow SupportAssist to collect diagnostic data and send this information to technical support,
select By checking this box you accept the above terms.
2. Click Next.
3. If you did not select By checking this box you accept the above terms, the SupportAssist
Recommended pane appears.
• Click No to return to the SupportAssist Data Collection and Storage page and accept the
agreement.
• Click Yes to opt out of SupportAssist and proceed to the Update Storage Center page.
Dell strongly recommends enabling SupportAssist, which provides comprehensive support service at the
time of an incident as well as proactive service.
• If no update is available, the Storage Center Up to Date page appears. Click Next.
• If an update is available, the current and available versions are listed.
a. Select Enabled.
b. Enter the proxy settings.
c. Click OK. The Storage Center attempts to contact the SupportAssist Update Server to check for
updates.
1. (Optional) Click one of the Next Steps to configure a localhost, configure a VMware host, or create a
volume.
When you have completed the step, you are returned to the Configuration Complete page.
2. (Optional) Click one of the Advanced Steps to configure embedded iSCSI ports or modify BMC
settings.
Steps
1. On the Configuration Complete page of the Discover and Configure Storage Center wizard, click
Set up block level storage for this host.
The Set up localhost for Storage Center wizard appears.
• If the Storage Center has iSCSI ports and the host is not connected to any interface, the Log into
Storage Center via iSCSI page appears. Select the target fault domains, and then click Log In.
• In all other cases, the Verify localhost Information page appears. Proceed to the next step.
2. On the Verify localhost Information page, verify that the information is correct. Then click Create
Server.
The server definition is created on the Storage Center for the connected and partially connected
initiators.
3. The Host Setup Successful page displays the best practices that were set by the wizard and best
practices that were not set. Make a note of any best practices that were not set by the wizard. It is
recommended that these updates be applied manually before starting IO to the Storage Center.
4. (Optional) Place a check next to Create a Volume for this host to create a volume after finishing host
setup.
5. Click Finish.
1. Configure the fault domain and ports for iSCSI Embedded Domain 1.
a. Enter the target IPv4 address, subnet mask, and gateway for the fault domain.
b. Enter an IPv4 address for each port in the fault domain.
NOTE: Make sure that all the IP addresses for iSCSI Embedded Domain 1 are in the same
subnet.
2. If the Flex Port license is installed, configure the fault domain and ports for Flex Port Domain 1.
a. Enter the target IPv4 address, subnet mask, and gateway for the fault domain.
b. Enter an IPv4 address for each port in the fault domain.
NOTE: Make sure that all the IP addresses for Flex Port Domain 1 are in the same
subnet.
3. Click OK.
1. Connect to the server, create a Test folder on the server, and copy at least 2 GB of data into it (a sketch for generating this test data follows these steps).
2. Restart the top storage controller while copying data to verify that the failover event does not
interrupt IO.
a. Copy the Test folder to the TestVol1 volume.
b. During the copy process, restart the top storage controller (the storage controller through which
TestVol1 is mapped) by selecting it from the Hardware tab and clicking Shutdown/Restart
Controller.
c. Verify that the copy process continues while the storage controller restarts.
d. Wait several minutes and verify that the storage controller has finished restarting.
3. Restart the bottom storage controller while copying data to verify that the failover event does not
interrupt IO.
a. Copy the Test folder to the TestVol2 volume.
b. During the copy process, restart the bottom storage controller (the storage controller through
which the TestVol2 is mapped) by selecting it from the Hardware tab and clicking Shutdown/
Restart Controller.
c. Verify that the copy process continues while the storage controller restarts.
d. Wait several minutes and verify that the storage controller has finished restarting.
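Both the restart test above and the MPIO test that follows begin by copying a Test folder of at least 2 GB to the mapped test volumes. The following is a minimal sketch for generating such a folder on the server; the folder path, file sizes, and file names are arbitrary placeholders and not part of the documented procedure.

```python
# Sketch: create a Test folder containing roughly 2 GB of throwaway data to
# copy during the controller restart and MPIO tests. Paths are placeholders.
import os

def make_test_folder(path="Test", total_bytes=2 * 1024**3, file_bytes=64 * 1024**2):
    os.makedirs(path, exist_ok=True)
    written = 0
    index = 0
    while written < total_bytes:
        chunk = min(file_bytes, total_bytes - written)
        with open(os.path.join(path, f"testfile_{index:03d}.bin"), "wb") as handle:
            handle.write(os.urandom(chunk))
        written += chunk
        index += 1
    print(f"Wrote {written / 1024**3:.1f} GiB across {index} files in '{path}'.")

if __name__ == "__main__":
    make_test_folder()
```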
Test MPIO
Perform the following tests for a Storage Center with Fibre Channel or iSCSI front-end connectivity if the
network environment and servers are configured for MPIO. Always perform the following tests for a
Storage Center with SAS front-end connectivity.
1. Create a Test folder on the server and copy at least 2 GB of data into it.
2. Make sure that the server is configured to use load balancing MPIO (round-robin).
3. Manually disconnect a path while copying data to TestVol1 to verify that MPIO is functioning
correctly.
a. Copy the Test folder to the TestVol1 volume.
b. During the copy process, disconnect one of the paths and verify that the copy process continues.
c. Reconnect the path.
4. Repeat the previous steps as necessary to test additional paths.
5. Restart the storage controller that contains the active path while IO is being transferred and verify
that the IO process continues.
6. If the front-end connectivity of the Storage Center is Fibre Channel or iSCSI and the Storage Center
is not in a production environment, restart the switch that contains the active path while IO is being
transferred, and verify that the IO process continues.
1. Connect to the server to which the volumes are mapped and remove the volumes.
2. Connect to the Storage Center using the Dell Storage Client.
3. Click the Storage tab.
4. From the Storage tab navigation pane, select the Volumes node.
5. Select the volumes to delete.
6. Right-click on the selected volumes and select Delete. The Delete dialog box appears.
7. Click OK.
1. Click Send SupportAssist Data Now. The Send SupportAssist Data Now dialog box appears.
2. Select Storage Center Configuration and Detailed Logs.
3. Click OK.
Related Links
Cable the Expansion Enclosures Together
Check the Current Disk Count before Adding Expansion Enclosures
1. Connect a SAS cable from expansion enclosure 1: top, port B to expansion enclosure 2: top, port A.
2. Connect a SAS cable from expansion enclosure 1: bottom, port B to expansion enclosure 2: bottom,
port A.
2. Wrap the label around the cable until it fully encircles the cable. The bottom of each label is clear so
that it does not obscure the text.
Related Links
Check the Current Disk Count before Adding Expansion Enclosures
Add an Expansion Enclosure to the A-side Chain
Add an Expansion Enclosure to the B-side Chain
Label the Back-End Cables
1. Turn on the expansion enclosure being added. When the drives spin up, make sure that the front
panel and power status LEDs show normal operation.
2. Disconnect the A-side cables (shown in orange) from the storage controllers. The storage system IO
continues through the B-side cables.
1. Disconnect the B-side cables (shown in blue) from the storage controllers. The storage system IO
continues through the A-side cables.
• Disconnect the SAS cable from storage controller 1: port B.
• Disconnect the SAS cable from storage controller 2: port A.
2. Wrap the label around the cable until it fully encircles the cable. The bottom of each label is clear so
that it does not obscure the text.
Related Links
Release the Disks in the Expansion Enclosure
Disconnect the A-Side Chain from the SC100/SC120 Expansion Enclosure
Disconnect the B-Side Chain from the SC100/SC120 Expansion Enclosure
When all of the drives in the expansion enclosure are in the Unassigned disk folder, the expansion
enclosure is safe to remove.
Figure 81. Disconnecting the SC100/SC120 Expansion Enclosure from the A-side Chain
1. Check the status of the storage controller using the Dell Storage Client.
2. Check the pins and reseat the storage controller.
a. Remove the storage controller.
b. Verify that the pins on the storage system backplane and the storage controller are not bent.
c. Reinstall the storage controller.
3. Determine the status of the storage controller link status indicators. If the indicators are not green,
check the cables.
a. Shut down the storage controller.
b. Reseat the cables on the storage controller.
c. Restart the storage controller.
d. Recheck the link status indicators. If the link status indicators are not green, replace the cables.
1. Check the status of the hard drive using the Dell Storage Client.
2. Determine the status of the hard drive indicators.
• If the hard drive status indicator blinks amber on 2 seconds / off 1 second, the hard drive has
failed.
• If the hard drive status indicator is not lit, proceed to the next step.
3. Check the connectors and reseat the hard drive.
a. Remove the hard drive.
b. Check the hard drive and the backplane to ensure that the connectors are not damaged.
c. Reinstall the hard drive. Make sure the hard drive makes contact with the backplane.
1. Check the status of the expansion enclosure using the Dell Storage Client.
2. If an expansion enclosure and/or drives are missing in the Dell Storage Client, you may need to
check for and install Storage Center updates to use the expansion enclosure and/or drives.
3. If an expansion enclosure firmware update fails, check the back-end cabling and ensure that
redundant connections are used.