Data Center Design
Chapter Four: Standards
General Standards
There are two types of environments in the data center:
local area networks (LANs) and storage area networks
(SANs).
A LAN is a network linking multiple devices in a single
geographical location. Typical LAN speeds are 1 Gb or
10 Gb Ethernet.
A SAN is a dedicated network linking servers to storage
equipment, which brings the flexibility of networking
to servers and storage. Speeds are typically 2G, 4G, 8G or
10G Fibre Channel.
When designing a data center, several factors should be
taken into consideration, including standards compliance.
TIA-942, Telecommunications Infrastructure Standard for
Data Centers, details several of the factors that should be
considered when designing a data center. When implementing
a structured cabling solution, the standard recommends
a star topology architecture to achieve maximum
network flexibility. TIA-942 outlines additional factors
crucial to data center design, including recognized media,
cable types, recommended distances, pathway and space
considerations and redundancy. In addition to standards
compliance, the need for infrastructure flexibility to
accommodate future moves, adds and changes due to
growth, new applications, data rates and technology
advancements in system equipment must be considered.
Data Center Needs
As data centers face the continued need to expand and
grow, the fundamental concerns are constant. Data
center infrastructures must provide reliability, flexibility
and scalability in order to meet the demands of the
ever-changing data center network.
Reliability: Data center cabling infrastructures
must provide security and enable 24 x 7 x 365 uptime.
Tier IV data centers have availability requirements of 99.995
percent, allowing less than one-half hour of downtime per year.
Flexibility: With change being the constant in data
centers, the cabling infrastructure must be modular
to accommodate changing requirements and easy to
manage and adjust for minimal downtime during
moves, adds and changes.
Scalability: Cabling infrastructures must support data
center growth, both in addition of system electronics
and increasing data rates to accommodate the need for
more bandwidth. The infrastructure must be able to
support existing serial duplex transmission and provide
a clear migration path to future parallel optic transmission.
In general, the infrastructure should be designed
to meet the challenges of the data center over a 15- to
20-year service life.
TIA-942
TIA-942, Telecommunications Infrastructure Standards
for Data Centers, was released in April 2005. The purpose
of this standard is to provide information on the factors
that should be considered when planning and preparing
the installation of a data center or computer room.
TIA-942 combines within a single document all of the
information specific to data center applications. This
standard defines the telecommunications spaces, infrastructure
components and requirements for each within
the data center. Additionally, the standard includes guidance
as to recommended topologies, cabling distances,
building infrastructure requirements, labeling and
administration, and redundancy.
Data Center Spaces and Infrastructure
The main elements of a data center, defined by TIA-942,
are the entrance room (ER), main distribution area
(MDA), horizontal distribution area (HDA), zone
distribution area (ZDA), equipment distribution area
(EDA) and telecommunications room (TR).
Entrance room (ER): The space used for the
interface between data center structured cabling
and interbuilding cabling, both access provider
and customer-owned. The ER interfaces with the
computer room through the MDA.
Main distribution area (MDA): Includes the main
cross-connect, which is the central point of distribution
for the data center structured cabling system and may
include a horizontal cross-connect when equipment
areas are directly served from the MDA. Every data
center shall include at least one MDA.
Horizontal distribution area (HDA):
Serves equipment areas.
Equipment distribution area (EDA): Allocated for
end equipment and shall not serve the purposes of
an ER, MDA or HDA.
Telecommunications room (TR): Supports cabling to
areas outside the computer room and shall meet the
specifications of ANSI/TIA-569-B.
The components of the cabling infrastructure, as defined
by TIA-942, are as follows:
Horizontal cabling
Backbone cabling
Cross-connect in the ER or MDA
Main cross-connect in the MDA
Horizontal cross-connect in the TR, HDA, MDA
Zone outlet or consolidation point in the ZDA
Outlet in the EDA
Figure 4.1: TIA-942 | Drawing ZA-3301 (access providers feed the entrance room for carrier equipment and demarcation; within the computer room, the main distribution area houses routers, backbone LAN/SAN switches, PBX and M13 muxes; a telecom room with office and operations center LAN switches serves the offices, operations center and support rooms; backbone cabling runs from the MDA to the horizontal distribution areas with LAN/SAN/KVM switches, and horizontal cabling runs from each HDA, optionally through a zone distribution area, to the equipment distribution areas with racks and cabinets)
In a data center, including HDAs, the maximum distance
allowed for horizontal cabling is 90 m, independent of
media type. With patch cords, the maximum channel
distance allowed is 100 m, assuming 5 m of patch cord at
each end of the channel for connection to end equipment.
When a ZDA is used, horizontal cabling distances for
copper may need to be reduced.
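As a quick check of these limits, the sketch below validates a proposed run against the 90 m horizontal and 100 m channel figures cited above. The function name and structure are illustrative only, not part of the standard.

```python
def check_channel(horizontal_m: float, patch_a_m: float = 5.0, patch_b_m: float = 5.0,
                  max_horizontal_m: float = 90.0, max_channel_m: float = 100.0) -> bool:
    """Return True if both the horizontal run and the total channel fit the limits cited above."""
    channel_m = horizontal_m + patch_a_m + patch_b_m
    ok = horizontal_m <= max_horizontal_m and channel_m <= max_channel_m
    print(f"horizontal {horizontal_m} m, channel {channel_m} m -> {'OK' if ok else 'exceeds limit'}")
    return ok

check_channel(90.0)            # 90 m + 5 m + 5 m = 100 m channel -> OK
check_channel(88.0, 7.0, 7.0)  # 102 m channel -> exceeds the 100 m channel limit
```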
Depending on the type and size of the data center, the
HDA may be collapsed back to the MDA. This is a typical
design for enterprise data centers. In this scenario, the
cabling from the MDA to the EDA, with or without a
ZDA, is considered horizontal cabling. In a collapsed
design, horizontal cabling is limited to 300 m for optical
fiber and 90 m for copper.
TIA-942 defines the maximum distance for backbone
cabling as being application and media dependent.
Figure 4.2: Horizontal Distribution Area Topology | Drawing ZA-3581 (horizontal cabling from the horizontal distribution area with LAN/SAN/KVM switches to the equipment distribution areas, directly or through a zone distribution area; 90 m horizontal distance, 100 m channel distance)
Figure 4.3: Reduced Data Center Topology | Drawing ZA-3427 (the main distribution area serves the equipment distribution areas directly, with or without a zone distribution area; horizontal cabling is limited to 300 m optical or 90 m copper)
Tier Ratings for Data Centers
Additional considerations when planning a data center
infrastructure include redundancy and reliability. TIA-942
describes redundancy using four tiers to distinguish
between varying levels of availability of the data center
infrastructure. The tiers used by this standard correspond
to industry tier ratings for data centers, as defined by the
Uptime Institute. The tiers are defined as Tier I, II, III
and IV, where a higher tier rating corresponds to increased
availability. The requirements of the higher-rated tiers are
inclusive of the lower level tiers. Tier ratings are specified
for various portions of the data center infrastructure,
including telecommunications systems, architectural and
structural systems, electrical systems and mechanical
systems. Each system can have a different tier rating;
however, the overall data center tier rating is equal to
the lowest of the ratings across the infrastructure.
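Because the overall rating is simply the lowest of the individual system ratings, the bookkeeping is a minimum over the systems. A minimal illustration (the example ratings are hypothetical):

```python
# Hypothetical per-system tier ratings; the overall rating is the lowest of them.
system_tiers = {
    "telecommunications": 4,
    "architectural_structural": 3,
    "electrical": 4,
    "mechanical": 3,
}

overall_tier = min(system_tiers.values())
print(f"Overall data center rating: Tier {overall_tier}")  # Tier 3
```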
Tier I Data Center: Basic
A data center with a Tier I rating has no redundancy.
The data center utilizes single paths and has no redundant
components.
From the Uptime Institute
A Tier I data center is susceptible to disruptions from both
planned and unplanned activity. It has computer power
distribution and cooling, but it may or may not have a
raised floor, a UPS, or an engine generator. The critical
load on these systems is up to 100 percent of N. If it does
have UPS or generators, they are single-module systems
and have many single points of failure. The infrastructure
should be completely shut down on an annual basis to
perform preventive maintenance and repair work.
Urgent situations may require more frequent shutdowns.
Operation errors or spontaneous failures of site infrastructure
components will cause a data center disruption.
Figure 4.4: Tier Ratings for Data Centers | Drawing ZA-3582 (redundancy in the data center: a primary customer maintenance hole, primary entrance room and primary distribution area are required for Tier 1 and higher; a secondary customer maintenance hole for Tier 2 and higher; a secondary entrance room for Tier 3 and higher; a secondary distribution area is optional for Tier 4)
TIA-942 includes four tiers relating to various levels of redundancy (Annex G):
Tier I: No redundancy; 99.671% availability
Tier II: Redundant components, but one path; 99.741% availability
Tier III: Multiple paths and components, but one active path; 99.982% availability
Tier IV: Multiple paths and components, all active; 99.995% availability (less than one-half hour of downtime per year)
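The availability figures above translate directly into allowable downtime per year. A minimal Python sketch of that conversion (assuming an 8,760-hour year; the percentages are the Annex G values quoted above):

```python
# Convert the Annex G availability percentages into downtime per year.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours, non-leap year assumed

tiers = {"Tier I": 99.671, "Tier II": 99.741, "Tier III": 99.982, "Tier IV": 99.995}

for tier, availability in tiers.items():
    downtime_hours = HOURS_PER_YEAR * (1 - availability / 100)
    print(f"{tier}: {availability}% available -> {downtime_hours:.1f} hours of downtime/year")

# Tier I  ~28.8 hours, Tier II ~22.7 hours, Tier III ~1.6 hours,
# Tier IV ~0.4 hours (under one-half hour), matching the sidebar above.
```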
Tier II Data Center: Redundant Components
A data center with a Tier II rating has redundant
components, but utilizes only a single path.
From the Uptime Institute
Tier II facilities with redundant components are slightly
less susceptible to disruptions from both planned and
unplanned activity than a basic data center. They have a
raised floor, UPS and engine generators, but their capacity
design is N+1, which has a single-threaded distribution
path throughout. Critical load is up to 100 percent of N.
Maintenance of the critical power path and other parts of
the site infrastructure will require a processing shutdown.
Tier III Data Center: Concurrently Maintainable
A data center with a Tier III rating has multiple paths,
but only one path is active.
From the Uptime Institute
Tier III level capability allows for any planned site infrastructure
activity without disrupting the computer hardware
operation. Planned activities include preventive and
programmable maintenance, repair and replacement of
components, addition or removal of capacity components,
testing of components and systems and more. For large
sites using chilled water, this means two independent sets of
pipes. Sufficient capacity and distribution must be available
to simultaneously carry the load on one path while
performing maintenance or testing on the other path.
Unplanned activities such as errors in operation or spontaneous
failures of facility infrastructure components will still
cause a data center disruption. The critical load on a system
does not exceed 90 percent of N. Many Tier III sites are
designed with planned upgrades to Tier IV when the
client's business case justifies the cost of additional protection.
The acid test for a concurrently maintainable data
center is the ability to accommodate any planned work
activity without disruption to computer room processing.
Tier IV Data Center: Fault Tolerant
A data center with a Tier IV rating has multiple active
paths and provides increased fault tolerance.
From the Uptime Institute
Tier IV provides site infrastructure capacity and capability
to permit any planned activity without disruption to the
critical load. Fault-tolerant functionality also provides the
ability of the site infrastructure to sustain at least one
worst-case unplanned failure or event with no critical load
impact. This requires simultaneously active distribution
paths, typically in a system-to-system configuration.
Electrically, this means two separate UPS systems in which
each system has N+1 redundancy. The combined critical
load on a system does not exceed 90 percent of N. As a
result of fire and electrical safety codes, there will still be
downtime exposure due to fire alarms or people initiating
an emergency power off (EPO). Tier IV requires all computer
hardware to have dual power inputs as defined by the
Institute's Fault-Tolerant Power Compliance Specifications
Version 2.0, which can be found at www.uptimeinstitute.org.
The acid test for a fault-tolerant data center is the ability
to sustain an unplanned failure or operations error without
disrupting computer room processing. In consideration
of this acid test, compartmentalization requirements must
be addressed.
Chapter Five: Designing a Scalable Infrastructure
Structured Cabling
TIA-942 provides structured cabling guidance for data
centers. To implement a structured cabling solution, a
star topology is recommended. If an unstructured cabling
solution is used (e.g., a point-to-point installation with
jumpers), moves, adds and changes (MACs) to the data
center become difficult. Issues that may arise include the
following: manageability, scalability, cooling, density and
flexibility. For data centers utilizing access flooring, it is
imperative to keep under-floor obstructions like cabling
to a minimum so cooling airflow is not impeded.
With a star topology, maximum flexibility in the network
is achieved. TIA-942 states that both horizontal and
backbone cabling shall be installed using a star topology.
The cabling infrastructure should be implemented to allow
moves, adds and changes without disturbing the cabling
itself. MACs include network reconfiguration, growing
and changing user applications and/or protocols.
Figure 5.1: Data Center Example | Drawing ZA-3583
Figure 5.2: Data Center Topology | Drawing ZA-3584 (MDA and EDA server cabinets; on the LAN side, a router feeds a distribution switch and edge switches serving the servers; on the SAN side, SAN switches connect the servers to storage)
Implementation of a star topology with ZDAs allows for
a flexible and manageable cabling infrastructure. Cabling
can be consolidated from hundreds of jumpers to just a
few low-profile, high-fiber-count trunk cables routed to
several zone locations. When adding equipment, extender
trunks (usually a much lower fiber count than the backbone trunks,
typically 12 to 48 fibers) can be added incrementally,
interconnected at the ZDA (TIA-942 only allows one
ZDA in a link; ZDAs cannot be concatenated) and routed
to the equipment racks. This can be done easily without
disrupting the backbone cabling and without pulling floor
tiles across the entire data center.
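As a rough illustration of this incremental approach, the sketch below estimates how many extender trunks a new group of cabinets would need. The cabinet count, fibers per cabinet and trunk size are hypothetical examples for illustration only, not values taken from TIA-942.

```python
import math

def extender_trunks_needed(new_cabinets: int, fibers_per_cabinet: int, trunk_size: int) -> int:
    """Number of extender trunks of a given fiber count needed to serve the new cabinets."""
    fibers_needed = new_cabinets * fibers_per_cabinet
    return math.ceil(fibers_needed / trunk_size)

# Example build-out: four new cabinets at 24 fibers each, using 48-fiber extender trunks
print(extender_trunks_needed(4, 24, 48))  # 2 trunks (96 fibers needed)
```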
Standards Compliance
When designing a data center to meet these needs, best
practices should be followed. TIA-942 addresses recommended
design practices for all areas of the data center,
including pathways and spaces and the cabling infrastructure.
Design Recommendations Using Zones
Zone distribution is not only a design topology recommended
in TIA-942, but also one incorporated into many
data centers operating today. Consider these steps when
planning a zoned architecture:
1. Identify zones or zone distribution areas (ZDAs)
throughout the data center.
2. Install high-fiber-count cabling from the MDA to the
localized zones or ZDAs.
3. Distribute lower-fiber-count cabling from the ZDAs
to the cabinets or components within the zone.
Zone distribution provides many benefits when
incorporated in the data center cabling infrastructure:
Reduces pathway congestion.
Limits data center disruption from the MDA and eases
implementation of MACs.
Enables a modular solution for a pay-as-you-grow
approach.
Figure 5.3: TIA-942 | Drawing ZA-3301 (the TIA-942 topology of Figure 4.1: entrance room, main distribution area, telecom room, horizontal distribution areas, zone distribution area and equipment distribution areas)
Zone Distribution in the Data Center:
Figure 5.4: Identify Zones or ZDAs | Drawing ZA-3585 (server cabinets grouped into zones, with a zone distribution area located in the center of each zone and a main distribution area/MDF at the head of the room; additional cabinet zones can be added)
Figure 5.5: Install High-Fiber-Count Cabling | Drawing ZA-3586 (trunk cabling star-networked from the MDF to the ZDAs)
Figure 5.6: Distribute Lower-Fiber-Count Cabling | Drawing ZA-3587 (connectivity is quickly and easily deployed from the ZDAs to the server cabinets on an as-needed basis)
Chapter Six: Determining the Fiber Counts
The selection of the fiber count, or number of fibers used
in the cable plant, is an extremely important decision that
impacts both the current and future system capabilities,
as well as the cost of a communications network. The
development and widespread use of fiber in all aspects of
the data center network require the designer to plan not
only for the immediate system requirements, but for the
evolution of future system demands as well. Since these
fiber systems will provide service for a number of differ-
ent applications later, the number of fibers designed
into the network today must be carefully considered.
Before fiber counts are determined, the designer needs
to analyze the following:
1. Physical infrastructure design for data centers: TIA-942; defining MDAs, HDAs and ZDAs
2. Logical topologies for data centers: common architectures
3. Mapping logical topologies into the physical infrastructure: TIA-942 and logical architectures; choosing the proper TIA-942 architecture
Logical Topologies for Data Centers
While standards help guide the data center physical
infrastructure, the data center logical infrastructure
does not have a standards body helping with design.
Logical architectures as shown in Table 6.1 vary based
on customer preference and are also guided by the
electronics manufacturers.
Though a standard does not exist, there are some
common architecture best practices that can be followed.
Most logical architectures can be broken into four layers:
1. Core
2. Aggregation
3. Access
4. Storage
Core
The core layer provides the high-speed connectivity
between the data center and the campus network.
This is typically the area where multiple ISPs provide
connections to the internet.
Aggregation
The aggregation layer provides a point where all server
area devices can share common applications such as firewalls,
cache engines, load balancers and other value-added
services. The aggregation layer must be able to support
multiple 10 Gb and 1 Gb connections to provide a
high-speed switching fabric.
Access
The access layer provides the connectivity between the
aggregation layer shared services and the server farm.
Since additional segmentation may be required in the
access area, three different segments are needed:
1. Front-end segment: This area contains web servers,
DNS servers, FTP and other business application
servers.
2. Application segment: Provides the connection
between the front-end servers and the back-end servers.
3. Back-end segment: Provides connectivity to the
database servers. This segment also provides access
to the storage area network (SAN).
Storage
The storage layer contains the Fibre Channel switches and
other storage devices such as magnetic disk media or tape.
TABLE 6.1: Logical Architecture (layers: Core, Aggregation, Access, Storage; the per-layer diagrams are not reproduced here)
Figure 6.1: Logical Architecture | Drawing ZA-3656 (core layer, aggregation layer, access layer with front-end, application and back-end segments, and storage layer)
TIA-942 Physical Architecture Area | Logical Architecture Area
MDA = Main Distribution Area | Maps to Core and Aggregation
HDA = Horizontal Distribution Area | Maps to Aggregation
ZDA = Zone Distribution Area | Maps to Access and Storage
EDA = Equipment Distribution Area | Maps to Access and Storage
TABLE 6.2: Mapping Architectures
Mapping Logical Architectures to TIA-942
The key for many data center designers is how to translate
the many logical topologies onto a TIA-942 structured
cabling infrastructure. This translation will affect some
of the key design elements of a structured cabling solution
such as fiber counts, hardware considerations and physical
cable runs. The first step is to translate the TIA-942 areas
(MDA, HDA, ZDA, EDA) to the logical architecture areas
(core, aggregation, access, storage). Table 6.2 shows a
comparison between the two.
The next step is to take an example logical architecture
and translate it to a TIA-942 structured cabling solution.
In this example, we will use a small data center and map
the logical architecture shown in Figure 6.1 to the physical
architecture of the data center (racks and cabinets) that is
shown in Figure 6.2.
The next step is to choose the TIA-942 architecture that
will best map to the logical architecture shown in Figure
6.1. Since this data center is small, a reduced TIA-942
architecture will be implemented. In this architecture,
an MDA, ZDA and EDA will be implemented.
Figure 6.2: Data Center Rack Layout | Drawing ZA-3540 (a main distribution area with core switching, aggregation switching and SAN switching; rows of server cabinets and storage cabinets, each with a ZDA)
Figure 6.3: Data Center Cabled Architecture | Drawing ZA-3541 (the main cross-connect in the MDA feeds a ZDA in each zone; the front-end, application and back-end layer zones and the storage zone each contain eight EDAs served by their ZDA)
Figure 6.4: Switch Configuration | Drawing ZA-3657 (2x switch on top of the EDA cabinet and 2x blade server chassis with 16 pass-through 10GE connections each, for 32x 10GE downstream; up to 20 10GE uplinks per switch)
In implementing this structured cabling design, the data
center will be segmented based on the logical topology
shown in Figure 6.1. The segmentation will be as follows:
1. Collapse the core switching (LAN and SAN) and
aggregation switching into the MDA area.
2. Segment the access layer into three zones (front-end,
application and back-end).
3. Segment the storage into a separate zone.
Each zone will use a middle-of-row (MoR)
interconnect solution for the cabling, and within each
zone, the EDAs will utilize a top-of-rack (ToR) interconnect.
The EDAs will serve the electronics in each cabinet and
the ZDAs will serve the EDAs. The ZDAs will homerun
back to the MDA where they will terminate in a main
cross-connect (MC). This is shown in Figure 6.3.
The next step is to determine the number of fibers that
are needed to implement this structured cabling solution.
Two things the designer needs to take into account are:
1. Redundancy requirements for each section or zone
2. Networking requirements
Many data centers are set up to have redundant cable
routes to each zone area. An A and a B route are very
common in today's infrastructure design. Redundancy in
the data center will increase the fiber count to each zone.
Networking requirements will also affect the fiber counts
in the data center. Many networking configurations will
require redundant switches in each rack to reduce single
points of failure in the data center. Also, the number
of upstream ports versus downstream ports (oversubscription)
will affect the fiber count.
As illustrated in the switch configuration shown in
Figure 6.4, this configuration calls for two switches on top
of the EDA cabinet. Each switch will feed 16 blade servers
for a total of 32 downstream ports. The number of
upstream ports (fiber links back to the MDA) will
depend on how much the network engineers want to
oversubscribe the switch. For example, to have a 1:1 oversubscription,
you would need 32 upstream ports to match
the 32 downstream ports. Table 6.3 shows the fiber counts
required for this configuration.
Oversubscription Ratio Per Switch | 10G Uplinks Per Switch | Fiber Count Per Switch | Fibers Per Rack
8:1 | 4 | 8 | 24
4:1 | 8 | 16 | 48
1.6:1 | 20 | 40 | 96
TABLE 6.3: Oversubscription Ratios for 10G
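The uplink and per-switch fiber figures in Table 6.3 follow directly from the oversubscription ratio. A small sketch of that arithmetic, assuming duplex 10G links (2 fibers per uplink) and the 32 downstream ports described in the example above; the Fibers Per Rack column is taken from the table and not derived here:

```python
import math

def uplinks_for_ratio(downstream_ports: int, oversubscription: float) -> int:
    """Uplink ports needed so that downstream:upstream equals the given ratio."""
    return math.ceil(downstream_ports / oversubscription)

DOWNSTREAM_PORTS = 32      # downstream ports in the worked example above
FIBERS_PER_10G_UPLINK = 2  # duplex serial transmission

for ratio in (8, 4, 1.6):
    uplinks = uplinks_for_ratio(DOWNSTREAM_PORTS, ratio)
    fibers = uplinks * FIBERS_PER_10G_UPLINK
    print(f"{ratio}:1 -> {uplinks} uplinks, {fibers} fibers per switch")

# 8:1 -> 4 uplinks, 8 fibers; 4:1 -> 8 uplinks, 16 fibers; 1.6:1 -> 20 uplinks, 40 fibers
```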
Using Table 6.3 and applying a 1.6:1 oversubscription would
yield the fiber count configuration shown in Figure 6.5.
In Figure 6.5, each of the nine EDA cabinets requires
96 fibers to support the oversubscription rate and the
requirements for redundancy. Using 144-fiber trunk
cables yields three 144-fiber cables to Core A and three
144-fiber cables to Core B. The same process would need
to be repeated for the other zones in this example.
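The trunk-cable count above is simple ceiling division. A sketch of the arithmetic, assuming (as in the text) nine cabinets at 96 fibers each, an even split between the redundant A and B routes, and 144-fiber trunk cables:

```python
import math

CABINETS = 9
FIBERS_PER_CABINET = 96        # from Table 6.3 at 1.6:1 oversubscription
TRUNK_FIBER_COUNT = 144
ROUTES = ("Core A", "Core B")  # redundant A/B routes assumed to split the fibers evenly

fibers_per_route = CABINETS * FIBERS_PER_CABINET // len(ROUTES)     # 432 fibers
trunks_per_route = math.ceil(fibers_per_route / TRUNK_FIBER_COUNT)  # 3 trunks

for route in ROUTES:
    print(f"{route}: {fibers_per_route} fibers -> {trunks_per_route} x {TRUNK_FIBER_COUNT}-fiber trunks")
```

The same calculation, rerun with the 40G and 100G per-cabinet fiber counts from Tables 6.4 and 6.5, gives the nine and eighteen trunks per route cited later in this chapter.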
The Future: 40G/100G Systems
Migrating to the next generation of switches will require
careful planning for fiber counts. Advanced systems such
as 40G Ethernet and 100G Ethernet will require thousands
of fibers for network connectivity. 40G Ethernet systems
will utilize a 12-fiber MPO-style (MTP®) connector as
the interface into the end electronics. A basic configuration
for a 40G switch may consist of 12 fibers per port and
16 ports per card (Figure 6.6).
If the designer replaces the 10G switches with 40G
switches, the fiber count would increase. Using the same
scenario (32 servers) and the same oversubscription
ratios as before, the fiber counts per rack increase.
Table 6.4 shows the fiber counts based on 40G.
Figure 6.6: Switch Configuration | Drawing ZA-3588 (40G switch card with 12 fibers per port)
Figure 6.5: Fiber Count Configuration | Drawing ZA-3658 (front-end, application, back-end and storage layer zones, each a row of EDA cabinets with a ZDA; 96 fibers per cabinet, with 3 x 144-fiber trunks to Core A and 3 x 144-fiber trunks to Core B in the MDA)
Oversubscription Ratio Per Switch | 40G Uplinks Per Switch | Fiber Count Per Switch | Fibers Per Rack
8:1 | 4 | 48 | 72
4:1 | 8 | 96 | 144
1.6:1 | 20 | 240 | 288
TABLE 6.4: Oversubscription Ratios for 40G
Using Table 6.4 and applying a 1.6:1 oversubscription would
yield the fiber count configuration shown in Figure 6.7.
In this example, each of the nine EDA cabinets requires
288 fibers to support the oversubscription rate of 1.6:1
and the requirements for redundancy. Using 144-fiber
trunk cables yields nine 144-fiber cables to Core A and
nine 144-fiber cables to Core B.
100G Ethernet systems will utilize a 24-fiber MTP®
connector as the interface into the end electronics.
A basic configuration for a 100G switch may consist
of 24 fibers per port and 16 ports per card.
If the designer replaces the 10G switches with 100G
switches, the fiber count would increase. Using the same
oversubscription ratios as before, the fiber counts per rack
increase. Table 6.5 shows the fiber counts based on 100G.
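Across Tables 6.3 through 6.5, the Fiber Count Per Switch column is simply the number of uplinks multiplied by the fibers each uplink interface uses: 2 for a duplex 10G link, 12 for the 40G MPO-style connector and 24 for the 100G MTP connector described above. A sketch of that relationship (the Fibers Per Rack figures come from the tables and are not derived here):

```python
import math

FIBERS_PER_UPLINK = {"10G": 2, "40G": 12, "100G": 24}  # duplex, 12-fiber MPO, 24-fiber MTP
DOWNSTREAM_PORTS = 32  # same 32-downstream-port switch scenario as before

for rate, fibers_per_uplink in FIBERS_PER_UPLINK.items():
    for ratio in (8, 4, 1.6):
        uplinks = math.ceil(DOWNSTREAM_PORTS / ratio)
        fibers_per_switch = uplinks * fibers_per_uplink
        print(f"{rate} at {ratio}:1 -> {uplinks} uplinks, {fibers_per_switch} fibers per switch")

# Reproduces the per-switch columns: 8/16/40 (10G), 48/96/240 (40G), 96/192/480 (100G)
```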
Using Table 6.5 and applying a 1.6:1 oversubscription
would yield a fiber count configuration shown in
Figure 6.8.
In this example, each of the nine EDA cabinets requires
576 fibers to support the oversubscription rate of 1.6:1
and the requirements for redundancy. Using 144-fiber
trunk cables yields 18 144-fiber cables to Core A
and 18 144-fiber cables to Core B.
Oversubscription Ratio Per Switch | 100G Uplinks Per Switch | Fiber Count Per Switch | Fibers Per Rack
8:1 | 4 | 96 | 144
4:1 | 8 | 192 | 288
1.6:1 | 20 | 480 | 576
TABLE 6.5: Oversubscription Ratios for 100G
Figure 6.7: Fiber Count Configuration | Drawing ZA-3658 (40G case: front-end, application, back-end and storage layer zones with 288 fibers per cabinet; 9 x 144-fiber trunks to Core A and 9 x 144-fiber trunks to Core B in the MDA)
Figure 6.8: Fiber Count Configuration | Drawing ZA-3658 (100G case: front-end, application, back-end and storage layer zones with 576 fibers per cabinet; 18 x 144-fiber trunks to Core A and 18 x 144-fiber trunks to Core B in the MDA)
Corning Cable Systems LLC | PO Box 489 | Hickory, NC 28603-0489 USA
800-743-2675 | FAX: 828-325-5060 | International: +1-828-901-5000 | www.corning.com/cablesystems
Corning Cable Systems reserves the right to improve, enhance and modify the features and specifications of Corning Cable Systems products without prior notification. ALTOS, LANscape, Pretium and UniCam are registered trademarks of Corning Cable Systems Brands, Inc. CamSplice, LID-SYSTEM, Plug & Play and Pretium EDGE are trademarks of Corning Cable Systems Brands, Inc. ClearCurve and Corning are registered trademarks of Corning Incorporated. MTP is a registered trademark of US Conec, Ltd. All other trademarks are the properties of their respective owners. Corning Cable Systems is ISO 9001 certified. © 2010 Corning Cable Systems. All rights reserved. Published in the USA. LAN-1160-EN / November 2010