Data Center Guide
1. INTRODUCTION
2. SCOPE
3. SPACES
4. PATHWAYS
5. CABINETS
10. CERTIFICATION
REFERENCES
1.1 The term data center is used to describe a specialized facility for housing a large
number of servers, storage devices, networking equipment and communications
links. Past (and current) equivalents to a data center include mainframe computer
rooms and telephone company central offices (COs). In all cases, the facility serves
as a centralized operations center for processing, storage, and communications.
1.2 Like a commercial office building, a data center can serve a single organization,
such as a bank or an Internet merchant. Alternatively, the owner of the data center
can lease out space to multiple organizations, enabling them to place (or collocate)
their equipment in a securely managed facility. A variation of this model has the
owner leasing the hardware as well, with the “tenant” responsible for running their
application(s) on the leased server(s).
1.3 Regardless of the type of ownership, all data centers have similar needs, since their
goal is the same—to provide secure and uninterrupted availability to high-
performance computing, storage, networking, and communications resources.
1.4 The ever-increasing density of equipment makes a data center environment more
challenging than a typical telecommunications room. For example, a single
equipment or cabling rack can house over 100 blade servers or serve as a
termination point for more than 3000 optical fibers. A data center can be designed
for hundreds or thousands of such racks.
1.5 As a result of this equipment and cabling density, data center design must place
particular emphasis on such factors as:
1.7 The investment of time and materials associated with data center deployment is
considerable. New servers, storage devices, network gear, and communications
links are routinely added to maintain or improve services, and these additions must
be made without interrupting existing operations.
1.8 To summarize, everything has to work when moving bits into, around, and out of the
data center—and the cabling infrastructure is where the bits move. While most
network equipment is replaced in 3 to 5 year cycles, cabling is expected to serve
multiple generations of devices over a 10 to 25 year period. A well-designed cabling
infrastructure provides the needed flexibility and scalability when reconfiguring or
expanding data center services.
1.9 Belden’s role as the premier cabling system innovator dates back over a century to
the beginning of electronic communications. Our corporate history spans the
introduction of the telephone, computers, and of course, structured cabling. We
continue to provide solutions to large-scale infrastructure projects around the world
sponsored by governments, financial organizations, telephone companies, research
centers, and other institutions, both private and public.
2.1 The contents of this document, Belden’s Data Center Cabling Guidelines, have been
updated to incorporate best practices guidance from the TIA as well as other
sources of expertise.
2.2 The scope of these guidelines is to aid in the design and implementation of
structured cabling systems in data centers and similar facilities, such as network
equipment or computer rooms. These guidelines are not intended to replace or
supersede any existing or future cabling standards or any existing applicable codes
and regulations.
2.3 We hope you find this document useful in planning your high-density infrastructure.
3.1 When discussing spaces within a commercial building, the largest allocation is made
for users and their work areas (WAs). In a data center environment, the equivalent
of the WA is the equipment rack, which may be enclosed and referred to as a
cabinet or an enclosure.
NOTE: The terms “rack” and “cabinet” are used interchangeably in this document
to refer to support frames for housing any combination of network
equipment, power units, and cabling connectivity (e.g., patch panels,
modular cords).
3.2 Although equipment cabinets are similar in appearance and dimensions, their
contents determine where they are located within the data center. The types of
network devices commonly associated with data center cabinets include:
• Servers
• Storage devices
• Switches (general network, server cluster, storage network, management)
• Load balancers
• Routers
• Special-purpose appliances (security, acceleration)
3.3 In a commercial building, WAs and floors are used to group and organize individuals
by function or department. Similarly, entrance facilities (EFs), equipment rooms
(ERs), telecommunications rooms (TRs), and telecommunications enclosures (TEs)
provide for the distribution of networking equipment and cabling infrastructure in
commercial buildings (see Figure 1).
3.4 The TIA has adopted a similar hierarchy for the deployment of equipment cabinets
within data centers. Cabinets are grouped together based on the function of their
contents. This makes it easier to manage and expand operations, since space is
allocated and reserved for each type of cabinet.
NOTE: The combined space allocated for the main distribution area (MDA),
horizontal distribution area (HDA), zone distribution area (ZDA), and
equipment distribution area (EDA) cabinets is referred to as the
Computer Room.
Figure 1: Distribution of spaces and cabling in a commercial building. Backbone
cables run from the entrance facility (EF) and equipment room (ER) on the first
floor to the TRs and TEs serving each floor; horizontal cables connect TRs and
TEs to WAs, through consolidation points (CPs) where used.

Figure: Distribution of spaces and cabling in a data center. Backbone cables run
from the EF and the support-area TR to the MDA and on to the HDAs; horizontal
cables (or zone area cords) connect the HDAs to the EDAs. Together, these areas
form the Computer Room.
4. Pathways
4.1 Three types of pathways are generally required to service network equipment
cabinets in a data center environment:
4.3 In some cases, network equipment vendors design their cabinet layouts in a manner
that enforces adequate separation between power and network cable. For example,
a server cabinet may be optimized for power cable entry from below and network
cable entry from above. Such an arrangement would require both raised floor and
overhead (ceiling-supported) pathways.
Example
4.5 The TIA recommends a row-based arrangement of cabinets in a data center, with
the fronts of equipment racks facing each other in one row (cold aisle with perforated
tiles) and the backs facing each other in both adjacent rows (hot aisles with non-
perforated tiles), as shown in Figure 5. In this arrangement, lower-density power
cable pathways are routed through cold aisles to optimize airflow and higher-density
network cable pathways are placed in the hot aisles. Similarly, cold air enters from
the front of the cabinets in the cold aisles and exits from the back of the cabinets in
the hot aisles. Air circulation can be passive or forced (e.g., using fans to pull in cold
air or expel hot air).
4.6 It is desirable to keep pathways separate even in cases where optical fiber network
cable is used and electrical interference is not an issue. For example, the setup and
reconfiguration of equipment cabinets is simplified when all power cables are
underfloor and all network cables are overhead.
4.7 In cases where both copper and optical fiber network cable share a pathway, the
two types of cable should be grouped separately. Whenever possible, optical fiber
cable should rest on top of copper cable to avoid excessive optical loss due to
mechanical stress on the fibers.
5.1 The terms cabinet, rack, and enclosure are all used to describe the support frame
for data center equipment and cabling. The current generation of cabinets can be
classified into two types, both specialized for data center environments:
NOTE: Cabinets dedicated to switching equipment are typically wider than server
or storage cabinets (e.g., 762 mm [30 in] vs. 610 mm [24 in]) to
accommodate the larger volume of cables and modular cords.
5.2 In addition to the alternating cold and hot aisle configuration described in the
previous section, TIA considerations for cabinets include the following:
6.1 Power consumption and energy costs are significant factors for any data center
owner. High-density components make it possible to group significant computing,
storage, or networking resources into a single cabinet. However, savings from
reduced space requirements are offset by the cost of maintaining acceptable
operating temperatures. Accordingly, watts per square foot (W/sq. ft) is a commonly
used measure to indicate data center operating cost and equipment density. Typical
values vary from 50 W/sq. ft to 300 W/sq. ft, with advice to plan for values as high as
500 W/sq. ft.
6.2 It is possible to configure a single blade server cabinet that requires 30,000 watts of
power to operate. However, the thermal engineering and cost of cooling for such a
setup make it impractical to consider this level of equipment density.
6.3 Estimates suggest that for each kilowatt of power used by cabinet equipment, a
second kilowatt is needed for cooling. Since data centers operate continuously, the
challenge for designers is to find the right balance between space and energy
consumption. If equipment is spread out over many cabinets, the cooling
requirements per cabinet are reduced, but the size of the space to be cooled
increases.
6.4 The following example illustrates the yearly cost associated with data center
powering and cooling for a single 10,000 watt (10 kilowatt) cabinet (assuming
powering and cooling costs are equal).
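A minimal sketch of the underlying arithmetic, assuming an illustrative
electricity rate of $0.10/kWh (substitute your actual rate):

    # Yearly powering and cooling cost for one cabinet.
    # Cooling is assumed equal to equipment power, per the 1:1 estimate above.
    EQUIPMENT_KW = 10.0          # cabinet equipment load (kW)
    COOLING_KW = EQUIPMENT_KW    # assumed equal cooling load (kW)
    RATE_PER_KWH = 0.10          # assumed electricity rate ($/kWh)
    HOURS_PER_YEAR = 24 * 365    # data centers operate continuously

    yearly_kwh = (EQUIPMENT_KW + COOLING_KW) * HOURS_PER_YEAR
    print(f"{yearly_kwh:,.0f} kWh/year, ${yearly_kwh * RATE_PER_KWH:,.2f}/year")
    # With these assumptions: 175,200 kWh and $17,520 per year.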
6.5 Historically, dc powering has been associated with telephone equipment and COs.
However, some data center equipment vendors promote it as a means of reducing
power consumption, equipment cooling, and rack space for power supplies.
The following are two examples of available power at a cabinet, using a single-phase
or a three-phase circuit:
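A minimal sketch of such a calculation, assuming a 208 V, 30 A circuit and an
80% continuous-load derating (illustrative values, not TIA figures):

    import math

    def available_watts(volts: float, amps: float, phases: int = 1,
                        derating: float = 0.8) -> float:
        """Usable cabinet power from a branch circuit, derated for continuous load."""
        if phases == 3:
            return volts * amps * math.sqrt(3) * derating  # line-to-line voltage
        return volts * amps * derating

    print(f"Single-phase 208 V / 30 A: {available_watts(208, 30):,.0f} W")
    print(f"Three-phase 208 V / 30 A: {available_watts(208, 30, phases=3):,.0f} W")
    # Roughly 4,992 W vs 8,646 W from circuits of the same amperage.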
6.6 Multiple vendors offer high-capacity network devices equipped with power supplies
rated at 4000, 6000, or 8000 watts. Alternatively, 40 single-unit (1U) rack-mount
servers equipped with 300 watt power supplies give a combined rating of 12,000 watts. Such
examples illustrate the need to carefully plan the contents and locations of high
power consumption cabinets in data centers.
Figure 10: Belden High-density Open Frame Enclosure with perforated cabinet doors
7.1 The bulk of the cabling infrastructure in a typical data center serves to connect end
devices (servers, storage arrays) to switches in a single-level or multi-level
hierarchy. This is comparable to most commercial buildings, where horizontal
cabling to user work areas makes up the largest quantity of premises cabling. The
biggest difference between the two environments is density.
Example:
A dense cabling system design for a 9.3 sq. m (100 sq. ft) office may specify six
cabling runs to accommodate all expected network devices for its occupant.
Compare this to a typical EDA equipment cabinet with 24 servers, each equipped
with 2 network interface cards (NICs), requiring 48 cabling runs to serve a 1.5 sq. m
(16 sq. ft) area.
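A quick calculation makes the density gap concrete (values taken from the
example above):

    # Cabling runs per square meter: office work area vs. EDA cabinet.
    office_runs, office_area_m2 = 6, 9.3    # dense office design
    cabinet_runs = 24 * 2                   # 24 servers x 2 NICs each
    cabinet_area_m2 = 1.5                   # EDA cabinet footprint

    print(f"Office:  {office_runs / office_area_m2:.1f} runs/sq. m")
    print(f"Cabinet: {cabinet_runs / cabinet_area_m2:.1f} runs/sq. m")
    # About 0.6 vs 32 runs per square meter, a roughly 50x increase.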
7.2 Simply terminating a large number of cables at a cabinet is not enough. Cables
represent the static side of a run, since they are not typically changed after initial
installation. The challenging part is the modular cords, which are constantly
accessed to make or change port connections (see Figure 11).
Figure: Data center distribution areas and their typical contents. Access
providers connect to the primary entrance facility (copper/fiber patch panels,
high-density IDC blocks, coax connectivity for DS-1 and DS-3 circuits, and
584 mm [23 in] racks for service provider equipment). Backbone cable links the
entrance facility to the MDA (762 mm [30 in] core equipment cabinets) and to the
TR serving the support area (workstation outlets, copper/fiber cable,
copper/fiber patch panels, and cordage) for offices, the operations center, and
support rooms.
• Copper backbone cabling channels (e.g., from a core switch in the MDA to a
second-tier switch cabinet in an HDA) shall not exceed 100 m (328 ft). The
HDA should be located at least 15 m (49 ft) from the MDA to minimize
interference caused by multiple connection points in close proximity.
PLEASE NOTE:
• Unlike copper cabling, allowable lengths for fiber backbone cabling channels
vary by application (e.g., 2 Gb/s vs 4 Gb/s Fibre Channel, Gigabit Ethernet
[GbE] vs 10 Gigabit Ethernet [10 GbE]). In general, 150 m to 300 m (492 ft to
984 ft) is specified by standards when using laser-optimized (LO) 50-µm
multimode fiber (OM3).
NOTE: Longer runs are possible for specific applications by using enhanced
(standards-exceeding) LO multimode fiber (e.g., 500 m [1640 ft] for
10 GbE) or singlemode (OS1, OS2) fiber (e.g., 10 km [6.2 mi] for
Fibre Channel). As well, adapter and switch vendors may offer
multiple fiber transceiver options—with varying distance limits—for
their products.
• If HDAs are omitted and cabling runs are from the MDA directly to an EDA, a
maximum channel length of 300 m (984 ft) is allowed for optical fiber cabling
and 100 m (328 ft) is allowed for copper cabling.
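These limits lend themselves to a simple planning check. The sketch below
encodes the copper and OM3 planning figures quoted above; it is illustrative
only and does not replace the per-application limits in the standards:

    # Planning check for backbone channel lengths (meters).
    # OM3 limits vary by application; 300 m is the upper figure quoted above.
    MAX_CHANNEL_M = {"copper": 100, "om3": 300}

    def channel_ok(media: str, length_m: float) -> bool:
        """True if a planned channel length is within the quoted limit."""
        return length_m <= MAX_CHANNEL_M[media]

    print(channel_ok("copper", 85))    # True: MDA-to-HDA copper within 100 m
    print(channel_ok("copper", 120))   # False: beyond 100 m, use fiber instead
    print(channel_ok("om3", 250))      # True: within the 300 m planning figure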
The choice of switch placement involves trade-offs in cabling quantity,
management flexibility, and switch port utilization (a rough comparison appears
in the sketch following this list):
• Centralized switching: All switches are grouped together in one area, and the
cabling infrastructure connects switch ports to servers/storage units. Uses the
most cabling. Most flexible for management (the entire data center is managed
as a single network entity). Switch port use is maximized (any port can connect
to any server/storage unit).
• End-of-row switching: Each row of server/storage cabinets has a set of
switches in a cabinet at the end of the row. Uses less cabling than the
centralized topology. Moderate flexibility for management (each row is managed
as a separate network entity). Some switch ports may remain unused (ports are
dedicated to the row).
• Cabinet-level switching: Each server/storage cabinet is equipped with its own
switch. Uses the least cabling. Least flexible for management (each cabinet is
managed as a separate network entity). Many switch ports may remain unused
(ports are dedicated to the cabinet).
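The cabling trade-off can be roughed out numerically. The row and cabinet
counts and average run lengths below are illustrative assumptions; only
server-to-switch runs are counted, and uplinks are ignored:

    # Rough total horizontal cabling for each switch placement option.
    rows, cabinets_per_row, ports_per_cabinet = 10, 10, 24
    ports = rows * cabinets_per_row * ports_per_cabinet  # 2,400 server ports

    AVG_RUN_M = {              # assumed average server-to-switch run lengths
        "centralized": 40,     # across the computer room to one switching area
        "end-of-row": 10,      # within the row to the end-of-row cabinet
        "cabinet-level": 2,    # within the cabinet to its own switch
    }

    for topology, run_m in AVG_RUN_M.items():
        print(f"{topology:14s}: {ports * run_m:>7,} m of cabling")
    # Centralized uses the most cabling and cabinet-level the least,
    # mirroring the flexibility and port-utilization trade-offs above.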
• Isolation of network electronics. After initial cabling to the cabinets, you would
not need to touch the equipment to make or change connections. The central
patch panels can even be in another area or room of the data center.
• Investment protection. Changing cabinet equipment would not require any
changes to the cabling system. The existing cabling would connect to the
port(s) in the new device and be managed as usual from the central patch
panels.
8.1 The two types of networks associated with data center implementations are Ethernet
for server networking and Fibre Channel for storage networking. While it is possible
to use copper cabling for Fibre Channel connections, the distance limitations (e.g.,
33 m [108 ft] at 1 Gb/s, 17 m [56 ft] at 2 Gb/s) make it impractical to consider copper
as a suitable medium for storage area networks (SANs).
8.2 In the current data center standard, which was published in 2005, the TIA
recommends Category 6 (CAT 6) cabling runs. Innovations in design and
manufacturing have since improved copper cabling performance, and the current
industry recommendation is to install Augmented Category 6 (CAT 6A), which is
expected to be specified in the next version of the data center standard.
8.3 Copper is suitable for server networking, where 100 m (328 ft) port-to-port (channel)
runs are possible for the full range of Ethernet data rates, including GbE and
10 GbE.
NOTE: The term 10GBASE-T is also used to describe 10GbE operations over
twisted-pair copper cabling.
8.4 Installed CAT 6 cabling infrastructure can support the operation of 10GBASE-T only
up to 55 m (180 ft) under certain conditions as detailed in TIA TSB-155. To
overcome these limitations, TIA is developing the standard for CAT 6A, to be
published as TIA/EIA-568-B.2-10. CAT 6A will support the operation of 10GBASE-T
over a maximum of 100 m (328 ft) of twisted-pair copper cabling with up to four
connectors, as well as the low-power “short-reach mode” version of 10GBASE-T
over a maximum of 30 m (98 ft) of twisted-pair copper cabling with up to two
connectors.
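The reach figures quoted above can be captured in a small lookup for planning
purposes (a sketch only; the CAT 6 figure is conditional, per TSB-155):

    # 10GBASE-T reach by cabling type (meters), from the figures above.
    REACH_10GBASE_T_M = {
        "CAT 6 (conditional, per TSB-155)": 55,
        "CAT 6A (up to four connectors)": 100,
        "CAT 6A short-reach mode (two connectors)": 30,
    }

    for cabling, reach_m in REACH_10GBASE_T_M.items():
        print(f"{cabling}: up to {reach_m} m")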
8.5 The need for 10 GbE is greatest in inter-switch links (ISLs), which are used to make
distributed switch ports function as a single switch. Instead of a one-time purchase
of a very large switch, multiple smaller GbE switches can be acquired as needed
and linked together using 10 GbE. In such cases, CAT 6 or CAT 6A can be used for
GbE links to servers and CAT 6A (or fiber) can be used for 10GbE ISLs.
Figure 20: Belden 10GX CAT 6A cable with RoundFleX technology
Figure 21: Belden 10GX CAT 6A IDC connecting block with X-Pair technology
Figure 23: Belden 10GX CAT 6A module with FleXPoint, MatriX IDC, and X-Bar technologies
Figure 24: Belden 10GX CAT 6A patch panel with 48 10GX modules
Figure 26: Belden equipment harnesses
9.1 Optical fiber cabling can be used exclusively or in conjunction with copper cabling in
the data center. Its advantages include electromagnetic interference (EMI) immunity,
flexibility, smaller cable and patch cord diameters, and allowable spans exceeding
100 m (328 ft). Its principal disadvantage is the higher cost of provisioning devices
with fiber interfaces instead of copper ones. In many cases, both in commercial and
data center environments, switches are provisioned with fiber ISL (or uplink) ports
and copper end-node ports.
9.2 Multiple types of optical fiber have been introduced over the years and the current
TIA recommendation is to use LO 50-µm multimode fiber for maximum flexibility.
Similarly, many types of fiber connectors are available, with LC small form factor
(SFF) connectors favored by equipment vendors for their small size (compared to
previous-generation products such as SC).
9.3 From an installation perspective, two options are generally considered for data
center fiber deployment:
Figure 27: Belden FiberExpress secure/keyed LC system
Figure 28: Belden FiberExpress pre-connectorized assemblies (FiberExpress Bar)
Figure 29: Belden Optimax field-installable connectors and tool kit
Figure 30: Belden FiberExpress Manager (FXM): 1U FXM shelf (left side, right side, and connector module views)
10.1 Unlike network devices, a cabling infrastructure does not arrive at a site fully
assembled. As well, the performance limits of a network may be raised months or
years after the initial installation (e.g., migration from GbE to 10 GbE networking).
10.2 The purpose of certification is to provide customers the assurance that their installed
cabling system performs as expected over time. A combination of manufacturing,
design, and installation expertise is required to deliver this assurance.
10.3 Belden’s IBDN® Structured Cabling System Certification Program provides
comprehensive training to our business partners and comprehensive
product/application warranties to our customers. The result is a 25-year extended
product warranty for individual components and a lifetime application assurance for
the installed cabling system as a whole.
• Your IBDN Certified System will support any current or future standards-defined
application that is designed to operate over the cabling system you have
purchased.
• If your IBDN Certified System is unable to support an approved current or
future application, Belden and its partners will correct the failure at no cost to
you, including materials and labor.